Architecture Portfolio · 2025–2026
An explainability-first, EU AI Act-compliant P2P architecture — designed for regulated enterprises. Six process modules. One agent swarm. Zero unaudited decisions.
Three things had to be true simultaneously for an autonomous procurement system to be architecturally viable: the regulatory framework had to be clear enough to design against, the agentic tooling had to be mature enough to execute multi-step procurement workflows, and the data infrastructure had to be fast enough to act on in real time.
In 2024–25, all three converged. The Autonomous Buyer (AB) is a response to that convergence — a complete architectural design that expresses each of the enabling factors below as a concrete engineering decision.
The EU AI Act entered into force in August 2024. AI systems used in procurement decisions — supplier scoring, contract risk assessment, spend anomaly detection — that materially affect business relationships are subject to transparency and oversight requirements. Compliance is no longer optional. This is the forcing function that turns "nice to have" HITL into a contractual necessity, and turns every autonomous procurement action into a decision that must be explainable before it executes.
Gemini 1.5 Pro's 1M token context window means an LLM can now read an entire supplier catalogue, a full RFx response portfolio, or a decade of contract history in a single pass — and reason across them. This collapses the boundary between structured ERP data and unstructured procurement documents — SOWs, framework agreements, supplier questionnaires — that has blocked intelligent procurement automation for fifteen years.
Google's Agent Development Kit (ADK), the Model Context Protocol (MCP), and the Agent-to-Agent (A2A) communication protocol gave the industry its first production-grade framework for multi-agent systems. For the first time, you can design a swarm of specialist procurement agents — Sourcing, Contract, PO, Match, Payment, Supplier Risk — each with formal state machines, tool contracts, and audit trails, without building the orchestration layer from scratch.
BigQuery, Vertex AI Feature Store, Pub/Sub, and Firestore now compose into a real-time enterprise data fabric that didn't exist in its current form three years ago. The gap between a procurement event — a goods receipt hitting the warehouse, a supplier invoice arriving — and an AI-driven 3-way match and payment instruction has collapsed from days to seconds. Tail spend is no longer invisible; it's a query.
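The event flow described above — a goods receipt landing and immediately triggering a match — can be sketched with a toy in-memory bus standing in for Pub/Sub. The topic name, payload fields, and handler are illustrative, not part of the actual design artifacts:

```python
from collections import defaultdict

# Minimal in-memory stand-in for the Pub/Sub event fabric: topics fan
# published procurement events out to subscribed handlers immediately.
class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subs[topic]:
            handler(event)

matches = []

def on_goods_receipt(event):
    # In the real design a Match agent would join PO, receipt, and invoice
    # via the feature store; here we only record that the match was triggered.
    matches.append({"po": event["po"], "status": "match-triggered"})

bus = EventBus()
bus.subscribe("goods-receipt", on_goods_receipt)
bus.publish("goods-receipt", {"po": "PO-4711", "qty": 100})
```

The point of the sketch is the latency model: the match is triggered by the event itself, not by a nightly batch job.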
Every ClaraVis procurement event — from a component requisition in Munich to a service contract in Singapore — requires sequential handoffs between Procurement, Legal, Finance, Logistics, Accounts Payable, and Compliance. Each handoff is a human, an email, and a 2–4 day delay. SAP Ariba handles strategic sourcing. SAP S/4HANA runs the ERP. A legacy AP platform manages invoices. None of them share a live event stream. The 3-way match is done manually in spreadsheets by a team of six.
Target state with the AB: P2P cycle under 3 days; invoice processing cost reduced by 78%; 3-way match straight-through rate above 90%.
ClaraVis's procurement analytics team built two predictive models — supplier risk scoring and spend anomaly detection — both of which influence supplier selection and payment decisions. Under EU AI Act Annex III and the GDPR's automated decision-making provisions (Article 22), these models require explainability, human oversight, and documented architecture. Neither model currently produces a human-reviewable explanation. Neither has a formal HITL checkpoint. A supplier flagged by the risk model and delisted had no recourse — no explanation, no appeal path.
The AB satisfies Article 14 by making human oversight a state machine node. Every high-risk supplier scoring inference routes through a named Procurement Manager — with a SHAP explanation and confidence score — before any supplier status change commits.
ClaraVis's supplier base spans 34 countries. The EU's Corporate Sustainability Reporting Directive (CSRD), the German Supply Chain Due Diligence Act (LkSG), and the EU Deforestation Regulation (EUDR) impose specific obligations to monitor and document Tier-1 and Tier-2 supplier compliance. Today this is done manually by a two-person team using spreadsheets and annual questionnaires. 63% of suppliers have not been assessed in the last 18 months. The regulatory exposure is unquantified.
Automated supplier compliance monitoring → continuous CSRD/LkSG scoring → HITL escalation on threshold breach → full audit trail per supplier per regulation.
These aren't best-practice guidelines. In the EU AI Act, GDPR, and supply chain regulatory environment, they are architectural constraints. Every component of the Autonomous Buyer must satisfy all four.
XAI is not a dashboard you add after the model ships. Every ML model in this system — supplier risk scoring, spend anomaly detection, 3-way match exception classification — is designed with its explanation contract upfront, before a single line of training code is written. SHAP values are generated at inference time, not retrospectively. Every procurement decision that touches a supplier relationship or a payment writes its explanation to the immutable audit log before any downstream action executes.
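A minimal sketch of that explanation contract is below: attributions are computed at inference time and written to the audit log before the score is returned to any caller. For a linear model, SHAP values reduce to w·(x − E[x]); the weights, baseline, and feature values here are invented for illustration only.

```python
audit_log = []  # stands in for the immutable audit store

def score_supplier(weights, baseline, features):
    """Score a supplier with per-feature attributions at inference time.

    For a linear model, the SHAP attribution of feature i is
    w_i * (x_i - E[x_i]); all numbers below are illustrative.
    """
    attributions = {k: weights[k] * (features[k] - baseline[k]) for k in weights}
    score = sum(attributions.values())
    # Contract: the explanation is persisted BEFORE any downstream action
    # can observe or act on the score.
    audit_log.append({"event": "supplier_score", "score": score,
                      "explanation": attributions})
    return score, attributions

w = {"late_deliveries": -0.8, "financial_health": 0.5}
base = {"late_deliveries": 2.0, "financial_health": 0.6}
score, expl = score_supplier(w, base,
                             {"late_deliveries": 5.0, "financial_health": 0.4})
```

Enforcing the ordering in code, rather than by convention, is what makes "explanation before execution" an architectural property instead of a policy.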
EU AI Act Article 14 defines meaningful human oversight as a designed mechanism — a specific point in the decision flow where a named human reviews the agent's reasoning, the SHAP explanation, and the confidence score, then chooses to approve, reject, or escalate. In this architecture, every HITL checkpoint is a formal state in the agent's state machine, with a defined entry condition, a presentation contract, a decision interface, a timeout behaviour, and an immutable audit record written before the agent proceeds.
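A stripped-down version of such a checkpoint is sketched below. The state names and confidence threshold are assumed for illustration, and timeout behaviour is omitted for brevity; the essential properties are that the HITL checkpoint is a real state, its entry condition is explicit, and the audit record is written before the transition out of it.

```python
from enum import Enum, auto

class State(Enum):
    SCORED = auto()
    AWAITING_HUMAN = auto()   # the formal HITL checkpoint state
    APPROVED = auto()
    ESCALATED = auto()

class SupplierDecision:
    HITL_THRESHOLD = 0.75  # illustrative entry condition

    def __init__(self, confidence):
        self.confidence = confidence
        self.audit = []
        # Entry condition: low-confidence inferences route through HITL.
        self.state = (State.AWAITING_HUMAN if confidence < self.HITL_THRESHOLD
                      else State.APPROVED)

    def human_review(self, approver, verdict):
        assert self.state is State.AWAITING_HUMAN
        # The audit record is written before the agent proceeds.
        self.audit.append({"approver": approver, "verdict": verdict,
                           "confidence": self.confidence})
        self.state = State.APPROVED if verdict == "approve" else State.ESCALATED

d = SupplierDecision(confidence=0.62)
d.human_review("procurement.manager@claravis.example", "approve")
```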
Most enterprise compliance tools are forensic — they tell you what happened after the fact. This architecture makes compliance a live property of every procurement transaction. CSRD supplier due diligence requirements, LkSG documentation obligations, GDPR automated-decision constraints, and EU AI Act Article 13 transparency requirements are encoded as immutable constraints in the data model and enforced by the write path — not checked by a quarterly report. The compliance audit trail is a by-product of normal operations, not a separate process.
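The "enforced by the write path" idea can be sketched as a write function that refuses any decision record arriving without its compliance payload. The required field names are an illustrative assumption, not the actual data model:

```python
REQUIRED_FIELDS = {"supplier_id", "decision", "explanation", "regulation_refs"}

class ComplianceViolation(Exception):
    pass

def write_decision(store, record):
    """Write path that rejects records missing their compliance payload."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ComplianceViolation(f"rejected write, missing: {sorted(missing)}")
    store.append(record)  # the audit trail is a by-product of the write itself

log = []
write_decision(log, {"supplier_id": "S-102", "decision": "payment_block",
                     "explanation": {"late_deliveries": -2.4},
                     "regulation_refs": ["GDPR Art. 22", "EU AI Act Art. 14"]})
try:
    write_decision(log, {"supplier_id": "S-103", "decision": "delist"})
except ComplianceViolation:
    pass  # a non-compliant decision never reaches the store
```

Because the constraint lives in the only code path that can persist a decision, a quarterly compliance check reduces to reading the operational log.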
The agent swarm does not run procurement. It runs the work procurement shouldn't be spending human time on: extracting clause types, classifying spend categories, matching invoices, routing exceptions, scoring suppliers, and preparing decisions for human review. Every module has a defined autonomy boundary — a set of actions below a risk threshold it can execute without asking, and a set above the threshold where it prepares the best possible brief for a human and waits.
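The autonomy boundary reduces to a threshold router: below the threshold the agent executes, above it the agent prepares a brief and waits. The threshold value and action names below are hypothetical:

```python
def route(action, risk_score, threshold=0.4):
    """Route an action across the autonomy boundary (values illustrative)."""
    if risk_score < threshold:
        return {"action": action, "mode": "auto-executed"}
    # Above the threshold: no execution, only the best possible human brief.
    return {"action": action, "mode": "queued-for-human",
            "brief": f"{action}: risk {risk_score:.2f} exceeds {threshold}"}

auto = route("classify-spend-category", 0.1)
held = route("delist-supplier", 0.9)
```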
Each layer has a single responsibility and a clean interface to the layer above and below it. XAI outputs and HITL checkpoints flow upward from the ML layer to the Presentation layer. Governance and audit constraints flow downward from policy into the Infrastructure layer. Nothing bypasses a layer. Nothing is ad-hoc.
The Agent layer and the ML layer are deliberately separated — agents carry state and orchestrate decisions; ML models produce inferences and explanations. Keeping them distinct makes each independently testable, deployable, and auditable. Layer 4 makes compliance physically un-bypassable.
↕ REST / gRPC / Pub/Sub events (Presentation ↔ Agent)
↕ gRPC inference calls · feature store reads · SHAP explanation writes (Agent ↔ ML)
↕ VPC-native · CMEK-encrypted · IAM-bound (Infrastructure)
Every architecture decision in this portfolio is production-grade. Every design artifact traces to a real enterprise requirement. The ClaraVis scenario is a representative composite of patterns from real regulated-industry procurement deployments.
Each capability below is a structural property of the design — an outcome of the architectural decisions made across the six modules, expressed in component specifications, and enforced by the infrastructure.
Agent topology defined using Google ADK. Each agent — Sourcing, Contract, PO, Match, Payment, Supplier Risk — carries a state machine specification, a tool manifest, a confidence threshold, a documented autonomy boundary, and a circuit-breaker configuration. Every component traces to a named GCP service or ADK construct.
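The shape of one such agent specification might look as follows. This is a hypothetical dataclass for illustration only, not the actual ADK API; all field values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Illustrative shape of an agent definition; not the real ADK construct."""
    name: str
    states: tuple
    tool_manifest: tuple
    confidence_threshold: float
    autonomy_boundary: str
    failure_window: int = 5          # circuit breaker: trip after N failures
    _failures: int = field(default=0, repr=False)

    def record_failure(self):
        # Returns True once the circuit opens and the agent must halt.
        self._failures += 1
        return self._failures >= self.failure_window

match_agent = AgentSpec(
    name="Match",
    states=("IDLE", "MATCHING", "EXCEPTION", "AWAITING_HUMAN", "DONE"),
    tool_manifest=("read_po", "read_receipt", "read_invoice", "write_match"),
    confidence_threshold=0.85,
    autonomy_boundary="auto-approve matches within tolerance bands only",
)
```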
Every component in every layer maps to a named GCP service, a Terraform resource, or an ADK agent definition. Architecture Decision Records document each significant choice alongside the alternatives considered. The architecture is buildable directly from its design artifacts.
EU AI Act, GDPR Article 22, CSRD, LkSG, and ISO 20400 obligations are encoded as write-path constraints. A compliance audit reads the operational log — there is no separate compliance database and no remediation process. Compliance is continuous and automatic.
Sourcing, contracting, purchase orders, 3-way match, invoice payment, and supplier performance — six modules sharing a common Pub/Sub event fabric, a common Vertex AI Feature Store, a common XAI contract, and a common HITL specification.
Supplier risk scoring with SHAP explanation contract. Spend anomaly detection using isolation forest with confidence-threshold HITL escalation. 3-way match exception classification with tolerance-band logic. Every model card specified before training begins.
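Of the three models, the tolerance-band logic is mechanical enough to sketch directly. The tolerance values below are illustrative, not the bands a real deployment would choose:

```python
def three_way_match(po, receipt, invoice, price_tol=0.02, qty_tol=0.0):
    """Classify a PO/receipt/invoice triple against tolerance bands.

    price_tol: allowed relative deviation of invoice price vs PO price.
    qty_tol:   allowed relative deviation of invoiced vs received quantity.
    """
    exceptions = []
    if abs(invoice["unit_price"] - po["unit_price"]) > price_tol * po["unit_price"]:
        exceptions.append("price_variance")
    if abs(invoice["qty"] - receipt["qty"]) > qty_tol * receipt["qty"]:
        exceptions.append("quantity_mismatch")
    return {"status": "matched" if not exceptions else "exception",
            "exceptions": exceptions}

ok = three_way_match({"unit_price": 10.0}, {"qty": 100},
                     {"unit_price": 10.1, "qty": 100})
bad = three_way_match({"unit_price": 10.0}, {"qty": 100},
                      {"unit_price": 12.0, "qty": 90})
```

In the full design, only the "exception" branch reaches the exception classifier and, where the risk warrants it, a human; everything inside the bands goes straight through.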
Every Architecture Decision Record states what was selected, what was considered, and the technical reasoning: Firestore over Spanner for agent state, Cloud Run over GKE for stateless match modules, SHAP over LIME for the XAI layer. The reasoning is the design.
Each regulation below imposes specific architectural constraints — not just documentation requirements. The design satisfies them structurally, not through post-hoc reporting.
| Regulation | Architectural Constraint It Imposes on the AB | Risk Level |
|---|---|---|
| EU AI Act — Annex III High-Risk AI Systems | Every ML inference affecting a supplier relationship or a payment must produce a human-readable SHAP explanation before any write operation. Named human approver required for high-risk supplier scoring decisions. Immutable audit trail mandatory. Risk management system documented and versioned. | High Risk |
| GDPR — Article 22 Automated Decision-Making | No fully automated decision that produces a legal or similarly significant effect on a supplier without a human in the loop. HITL checkpoint is architecturally mandatory for any supplier status change, delisting, or payment block. The supplier's right to an explanation is satisfied by the audit log itself, not a separate process. | High Risk |
| CSRD / LkSG / EUDR Supply Chain Due Diligence | Supplier due diligence obligations encoded as continuous monitoring constraints, not annual questionnaires. Every supplier onboarding and periodic review event writes to the CSRD compliance register atomically. LkSG risk assessments triggered automatically on threshold breach. EUDR deforestation checks built into the Sourcing Agent's tool manifest. | Moderate |
| ISO 20400:2017 Sustainable Procurement | Sustainability criteria encoded as first-class scoring dimensions in the Sourcing Agent's RFx evaluation logic. ESG supplier scores versioned alongside financial scores in the Feature Store. GreenOps scheduling aligns batch procurement workloads with low-carbon compute windows. | Moderate |
| GDPR / EU DPDP Data Protection | All supplier PII confined within the VPC-SC perimeter. CMEK encryption — ClaraVis holds the keys, not Google. Data residency enforced by GCP region constraints in Terraform. Right-to-erasure handled via Firestore document-level deletion with audit record preservation. | Native to Infra |
| ISO 27001 Information Security | BeyondCorp zero-trust: no implicit network trust. All service-to-service calls via Workload Identity. Secrets managed exclusively through Secret Manager — no hardcoded credentials anywhere in the codebase or IaC. Security posture continuously validated by Security Command Center. | Native to Infra |
The following pages take every principle above and express it as concrete architecture artifacts — agent state machines, ML pipeline designs, workflow simulations, and module-level technical specifications. Each page is independently readable. Together, they form a complete AI Solutions Architecture for autonomous P2P.