Architecture Portfolio · 2025–2026

The enterprise
that runs itself
reasons, acts, and accounts
for itself.

An explainability-first, human-supervised enterprise AI architecture — designed for regulated industries, from the business requirements layer through to production infrastructure. Six design phases. Eight intelligent modules. One coherent system.

6
Design Phases
Requirements → TOGAF → FRD/PRD → Agents → ML → Infrastructure
8
Intelligent Modules
Each with its own agent, ML model, XAI layer, and HITL checkpoint
3
Architecture Layers
Presentation · Agent & ML Platform · Infrastructure & Governance
0
Black-Box Decisions
Every ML inference is explainable. Every agent action is auditable.
Scroll to explore
EU AI Act Compliant · TOGAF 10 · Google Cloud · Vertex AI · ADK Multi-Agent · XAI / SHAP · Medical Devices · ISO 13485
01
Why Now
The architectural conditions for enterprise AI at scale
are now in place.

Three things had to be true simultaneously for an autonomous enterprise to be architecturally viable: the regulatory framework had to be clear enough to design against, the tooling had to be mature enough to build production systems with, and the data infrastructure had to be fast enough to act on in real time.

In 2024–25, all three converged. The Autonomous Enterprise is a response to that convergence — a complete architectural design that takes each of the four enabling factors below and expresses it as a concrete engineering decision.

FORCE — 01
Regulatory Clarity Arrived
The EU AI Act entered into force in August 2024. High-risk AI systems — including anything that touches medical devices, credit, or employment — now require explainability, human oversight, and documented architecture by law. Compliance is no longer optional, and it's no longer vague. This is the forcing function that turns "nice to have" XAI into a contractual necessity.
EU AI Act · Aug 2024
FORCE — 02
LLMs Reached Enterprise Grade
Gemini 1.5 Pro's 1M token context window means an LLM can now read an entire contract portfolio, a full Q2C history, or a decade of asset maintenance logs in a single pass — and reason across them. This isn't incremental. It collapses the boundary between structured enterprise data and unstructured documents that has blocked enterprise AI adoption for fifteen years.
Context Window · 2024
FORCE — 03
Agentic Tooling Matured
Google's Agent Development Kit (ADK), the Model Context Protocol (MCP), and the Agent-to-Agent (A2A) communication protocol gave the industry its first production-grade framework for multi-agent systems. For the first time, you can design a swarm of specialist agents with formal state machines, tool contracts, and audit trails — without building the orchestration layer from scratch.
ADK / MCP / A2A · 2025
FORCE — 04
Enterprise Data Fabric Is Ready
BigQuery, Vertex AI Feature Store, Pub/Sub, and Firestore now compose into a real-time enterprise data fabric that didn't exist in its current form three years ago. The gap between an event in the field — a sensor anomaly on an MRI unit, a contract clause triggering a price adjustment — and an AI-driven response has collapsed from days to seconds.
GCP Data Platform · 2024–25
02
The Anchor Problem
Anchor Client · Medical Imaging OEM
ClaraVis Medical Systems
Munich, Germany · €1.2B Revenue · 4,200 Employees
MRI & CT Imaging Portfolio · Installed Base: 12,000+ Units · 34 Countries
47
Days avg. CPQ cycle
€40M
Annual warranty over-reserve
3.2×
Reactive vs predictive maint. cost
9
Stakeholders per deal
PAIN 01
Manual Quote-to-Cash Across 9 Silos
Every MRI configuration passes through sequential handoffs across nine functional silos, including Sales, Applications Engineering, Legal, Service, Finance, Revenue Recognition, Logistics, and Post-Sales. Each handoff is a human, an email, and a 3–5 day delay. The system wasn't designed — it accumulated. Nobody owns the end-to-end, so nobody can fix it.
Target with AE: CPQ cycle under 9 days. Revenue close reduced by 31 days.
PAIN 02
EU AI Act Exposure on Every ML Decision
ClaraVis's existing data science team built three predictive models — revenue recognition, asset failure prediction, and contract risk scoring — all of which are classified as high-risk under EU AI Act Annex III. None of them currently produce a human-reviewable explanation. None have a documented human oversight checkpoint. A compliance audit in Q3 2025 flagged all three.
The AE satisfies Article 14 by making human oversight a state machine node. Every high-risk inference routes through a named approver — with a SHAP explanation and confidence score presented — before any write operation commits.
PAIN 03
12,000 Units. No Unified Telemetry.
Installed MRI units send DICOM service events, error codes, and utilisation data to 6 different regional systems with no common schema. Predictive maintenance is impossible. Field service is reactive. The €40M warranty reserve exists because nobody can predict failures — so finance provisions for the worst case every quarter.
Unified asset event pipeline → RUL prediction model → 40% reduction in unplanned downtime.
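A unified pipeline like the one targeted above starts with a common event schema that every regional system normalises into. As a minimal sketch — the class, field, and source-system names are illustrative, not ClaraVis's actual schema — it could look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class EventType(Enum):
    SERVICE = "service"        # DICOM service event
    ERROR = "error"            # device error code
    UTILISATION = "utilisation"

@dataclass(frozen=True)
class AssetEvent:
    """One record in the unified asset event schema."""
    unit_id: str               # installed-base unit, e.g. "MRI-DE-0042"
    event_type: EventType
    source_system: str         # regional system the raw record came from
    occurred_at: datetime
    payload: dict = field(default_factory=dict)

def normalise(raw: dict, source_system: str) -> AssetEvent:
    """Map a regional record into the common schema.
    The raw field names ('serial', 'kind', 'ts') are hypothetical."""
    return AssetEvent(
        unit_id=raw["serial"],
        event_type=EventType(raw["kind"]),
        source_system=source_system,
        occurred_at=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        payload=raw.get("data", {}),
    )
```

Once six regional adapters each emit `AssetEvent` records onto a common bus, fleet-level RUL training becomes a single query instead of six incompatible extracts.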
03
Design Philosophy

Four principles.
Non-negotiable in a regulated context.

These aren't best-practice guidelines. In the EU AI Act and FDA regulatory environment, they are architectural constraints. Every component of the AE must satisfy all four.

I
PRINCIPLE — 01
Explainability engineered in — from model design to audit trail
XAI is not a dashboard you add after the model ships. Every ML model in this system is designed with its explanation contract upfront — before a single line of training code is written. The explanation must be human-readable, must identify the top features driving each decision, and must be written to the audit log before any downstream action is taken. SHAP values are generated at inference time, not retrospectively. Model Cards document intended use, known limitations, and bias analysis — and they are versioned alongside the model in the registry.
In practice: When the revenue recognition model classifies a ClaraVis MRI transaction as a lease vs a sale, the Finance Controller sees: the top 5 contract features that drove the classification, the confidence score, and a one-click override with mandatory reason code before the entry posts to the GL.
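The explanation contract described above can be sketched as a small data structure that every inference must carry, with the audit write sequenced strictly before any downstream action. This is an illustrative stdlib sketch — the class names are hypothetical, and the attribution dict stands in for per-inference SHAP values:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Explanation:
    """The explanation contract every inference must satisfy."""
    model_id: str
    prediction: str
    confidence: float
    attributions: dict            # feature -> signed contribution (SHAP-style)

    def top_features(self, n: int = 5) -> list:
        """The n features that drove the decision, by absolute weight."""
        return sorted(self.attributions.items(),
                      key=lambda kv: abs(kv[1]), reverse=True)[:n]

AUDIT_LOG: list = []              # stands in for the immutable audit sink

def record_then_act(exp: Explanation, act) -> None:
    """Write the explanation to the audit log BEFORE the downstream action."""
    AUDIT_LOG.append({"model": exp.model_id,
                      "prediction": exp.prediction,
                      "confidence": exp.confidence,
                      "top_features": exp.top_features()})
    act(exp)
```

The ordering inside `record_then_act` is the contract: if the audit write fails, the action never runs, so no unexplained decision can reach the GL.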
II
PRINCIPLE — 02
Human oversight is a first-class state machine node, specified before implementation
EU AI Act Article 14 defines meaningful human oversight as a designed mechanism — a specific point in the decision flow where a named human reviews the agent's reasoning, the SHAP explanation, the confidence score, and chooses to approve, reject, or escalate. In this architecture, every HITL checkpoint is a formal state in the agent's state machine, with a defined entry condition, a presentation contract, a decision interface, a timeout behaviour, and an immutable audit record written before the agent proceeds.
In practice: The Contract Guard agent autonomously extracts and classifies 200+ clause types. When it flags a liability cap as non-standard, it pauses, surfaces the clause with its risk score and similar past precedents, and waits for a Legal reviewer to approve before any counter-proposal is drafted.
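A HITL checkpoint modelled as a formal state machine node — entry condition, decision interface, timeout behaviour, audit record before the agent proceeds — can be sketched as follows. All names here are illustrative, not ADK constructs:

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    AUTO = auto()            # below threshold: agent proceeds on its own
    AWAITING_HUMAN = auto()  # the HITL node itself
    APPROVED = auto()
    REJECTED = auto()
    ESCALATED = auto()

@dataclass
class HitlCheckpoint:
    """One HITL node: entry condition, decision interface, timeout behaviour."""
    risk_threshold: float                     # entry condition
    timeout_action: State = State.ESCALATED   # if no reviewer decides in time

    def enter(self, risk_score: float) -> State:
        """Only decisions above the risk threshold pause for a human."""
        return (State.AWAITING_HUMAN
                if risk_score >= self.risk_threshold else State.AUTO)

    def decide(self, decision: str, audit_log: list) -> State:
        """Record the human decision immutably before the agent may proceed."""
        state = {"approve": State.APPROVED,
                 "reject": State.REJECTED,
                 "escalate": State.ESCALATED}[decision]
        audit_log.append({"checkpoint": "liability-cap-review",
                          "decision": decision})
        return state
```

Because the checkpoint is a state, not a notification, the agent cannot continue without passing through it — the pause is structural, not advisory.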
III
PRINCIPLE — 03
Compliance obligations encoded as write-path constraints, satisfied continuously
Most enterprise compliance tools are forensic — they tell you what happened after the fact. This architecture makes compliance a live property of every transaction. ASC 606 revenue recognition rules, EU AI Act Article 13 transparency requirements, and ISO 13485 device record requirements are encoded as immutable constraints in the data model and enforced by the write path — not checked by a monthly report. Every event is tagged with the regulatory obligation it satisfies at the time of writing. The compliance audit trail is a by-product of normal operations, not a separate process.
In practice: Every ClaraVis MRI device shipment event writes simultaneously to the asset register, the revenue recognition pipeline, and the ISO 13485 device history record — in one atomic transaction with a single regulatory tag set.
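A write-path constraint of this kind can be sketched as a gate that validates the regulatory tag set before any sink is touched, so a rejected event leaves all three records untouched. The tag names and sink layout below are illustrative stand-ins for the real data model:

```python
REQUIRED_TAGS = {"ASC606", "ISO13485"}   # illustrative obligation set

class ComplianceError(Exception):
    """Raised when a write lacks its regulatory tag set."""

def atomic_shipment_write(event: dict, sinks: dict) -> None:
    """Append one shipment event to every sink, or to none.
    The tag check runs before any write, so a rejected event
    leaves the asset register, RevRec pipeline, and device
    history record all unchanged."""
    tags = set(event.get("regulatory_tags", []))
    missing = REQUIRED_TAGS - tags
    if missing:
        raise ComplianceError(f"missing regulatory tags: {sorted(missing)}")
    for sink in sinks.values():
        sink.append(event)
```

Enforcing the check in the write path (rather than a monthly report) is what makes the audit trail a by-product: every committed event already carries the obligation it satisfies.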
IV
PRINCIPLE — 04
Augment the enterprise — never replace its judgment
The agent swarm does not run the enterprise. It runs the work the enterprise shouldn't be spending human time on: extracting data, transforming it, routing it, flagging anomalies, generating options, and preparing decisions for human review. Every module has a defined autonomy boundary — a set of actions below a risk threshold it can execute without asking, and a set above the threshold where it prepares the best possible brief for a human and waits. Replacing human judgment in a medical device company is not the goal. Making it faster, better-informed, and fully documented is.
In practice: The CCAI Sales Agent handles the first 11 turns of an inbound MRI inquiry — qualification, product fit, configuration options, pricing estimate — fully autonomously. Turn 12, where deal-specific commercial terms are discussed, automatically escalates to a Senior Account Executive with a full briefing document the agent prepared.
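An autonomy boundary of this shape reduces to a simple router: below the risk threshold the agent executes; above it, the agent prepares a brief and waits. A minimal sketch, with hypothetical action names and thresholds:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyBoundary:
    """Actions below the risk threshold execute autonomously;
    above it, the agent briefs a named human and waits."""
    threshold: float

    def route(self, action: str, risk: float) -> dict:
        if risk < self.threshold:
            return {"mode": "execute", "action": action}
        return {"mode": "brief_human",
                "action": action,
                "brief": (f"Agent paused '{action}' at risk {risk:.2f}; "
                          f"awaiting approval with prepared briefing.")}
```

The point of making the boundary a declared value rather than buried logic is that it becomes reviewable: Legal and Compliance can read, audit, and tune one number per module.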
04
The Architecture

Three layers.
One coherent contract.

Each layer has a single responsibility and a clean interface to the layer above and below it. XAI outputs and HITL checkpoints flow upward from the ML layer to the Presentation layer. Governance and audit constraints flow downward from policy into the Infrastructure layer. Nothing bypasses a layer. Nothing is ad-hoc.

This is a concept overview. The full technical design — GCP reference architecture, Terraform IaC, ADK agent topology, and Vertex AI pipeline specs — is developed in Phase 2 (TOGAF D) and Phase 6 of the design process.

The three-layer model is deliberately borrowed from classic enterprise architecture thinking — but updated for the agentic era. Layer 1 is what users see and interact with, including the HITL approval surfaces. Layer 2 is where intelligence lives: the agent swarm, the ML models, the XAI pipeline, the event bus. Layer 3 is where trust is enforced: zero-trust networking, encrypted storage, IAM, immutable audit logs, and the IaC that makes all of it reproducible and auditable.

The insight is that in a regulated enterprise, trust cannot be a property of the application — it must be a property of the infrastructure. If the infrastructure doesn't enforce it, any application can violate it. Layer 3 makes compliance physically un-bypassable.

LAYER 01
Presentation & Experience
Portfolio Site · 8 App Dashboards · HITL Approval UI · XAI Explanation Viewer · Architecture Explorer · Audit Trail Dashboard · React · TypeScript
↕   REST / gRPC / Pub/Sub events
LAYER 02
Agent & ML Platform
ADK Multi-Agent Swarm · CCAI Sales Agent · Vertex AI Pipelines · Feature Store · SHAP / XAI Layer · Model Registry · A2A Protocol · MCP Tool Manifest · Pub/Sub Orchestration · Drift Detection
↕   VPC-native · CMEK-encrypted · IAM-bound
LAYER 03
Infrastructure & Governance
Terraform IaC · GKE · Cloud Run · VPC-SC Zero-Trust · CMEK Encryption · BeyondCorp · IAM · Workload Identity · Immutable Audit Log · Cloud Build CI/CD · GreenOps Scheduling · FinOps Cost Tags
Six things the architecture
delivers — by design.
Every architecture decision in this portfolio is production-grade. Every design artifact traces to a real enterprise requirement. The ClaraVis scenario is a representative composite of patterns from real regulated-industry AI deployments.

Each capability below is a structural property of the design — an outcome of the architectural decisions made across the six phases, expressed in component specifications, and enforced by the infrastructure.
A formally specified multi-agent system
Agent topology defined using Google ADK. Each agent carries a state machine specification, a tool manifest, a confidence threshold, a documented autonomy boundary, and a circuit-breaker configuration. Every component traces to a named GCP service or ADK construct.
An executable architecture
Every component in every layer maps to a named GCP service, a Terraform resource, or an ADK agent definition. Architecture Decision Records document each significant choice alongside the alternatives considered. The architecture is buildable directly from its design artifacts.
Compliance as a structural property
EU AI Act, FDA 21 CFR 820, and ASC 606 obligations are encoded as write-path constraints. A compliance audit reads the operational log — there is no separate compliance database and no remediation process. Compliance is continuous and automatic.
Full enterprise scope on a common data fabric
Sales, contracting, asset management, revenue recognition, risk monitoring, and infrastructure governance — eight modules sharing a common Pub/Sub event fabric, a common Vertex AI Feature Store, a common XAI contract, and a common HITL specification.
Domain-specific ML patterns
ASC 606 hybrid recognition rules encoded as classification logic. CQRS event sourcing for immutable audit trails. Two-tier predictive maintenance — RUL prediction at fleet level, anomaly detection at unit level. SHAP explanation contract per model, specified before training begins.
Documented tradeoffs at every decision point
Every Architecture Decision Record states what was selected, what was considered, and the technical reasoning: Firestore over Spanner for agent state, Cloud Run over GKE for stateless modules, SHAP over LIME for the XAI layer. The reasoning is the design.
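The per-agent contract described above — state machine specification, tool manifest, confidence threshold, circuit-breaker configuration — can be captured as a declarative record. This is an illustrative sketch, not the ADK API; every field name here is an assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSpec:
    """Declarative per-agent contract (illustrative fields, not ADK API)."""
    name: str
    states: tuple                  # state machine nodes, incl. the HITL node
    tools: tuple                   # tool manifest entries
    confidence_threshold: float    # below this, route to human review
    max_failures: int = 3          # circuit breaker: trip after N tool failures

    def needs_review(self, confidence: float) -> bool:
        return confidence < self.confidence_threshold

# A hypothetical spec for the contract-analysis agent:
contract_guard = AgentSpec(
    name="ContractGuard",
    states=("EXTRACT", "CLASSIFY", "AWAITING_HUMAN", "DONE"),
    tools=("clause_extractor", "precedent_search"),
    confidence_threshold=0.8,
)
```

Keeping the spec declarative and frozen means the swarm topology is diffable and versionable alongside the ADRs that justify it.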
05
Regulatory Grounding

Compliance is not
a layer. It is the foundation.

Each regulation below imposes specific architectural constraints — not just documentation requirements. The design satisfies them structurally, not through post-hoc reporting.

Regulation
Architectural Constraint It Imposes on the AE
Risk Level
EU AI Act — Annex III
High-Risk AI Systems
Every ML inference must produce a human-readable explanation before any write operation. Named human approver required for high-risk decisions. Immutable audit trail mandatory. Risk management system documented and versioned.
High Risk
FDA 21 CFR 820
Quality System Regulation
Device History Records must be created atomically with each shipment event. Software validation records required for any software affecting device safety. Change control enforced at the infrastructure layer via Terraform state management.
High Risk
ISO 13485:2016
Medical Devices QMS
Traceability from customer order through to installed device, service history, and decommissioning. Every module writes to a shared Device Master Record. Post-market surveillance data from field telemetry feeds back into risk model retraining pipeline.
Moderate
ASC 606 / IFRS 15
Revenue Recognition
Recognition rules encoded as immutable constraints in the write path — applied at transaction time, before any downstream posting. Each transaction tagged with the performance obligation it satisfies at write time. Journal entries require an approved HITL record. Post-hoc reclassification requires a full audit trail.
Moderate
GDPR
Data Protection
All PII and PHI confined within VPC-SC perimeter. CMEK encryption — ClaraVis holds the keys, not Google. Data residency enforced by GCP region constraints in Terraform. Right-to-erasure handled via Firestore document-level deletion with audit record preservation.
Native to Infra
ISO 27001
Information Security
BeyondCorp zero-trust: no implicit network trust. All service-to-service calls via Workload Identity. Secrets managed exclusively through Secret Manager — no hardcoded credentials anywhere in the codebase or IaC. Security posture continuously validated by Security Command Center.
Native to Infra
Continue the Architecture
The concept is clear.
Now comes the design.
The following pages take every principle above and express it as concrete architecture artifacts — buyer requirements, TOGAF phases, agent topology, ML pipeline design, and infrastructure code. Each page is independently readable. Together, they form a complete AI Solutions Architecture portfolio.
PG 02
ClaraVis — Client Brief & Requirements
BRD · Stakeholder Map · AI Readiness Audit · Use Case Catalogue
In Design
PG 03
TOGAF ADM — Phases A through F
Architecture Vision · Business · Data/App · Technology · Migration
In Design
PG 04
Agent Swarm Architecture
ADK · A2A · MCP · State Machines · Guardrails
Coming
PG 05
ML Engineering & MLOps
Feature Store · Model Cards · XAI · Vertex Pipelines · Drift Detection
Coming
PG 06
Infrastructure & GCP Architecture
Terraform · VPC-SC · GKE · CI/CD · FinOps · GreenOps
Coming
PG 07
The AE Suite — 8 Intelligent Modules
ContractGuard · FinRisk · RevRec · AssetIQ · GreenOps · and more
Coming