The Autonomous Enterprise / Page 04

Delivery & Product Design
SAFe Solution Train · Personas · FRD · HITL Specification

Two questions answered on one page. How does the AE get delivered across teams? That is the SAFe delivery governance layer. And what does each team actually build? That is the product design layer that bridges architecture to implementation.

SAFe 6.0 · Solution Train · 4 Agile Release Trains · 5 Buyer Personas · 8 Functional Requirements · 11 HITL Checkpoints
SAFe 6.0 · Solution Train

TOGAF defines what to build.
SAFe defines how teams build it.

The TOGAF ADM produced the architecture. The SAFe Solution Train is the delivery governance model that organises the teams that implement it. They are complementary frameworks operating at different altitudes — architecture defines the target state, SAFe defines the cadence, coordination, and cross-team dependency management that gets the enterprise there.

TOGAF ADM
Architecture layer
Defines the target state — what systems, what data model, what technology choices, what principles. Produces Phase D artifacts that become the architecture runway for each ART.
SAFe SOLUTION TRAIN
Delivery layer
Organises four ARTs across the capability domains. Coordinates cross-cutting enablers. Manages dependencies across team boundaries. Aligns delivery cadence to migration horizons from TOGAF Phase F.
ARCHITECTURE RUNWAY
The connection
TOGAF Phase D artifacts — GCP reference architecture, canonical data model, ADRs — are delivered as architecture runway features in the Platform ART backlog. Each ART pulls from the runway as needed, eliminating architecture-as-blocker.
ART Topology

Four Agile Release Trains.
One Solution Train.

Each ART owns a capability domain from the Phase B Business Architecture. The Platform ART is the foundation — all other ARTs depend on its shared enablers. Cross-cutting concerns (HITL, XAI, data fabric, security) are explicit features in the Platform ART backlog, not implicit assumptions in each domain ART.

Commercial ART
Quote-to-Cash
CCAI Sales Agent
Qualification · Config · CPQ · Escalation
Horizon 3
ContractGuard
Clause scoring · Risk · Legal HITL
Horizon 2
Depends on:
HITL Framework · Salesforce REST API · Document AI · Gemini 1.5 Pro
Financial ART
Revenue & Risk
RevRec AI
ASC 606 · SHAP · Finance HITL · SAP write
Horizon 2
FinRisk Sentinel
Anomaly detection · Real-time alerts
Horizon 2
Depends on:
XAI Layer · HITL Framework · SAP integration · BigQuery
Operations ART
Asset & Sustainability
Asset IQ
RUL prediction · Anomaly · Maintenance HITL
Horizon 2
GreenOps Platform
Carbon-aware scheduling · ESG metrics
Horizon 3
Depends on:
Pub/Sub event fabric · Vertex AI Pipelines · Feature Store
Platform ART ★ Foundation
Data · Infra · Governance
Data Governance
Quality · Lineage · Schema validation
Horizon 1
Strategy Dashboard
C-suite unified view · BigQuery-backed
Horizon 3
Shared Enablers
HITL · XAI · Pub/Sub · VPC-SC · CMEK · IAM
Horizon 1
All other ARTs depend on Platform ART deliverables before they can build.
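The build-order constraint above can be sketched with Python's standard-library graphlib. The dependency sets are read off the "Depends on" lists in this topology; this is an illustrative sketch, not a delivery tool.

```python
# Sketch of the Solution Train dependency ordering. The dependency sets
# below are taken from the ART topology on this page (illustrative only).
from graphlib import TopologicalSorter

# Each ART maps to the set of ARTs whose deliverables it needs before building.
art_dependencies = {
    "Platform ART": set(),               # Horizon 1 foundation, no upstream ART
    "Commercial ART": {"Platform ART"},  # HITL Framework, Document AI, ...
    "Financial ART": {"Platform ART"},   # XAI Layer, SAP integration, ...
    "Operations ART": {"Platform ART"},  # Pub/Sub fabric, Feature Store, ...
}

build_order = list(TopologicalSorter(art_dependencies).static_order())
# Platform ART must come first; the three domain ARTs can then run in parallel.
print(build_order)
```

Any cycle introduced into the dependency map (for example a Platform feature that itself waits on a domain ART) would raise a `CycleError` here, which is exactly the class of cross-team dependency problem the Solution Train exists to surface early.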
PI Cadence — Aligned to TOGAF Phase F Migration Horizons
PI-1
Foundation
PI-2
Compliance
PI-3
Core modules
PI-4
Asset + RevRec
PI-5
FinRisk + MLOps
PI-6
Sales Agent
PI-7
GreenOps
PI-8
Dashboard
PI-9
Cert. readiness
Horizon 1 — Foundation & Compliance (PI-1–2)
Horizon 2 — Core Modules (PI-3–5)
Horizon 3 — Full Suite (PI-6–9)
Cross-cutting Enablers

Four enablers every ART depends on — owned by Platform.

Cross-cutting concerns that span multiple ARTs must be explicitly owned and explicitly governed. In the AE Solution Train, these four enablers live in the Platform ART backlog and are released as shared capabilities before the dependent ARTs can build. This is what makes the Solution Train coherent rather than four independent teams building in parallel and discovering integration problems at the end.

Enabler 01
HITL Framework
The Firestore-backed state machine that every agent uses for human oversight checkpoints. Defines the state machine contract, the presentation interface, the decision record schema, and the timeout and escalation behaviour. No agent can implement a HITL checkpoint without this enabler being available.
All ARTs depend on this
Commercial ART · Financial ART · Operations ART
AR-02 · ADR-004 · EU AI Act Art. 14
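A minimal sketch of the checkpoint contract described above, assuming illustrative state names, field names, and SLA rules (the real contract is the Firestore schema defined in ADR-004). It shows the two properties the enabler guarantees: transitions are validated against an allow-list, and the decision record is appended before the state flips.

```python
# Hedged sketch of a HITL checkpoint as a state machine. All names here
# (states, fields, SLA values) are assumptions for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum

class HitlState(Enum):
    PENDING_REVIEW = "pending_review"
    APPROVED = "approved"
    OVERRIDDEN = "overridden"
    ESCALATED = "escalated"

# Legal transitions; terminal states allow none.
ALLOWED = {
    HitlState.PENDING_REVIEW: {HitlState.APPROVED, HitlState.OVERRIDDEN, HitlState.ESCALATED},
    HitlState.APPROVED: set(),
    HitlState.OVERRIDDEN: set(),
    HitlState.ESCALATED: set(),
}

@dataclass
class HitlCheckpoint:
    checkpoint_id: str
    sla: timedelta
    opened_at: datetime
    state: HitlState = HitlState.PENDING_REVIEW
    decisions: list = field(default_factory=list)  # append-only decision records

    def transition(self, new_state, reviewer, reason=None):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        # Decision record is written before the state flips, mirroring the
        # audit-first rule in the specification.
        self.decisions.append({"to": new_state.value, "by": reviewer,
                               "reason": reason, "at": datetime.now(timezone.utc)})
        self.state = new_state

    def sla_breached(self, now):
        # Only an open checkpoint can breach its SLA.
        return self.state is HitlState.PENDING_REVIEW and now - self.opened_at > self.sla

cp = HitlCheckpoint("HITL-04", sla=timedelta(hours=4),
                    opened_at=datetime.now(timezone.utc))
cp.transition(HitlState.APPROVED, reviewer="finance-controller")
```

Timeout and escalation behaviour would be a scheduled job polling `sla_breached` over open checkpoints; that part is deliberately omitted here.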
Enabler 02
XAI Layer
The SHAP explanation pipeline that every ML model in the AE uses to produce human-readable feature attributions at inference time. Implements the explanation contract for each model, writes SHAP values to the BigQuery audit dataset before any downstream action executes, and generates the explanation object surfaced in the HITL UI.
Financial + Operations ARTs
Financial ART · Operations ART · Commercial ART
AR-01 · ADR-005 · EU AI Act Art. 13
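For intuition on what the explanation contract carries: in the special case of a linear model, a feature's SHAP value reduces to its coefficient times its deviation from the baseline. The sketch below uses that closed form to build the kind of explanation object the enabler writes to the audit dataset; the feature names and record layout are assumptions, not the production schema.

```python
# Toy attribution sketch: for a linear model, the SHAP value of feature i
# is coef[i] * (x[i] - baseline[i]). Record layout is an assumption.
def explain_linear(coefs, x, baseline, feature_names):
    contributions = {
        name: coefs[i] * (x[i] - baseline[i])
        for i, name in enumerate(feature_names)
    }
    # Top features by absolute contribution, as surfaced in the HITL UI.
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {"attributions": contributions, "top_features": top}

record = explain_linear(
    coefs=[2.0, -1.0, 0.5],
    x=[3.0, 4.0, 1.0],
    baseline=[1.0, 4.0, 1.0],
    feature_names=["liability_cap_ratio", "clause_length_dev", "law_mismatch"],
)
# liability_cap_ratio dominates: 2.0 * (3.0 - 1.0) = 4.0
```

Tree models need the SHAP library's TreeExplainer rather than this closed form, but the resulting record shape (per-feature signed contributions plus a ranked top list) is the same contract.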
Enabler 03
Data Fabric
The BigQuery data fabric, Pub/Sub event bus, and Vertex AI Feature Store that all modules share. Defines the canonical schema for the six shared entities (Contract, Transaction, Device, Asset Event, Agent Action, HITL Event), the event topic structure, and the feature store entity definitions. The shared data model that makes cross-module intelligence possible.
All ARTs depend on this
All ARTs · All Modules
AR-11 · ADR-006 · Phase C Data Model
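The canonical schema for the shared entities is what makes a Pub/Sub message from any module safe to consume. A minimal sketch of that validation step, with an assumed field set for one entity (the real definitions live in the Phase C data model):

```python
# Sketch of canonical-schema validation for one shared entity. The field
# set for "HITL Event" is an assumption, not the real Phase C data model.
CANONICAL_SCHEMAS = {
    "hitl_event": {"event_id": str, "checkpoint_id": str, "module": str,
                   "decision": str, "reviewer": str, "occurred_at": str},
}

def validate(entity_type, payload):
    """Return (ok, errors) for a message against its canonical schema."""
    schema = CANONICAL_SCHEMAS[entity_type]
    errors = [f"missing field: {f}" for f in schema if f not in payload]
    errors += [f"bad type for {f}" for f, t in schema.items()
               if f in payload and not isinstance(payload[f], t)]
    return (not errors, errors)

ok, errs = validate("hitl_event", {
    "event_id": "e-1", "checkpoint_id": "HITL-04", "module": "RevRec AI",
    "decision": "approve", "reviewer": "fc-01",
    "occurred_at": "2026-01-01T00:00:00Z",
})
```

In production this role is typically played by a schema registry attached to the Pub/Sub topic; the point of the sketch is only that validation happens against one shared definition, not per-module copies.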
Enabler 04
Security & Compliance
VPC-SC perimeter, CMEK key management, IAM policy baseline, and Workload Identity Federation configuration. Provisioned via Terraform before any other ART begins building. No module deploys to production without the security baseline in place. Includes the Organisation Policy constraints that enforce data residency at the infrastructure layer.
All ARTs — pre-condition for build
Platform ART — H1
P-06 · P-11 · AR-06 · AR-07
Product Design Layer — Personas · User Journey · FRD · HITL Specification
Buyer Personas

Five personas. The same people as the stakeholders — described as users.

The stakeholder register on Page 02 captured who has sign-off authority. These persona cards capture who actually uses the AE day-to-day — their goals, frustrations, and what success looks like for them. Every user story in the FRD is written for one of these five people.

EA
The Architect
Enterprise Architect · IT & Digital Transformation · S-08
"I need to know that every design decision has a documented reason — and that I can trace any running component back to an architecture artifact."
Goals
Maintain a coherent architecture across Salesforce, SAP, and the AE without creating a third system of record
Ensure every GCP resource is provisioned via Terraform — no manual console state
Have a complete ADR index available for every architectural review
Frustrations
Architecture decisions made verbally in sprint planning meetings with no documentation
Integration point failures discovered at system test, not at design time
Compliance audits that require manual reconstruction of infrastructure state
Success with the AE
Every ADR is linked from the Architecture Explorer on the portfolio site — accessible in seconds during a review
terraform plan produces a complete infrastructure diff — no undocumented state
The TOGAF Phase D diagrams on Page 03 answer every integration question before it is asked
Architecture Explorer · ADR Index · Layer 04 Infra · TOGAF Page 03
CO
The Compliance Officer
Chief Compliance Officer · Legal & Regulatory · S-02
"When an auditor asks me to show them the reasoning behind a revenue recognition decision made last March, I need that answer in under five minutes — not five days."
Goals
Bring all three production ML models into EU AI Act Annex III compliance before the Q2 2026 review
Demonstrate a documented human oversight mechanism for every high-risk AI decision
Produce a complete audit trail for any AI-informed decision on demand
Frustrations
Production ML models with no explanation capability — a legal liability the team has avoided addressing
Human review happening informally via email with no timestamped record
Compliance audits that require a week of manual evidence gathering
Success with the AE
A BigQuery query returns the complete SHAP explanation and HITL approval record for any inference in under 30 seconds
The EU AI Act compliance dashboard shows green status across all active models
The next regulatory review uses the AE audit trail as the evidence package — no manual reconstruction
XAI Explanation Viewer · HITL Audit Dashboard · RevRec AI · ContractGuard
FC
The Finance Controller
Head of Revenue Accounting · Finance · reports to CFO (S-03)
"I'll approve the ML classification — but I need to see exactly which contract terms drove that decision before I let it post to the GL."
Goals
Review and approve every ASC 606 classification with full feature attribution before it posts to SAP
Reduce the 12-day month-end close by eliminating manual classification bottlenecks
Maintain a complete, immutable record of every revenue recognition decision and the human approval that preceded it
Frustrations
Manual classification of every MRI transaction — error-prone and time-consuming at quarter-end
No visibility into which contract terms are driving recognition decisions — the model is a black box
Revenue restatements caused by incorrect upfront classification discovered post-close
Success with the AE
Every classification arrives in the HITL queue with the top 5 contract features highlighted and a confidence score — the approval takes 90 seconds, not 90 minutes
Month-end close accelerated because classifications are done continuously, not in batch at period end
Override decisions are recorded with a mandatory reason code — creating a feedback dataset for model improvement
RevRec AI · HITL Approval UI · XAI Viewer · FinRisk Sentinel
AE
The Account Executive
Senior Account Executive · Global Sales · reports to VP Sales (S-04)
"By the time our quote reaches the hospital's procurement committee, the competitor has already been evaluated. I need the first response to be same-day."
Goals
Receive a complete briefing document from the CCAI agent before entering any commercial conversation — qualification done, configuration validated, pricing estimated
Close the time-to-first-response gap that currently costs deals to competitors with faster processes
Know exactly when the agent has reached its autonomy boundary and why it is handing off
Frustrations
Spending the first three calls on qualification and configuration questions that a well-designed system could handle automatically
No visibility into where an inbound inquiry is in the qualification pipeline until it lands in their Salesforce queue
CPQ configurations that require Applications Engineering review before pricing — a 5-day delay that kills deal momentum
Success with the AE
The escalation notification from the CCAI agent includes a complete deal brief — hospital profile, clinical requirements, suggested configuration, estimated price range, and the conversation transcript
Configuration validation is done by the agent before escalation — the AE enters the conversation knowing the BOM is clean
The Salesforce Opportunity is already created and staged correctly when the AE first touches it
CCAI Sales Agent · Salesforce Integration · ContractGuard
FS
The Field Service Manager
Regional Field Service Manager · Operations · reports to VP Field Service (S-06)
"A failed MRI scanner in a hospital is a patient care emergency and a €180K dispatch. I need 72 hours of warning — not a phone call at 2am."
Goals
Receive predictive maintenance alerts with enough lead time to schedule planned interventions — not emergency dispatches
Understand which sensor features are driving a failure prediction before committing a field engineer to a site visit
Have a unified view of all units in their region — not six different regional system logins
Frustrations
Reactive maintenance that costs 3.2× more than planned interventions — and disrupts hospital operations
No cross-regional visibility — a failure pattern appearing in EMEA-North units has already appeared in APAC units but the two systems never talk
Warranty reserve that covers worst-case scenarios because failure probability cannot be modelled
Success with the AE
Asset IQ surfaces a RUL alert for a Munich hospital unit 96 hours before the predicted failure window — with the top 3 sensor features and a confidence score — enough time to schedule a planned intervention
A pattern detected across 14 units in three regions is surfaced as a fleet-level anomaly in the Asset IQ dashboard before it becomes a recall conversation
Every maintenance work order created by the AE has a SHAP explanation attached — the Field Engineer knows why they are there before they arrive
Asset IQ · HITL Approval UI · XAI Viewer · Strategy Dashboard
User Journey

One deal. Five personas. End to end.

An MRI deal for ClaraVis AG — from first hospital inquiry to revenue posted in SAP. Five journey stages. Every persona's touchpoint at each stage. The AE module handling it. The HITL checkpoint where human judgment is required.

End-to-End Journey — MRI-7T Sale to University Hospital Munich
Five stages · Five personas · AE module at each touchpoint · HITL checkpoints marked
JOURNEY STAGES
01 · Inquiry & Qualification · 02 · Config & CPQ · 03 · Contract & Legal · 04 · Revenue Recognition · 05 · Field & Post-Sale
PERSONA TOUCHPOINTS
Account Executive · enters at stage 01 escalation. Receives briefing from CCAI Agent after the turn-11 escalation; reviews the agent-prepared, validated CPQ configuration; commercial terms negotiation remains a human-judgment step.
General Counsel · HITL at stage 03. Reviews flagged clauses; approves or escalates.
Finance Controller · HITL at stage 04. Reviews the ASC 606 classification plus its SHAP explanation.
Field Service Manager · engaged at stage 05. Receives asset onboarding; RUL baseline established; DHR record created.
AE MODULE AT EACH STAGE
CCAI Sales Agent → Config Agent + CPQ → ContractGuard → RevRec AI → Asset IQ + Dashboard
Immutable audit trail written to Firestore + BigQuery at every stage transition — full journey queryable on demand.
EMOTION ARC
😐 Waiting → 😊 Fast response → 😐 Reviewing → 😊 Confident → 😊 Proactive
Functional Requirements

Eight user stories. Each traceable to architecture.

Every user story maps to a Page 02 requirement, a Page 03 ADR, an EU AI Act obligation, and a HITL or XAI specification. These are the handover documents from the architecture engagement to the development teams — precise enough to build from, not so prescriptive they constrain implementation choices.

FRD-01
CCAI Sales Agent
As an Account Executive, I need the CCAI agent to qualify, configure, and price an inbound MRI inquiry autonomously through the first eleven conversation turns, so that when the deal escalates to me it comes with a complete briefing document and a validated CPQ configuration.
Acceptance Criteria
Agent handles qualification (budget, authority, need, timeline) without human intervention for turns 1–8
BOM validation runs against the product catalogue before any pricing estimate is given
Escalation triggers a Salesforce Opportunity creation and a briefing document generation in parallel
The briefing document contains: hospital profile, clinical requirements, suggested SKUs, estimated price range, and full conversation transcript
Escalation state transition is logged immutably in Firestore before the AE notification is sent
Source Req: BR-07 · BR-01
ADR Reference: ADR-001 (SFDC)
EU AI Act: Limited risk · Art. 52
ART: Commercial ART · H3
HITL: Turn 11 escalation is a designed HITL state transition. Agent pauses, generates briefing document, notifies AE. No further autonomous action until AE confirms engagement.
FRD-02
ContractGuard
As the General Counsel, I need ContractGuard to analyse every inbound contract at clause level, score non-standard terms against a risk model, and present flagged clauses with precedent references before I am asked to review, so that my review time is spent on judgment — not extraction.
Acceptance Criteria
Full contract ingested via GCS → Document AI pipeline within 30 minutes of upload
Every clause classified against 200+ clause type taxonomy
Non-standard clauses (risk score above threshold) surfaced in HITL queue with: clause text, risk score, top 3 precedent contracts, draft counter-position
HITL queue shows approve / request revision / escalate to external counsel — each with mandatory reason code
Complete clause-level analysis and HITL decision record written to audit log before any counter-proposal is drafted
Source Req: BR-05 · AR-02
ADR Reference: ADR-005 (SHAP)
EU AI Act: High risk · Annex III
ART: Commercial ART · H2
HITL: Every clause with risk score above configured threshold routes to Legal HITL state. Agent waits. 24-hour timeout triggers escalation to General Counsel's manager. No counter-proposal generated without HITL approval on record.
XAI: Risk score explanation shows top features driving the classification: clause length deviation, liability cap ratio vs contract value, governing law mismatch, indemnification asymmetry.
FRD-03
RevRec AI
As the Finance Controller, I need every MRI transaction to be classified under ASC 606 by the ML model with a full SHAP explanation, and to route through my approval queue before posting to SAP, so that every GL entry has both an ML basis and a documented human approval.
Acceptance Criteria
Classification produced within 5 minutes of Salesforce contract signed event via Pub/Sub trigger
SHAP explanation identifies top 5 contract features with directional effect on the classification
HITL queue presents: classification result, confidence score, SHAP chart, similar historical transactions, and one-click approve / override with reason code
SAP GL write executes only after HITL approval record is committed to Firestore
Performance obligation tags written to Transaction entity at classification time — not retrospectively at period end
Source Req: BR-04 · AR-08
ADR Reference: ADR-005 · ADR-006
EU AI Act: High risk · Annex III
ART: Financial ART · H2
HITL: All classifications route to Finance Controller HITL — no exceptions. 4-hour SLA. Timeout escalates to CFO. Override creates a labelled training example for the next model version.
XAI: SHAP values computed at inference time using TreeExplainer or LinearExplainer per model type. Written to BigQuery shap_explanations table with transaction_id FK before any downstream action.
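The ordering rule in FRD-03 (GL write only after a committed approval record) can be sketched as a guard function. The store and writer interfaces here are assumptions for illustration; in the real system the approval record lives in Firestore and the writer is the SAP integration.

```python
# Sketch of the approve-before-write gate in FRD-03. Interfaces are
# illustrative assumptions, not the production SAP integration.
class ApprovalMissing(Exception):
    """Raised when a GL write is attempted without a committed HITL decision."""

def post_to_gl(transaction_id, approvals, gl_writer):
    """Execute the GL write only if a committed HITL approval exists."""
    record = approvals.get(transaction_id)
    if record is None or record.get("decision") not in {"approve", "override"}:
        raise ApprovalMissing(f"no committed HITL decision for {transaction_id}")
    return gl_writer(transaction_id, record)

posted = []
approvals = {"txn-42": {"decision": "approve", "reviewer": "fc-01"}}
post_to_gl("txn-42", approvals, lambda t, r: posted.append(t))
# A transaction with no approval record raises ApprovalMissing and nothing posts.
```

The design choice is that the gate sits in code on the write path, not in process documentation: there is no call sequence that reaches SAP without passing this check.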
FRD-04
Asset IQ
As the Field Service Manager, I need Asset IQ to predict unit failures with enough lead time to schedule planned interventions, and to explain which sensor readings drove the prediction, so that I can make an informed dispatch decision rather than reacting to failures.
Acceptance Criteria
All 6 regional telemetry systems publish to a unified Pub/Sub topic with validated common schema within Horizon 1
RUL model produces a prediction and confidence score for every active unit on a configurable cadence (default: daily)
Predictions below confidence threshold route to FSM HITL queue — agent does not create work orders below threshold without human confirmation
SHAP explanation identifies top 3 sensor features driving the RUL prediction for every alert
Fleet-level anomaly detection surfaces cross-regional patterns — not just unit-level signals
Source Req: BR-03 · AR-11
ADR Reference: ADR-006 (Pub/Sub)
EU AI Act: High risk · Annex III
ART: Operations ART · H2
HITL: Work orders above confidence threshold created autonomously. Below threshold: FSM HITL queue with prediction, confidence score, and SHAP sensor attribution. FSM approves, rejects, or requests on-site verification.
XAI: SHAP values computed over sensor time-series features. Top features presented as: feature name, current value, baseline value, directional contribution to RUL reduction.
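The alert payload described in the FRD-04 XAI note can be sketched as follows. The sensor names, values, and attribution figures are invented for illustration; only the shape (top-N features with name, current value, baseline, and signed RUL contribution) comes from the specification.

```python
# Sketch of the Asset IQ alert payload: top sensor features with current
# vs baseline readings and signed contribution to the RUL reduction.
# All sensor names and numbers below are illustrative assumptions.
def format_alert(unit_id, attributions, readings, baselines, top_n=3):
    top = sorted(attributions, key=lambda f: abs(attributions[f]), reverse=True)[:top_n]
    return {
        "unit_id": unit_id,
        "features": [{"name": f,
                      "current": readings[f],
                      "baseline": baselines[f],
                      "rul_contribution": attributions[f]} for f in top],
    }

alert = format_alert(
    "MRI-MUC-017",
    attributions={"coil_temp": -18.0, "helium_level": -6.5,
                  "vibration_rms": -2.1, "gradient_duty": -0.4},
    readings={"coil_temp": 41.2, "helium_level": 62.0,
              "vibration_rms": 0.9, "gradient_duty": 0.55},
    baselines={"coil_temp": 35.0, "helium_level": 78.0,
               "vibration_rms": 0.4, "gradient_duty": 0.50},
)
# Only the three strongest contributors survive the top-3 cut.
```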
FRD-05
FinRisk Sentinel
As the Finance Controller, I need FinRisk Sentinel to monitor the financial event stream in real time and surface anomalies — unusual payment patterns, revenue posting discrepancies, warranty reserve movements — with context before they compound into material issues.
Acceptance Criteria
Streaming anomaly detection operates on BigQuery financial event stream with sub-5-minute latency
Every anomaly alert includes: event type, magnitude, Z-score vs 90-day baseline, affected entity, and recommended action
High-severity anomalies (above configured threshold) route to CFO + Finance Controller HITL simultaneously
SHAP explanation available for every anomaly score above the alert threshold
False positive feedback from HITL decisions feeds back into the anomaly detection model baseline
Source Req: BR-01 · AR-01
ADR Reference: ADR-005 · ADR-006
EU AI Act: High risk · Annex III
ART: Financial ART · H2
HITL: High-severity anomalies pause automatic escalation and route to Finance HITL. Controller can acknowledge, investigate, or escalate to CFO. All decisions logged immutably.
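The Z-score scoring named in the acceptance criteria is simple to state: an event's deviation from the 90-day baseline mean, in units of baseline standard deviation. The sketch below adds the severity routing; the thresholds and queue names are assumptions.

```python
# Z-score sketch for FRD-05: score an event against a rolling baseline.
# Thresholds and routing destinations are illustrative assumptions.
from statistics import mean, stdev

def z_score(value, baseline_window):
    mu, sigma = mean(baseline_window), stdev(baseline_window)
    return (value - mu) / sigma if sigma else 0.0

def route(value, baseline_window, high_severity=4.0, alert=2.5):
    z = z_score(value, baseline_window)
    if abs(z) >= high_severity:
        return z, "cfo_and_controller_hitl"   # simultaneous HITL routing
    if abs(z) >= alert:
        return z, "controller_queue"
    return z, "no_action"

# A daily posting metric hovering around 100 with small variance:
baseline = [100.0, 102.0, 98.0, 101.0, 99.0, 100.0, 103.0, 97.0]
z, destination = route(150.0, baseline)
# 150 sits ~25 standard deviations out, so it routes to high-severity HITL.
```

The false-positive feedback loop in the last acceptance criterion would feed HITL "false positive" decisions back into the baseline window or threshold calibration; that learning step is out of scope for this sketch.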
FRD-06
GreenOps Platform
As the CTO, I need GreenOps to schedule compute-intensive AE workloads to align with low-carbon electricity grid windows and produce auditable ESG metrics for EU CSRD reporting, so that the AE platform itself contributes to ClaraVis's sustainability commitments.
Acceptance Criteria
Carbon intensity data from GCP's Carbon Footprint API feeds scheduling decisions for batch ML training jobs
Carbon savings per workload calculated and written to ESG metrics dataset in BigQuery
Monthly ESG report generated automatically — Scope 3 emissions for cloud operations, carbon savings from scheduling, and year-on-year trend
All metrics tagged with the GCP resource label taxonomy for FinOps and ESG cross-referencing
Source Req: BR-08 (CTO)
ADR Reference: P-12 (FinOps)
EU AI Act: Minimal risk
ART: Operations ART · H3
FRD-07
Data Governance
As the Enterprise Architect, I need every data record entering the AE fabric to be validated against the canonical schema, lineage-tagged with its source system, and quality-scored before it reaches any ML model, so that model predictions are never based on undocumented or unvalidated data.
Acceptance Criteria
Schema validation runs on every Pub/Sub message before it is written to BigQuery — malformed records are quarantined, not dropped
Every record carries a lineage tag: source system, ingestion timestamp, schema version, and quality score
Quality score below configured threshold triggers a data steward alert — records below threshold are excluded from ML feature pipelines until reviewed
Data lineage is queryable via BigQuery — trace any feature value back to its source event
Source Req: AR-11 · AR-12
ADR Reference: ADR-006
EU AI Act: Minimal risk
ART: Platform ART · H1
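The FRD-07 ingest path (validate, lineage-tag, quality-score, quarantine rather than drop) can be sketched end to end. The required fields, quality heuristic, and threshold are assumptions; the structural point is that malformed records are retained for review and low-quality records are stored but excluded from feature pipelines.

```python
# Sketch of the FRD-07 ingest path. Field names, the completeness-based
# quality score, and the 0.8 threshold are illustrative assumptions.
QUALITY_THRESHOLD = 0.8
REQUIRED = {"event_id", "source_system", "payload"}

def ingest(record, schema_version="v1"):
    if not REQUIRED.issubset(record):
        return ("quarantine", record)          # malformed: kept, not dropped
    filled = sum(1 for v in record["payload"].values() if v is not None)
    quality = filled / max(len(record["payload"]), 1)
    tagged = {**record,
              "lineage": {"source_system": record["source_system"],
                          "schema_version": schema_version},
              "quality_score": quality}
    # Below-threshold records are stored but excluded from feature pipelines
    # until a data steward reviews them.
    dest = "feature_eligible" if quality >= QUALITY_THRESHOLD else "steward_review"
    return (dest, tagged)

dest, rec = ingest({"event_id": "e-9", "source_system": "SAP",
                    "payload": {"amount": 10, "currency": "EUR", "tax": None}})
# 2 of 3 payload fields filled: quality ~0.67, routed to steward_review.
```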
FRD-08
Strategy Dashboard
As the CTO and CFO, I need a single real-time dashboard that unifies pipeline health, fleet status, revenue recognition posture, and EU AI Act compliance status, so that the executive team can make informed decisions without pulling data from four separate systems.
Acceptance Criteria
Dashboard powered by a single BigQuery dataset that aggregates from all 8 AE modules — no module-specific logins required
EU AI Act compliance status shows green / amber / red per model — based on HITL checkpoint completion rate and SHAP explanation coverage
Pipeline health panel pulls directly from Salesforce via the AE Pub/Sub integration — reflects real-time Opportunity stage distribution
Fleet status panel shows RUL distribution across all active units — colour-coded by risk tier
Dashboard data refreshes on a configurable cadence — default 15 minutes for operational panels, daily for financial panels
Source Req: BR-08 · BR-02
ADR Reference: ADR-006
EU AI Act: Minimal risk
ART: Platform ART · H3
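The green/amber/red compliance status in FRD-08 is a derived metric over two inputs named in the acceptance criteria: HITL checkpoint completion rate and SHAP explanation coverage. The thresholds and the min-of-both rule below are assumptions, there to show the shape of the calculation, not agreed cut-offs.

```python
# Sketch of the per-model RAG status in FRD-08. The 99% / 95% thresholds
# and the min-of-both-metrics rule are illustrative assumptions.
def compliance_status(hitl_completion, shap_coverage):
    """Return 'green' / 'amber' / 'red' for one model."""
    worst = min(hitl_completion, shap_coverage)   # weakest metric governs
    if worst >= 0.99:
        return "green"
    if worst >= 0.95:
        return "amber"
    return "red"

# Hypothetical snapshot values for the three high-risk models:
statuses = {model: compliance_status(h, s) for model, (h, s) in {
    "RevRec AI": (1.00, 0.998),
    "ContractGuard": (0.97, 0.99),
    "Asset IQ": (0.90, 0.99),
}.items()}
# -> {'RevRec AI': 'green', 'ContractGuard': 'amber', 'Asset IQ': 'red'}
```

Taking the minimum of the two metrics means a model cannot show green on explanation coverage while its HITL queue is backing up, which matches the intent of surfacing compliance posture, not component health.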
HITL Specification

Eleven checkpoints. Every one specified.

The complete HITL specification for the AE — the artifact that satisfies EU AI Act Article 14 documentation requirements. Every checkpoint with its trigger condition, the agent action that precedes it, what the human reviewer sees, their decision options, the SLA, and the audit record format. This table is the contract between the architecture and the EU AI Act compliance team.

Per checkpoint: ID · Module, then Trigger, Human sees, Decisions, SLA · Timeout.

HITL-01 · CCAI Sales Agent
Trigger: Turn 11 reached OR commercial terms entered
Human sees: Deal brief with hospital profile, clinical requirements, validated configuration, estimated price range, conversation transcript
Decisions: Engage deal · Return to agent
SLA: 4 hours · Timeout: Escalate to VP Sales

HITL-02 · ContractGuard
Trigger: Clause risk score above Legal threshold (configurable)
Human sees: Clause text, risk score, top 3 similar precedent contracts, draft counter-position, SHAP feature attribution
Decisions: Approve as-is · Request revision · External counsel
SLA: 24 hours · Timeout: Escalate to GC's manager

HITL-03 · ContractGuard
Trigger: Governing law non-standard for ClaraVis jurisdiction
Human sees: Governing law clause, jurisdiction risk summary, ClaraVis standard terms comparison
Decisions: Accept · Counter-propose · Legal review
SLA: 48 hours · Timeout: Pause contract progression

HITL-04 · RevRec AI
Trigger: All ASC 606 classifications (no threshold exception)
Human sees: Classification result, confidence score, SHAP chart (top 5 features), 3 similar historical transactions, one-click approve or override
Decisions: Approve → SAP · Override + reason
SLA: 4 hours · Timeout: Escalate to CFO

HITL-05 · RevRec AI
Trigger: Multi-element arrangement detected (split required)
Human sees: Proposed performance obligation split, ASC 606 rule applied, SSP references, contract line items
Decisions: Approve split · Manual split
SLA: 8 hours · Timeout: Escalate to CFO

HITL-06 · Asset IQ
Trigger: RUL prediction confidence below configured threshold
Human sees: Unit ID, predicted failure window, confidence score, top 3 SHAP sensor features, current sensor readings vs baseline
Decisions: Schedule maintenance · Dismiss with reason · On-site verify
SLA: 8 hours · Timeout: Auto-schedule preventive

HITL-07 · Asset IQ
Trigger: Fleet-level anomaly detected (cross-regional pattern)
Human sees: Affected units, pattern description, region distribution, severity score, recommended fleet action
Decisions: Fleet alert · Isolated incidents · Recall review
SLA: 2 hours · Timeout: Auto-escalate to VP Field

HITL-08 · FinRisk Sentinel
Trigger: Anomaly score above high-severity threshold
Human sees: Event type, magnitude, Z-score vs 90-day baseline, affected entity, SHAP explanation, recommended action
Decisions: Acknowledge + act · False positive · CFO escalation
SLA: 1 hour · Timeout: Auto-escalate CFO + audit

HITL-09 · RevRec AI
Trigger: Model confidence below minimum threshold (any classification)
Human sees: Transaction detail, model confidence score, reason for low confidence, request for manual classification
Decisions: Manual classify · Senior review
SLA: 4 hours · Timeout: Hold transaction, alert CFO

HITL-10 · ML Platform
Trigger: Drift detected above threshold; retraining triggered
Human sees: Drift metric, baseline vs current distribution, proposed retraining scope, estimated timeline, Model Card diff
Decisions: Approve retrain · Hold and investigate · ML Engineer review
SLA: 24 hours · Timeout: Hold model in production

HITL-11 · ML Platform
Trigger: New model version ready for production promotion
Human sees: Model Card diff (previous vs new), evaluation metrics comparison, bias analysis results, SHAP baseline comparison
Decisions: Promote to prod · Return to staging
SLA: 48 hours · Timeout: Model stays in staging
EU AI Act Article 14 — Human Oversight Compliance Statement
This HITL specification satisfies EU AI Act Article 14 by defining: (1) the specific conditions under which human oversight is triggered for each high-risk AI system, (2) the information presented to the human reviewer at each checkpoint, (3) the decision options available and the action each triggers, (4) the SLA and escalation path, and (5) the immutable audit record format written to Firestore before any agent proceeds. All eleven checkpoints are implemented as first-class state machine nodes — not process notes or informal review steps.
Next in the Portfolio
Product design complete.
Agent design follows.

The FRD and HITL specification on this page are the inputs to the Agent Swarm Architecture. Page 05 takes every HITL checkpoint above and expresses it as a formal state machine node in the ADK agent definition — the technical design that implements what was specified here.

PG 05
Agent Swarm Architecture
ADK · A2A · MCP · State Machines · Tool Manifests · Guardrails
In Design
PG 03
← TOGAF ADM — Phases A through F
The architecture that this delivery model implements