The Autonomous Seller / Page 08

Adoption Architecture — TOGAF Phases F & G

The architecture is complete across Pages 01–07. This page closes the TOGAF sequence: Phase F (Migration Planning) and Phase G (Implementation Governance) expressed as adoption argument, stakeholder map, phased delivery rationale, organisational risk mitigations, and ADR-016. Every claim traces back to an artifact already designed on a prior page.

TOGAF Phase F · Phase G · 5 Stakeholder Personas · 3 Delivery Horizons · 4 Adoption Risks · ADR-016
Adoption Rationale

Enterprise AI fails at the organisational boundary — not the technical one.

The architecture on Pages 01–07 is sound. The more important question is whether ClaraVis can actually adopt it — across nine stakeholders, three regulatory obligations, two legacy systems, and a Finance team that has been doing manual ASC 606 classification for fifteen years. The AS is designed to answer that question architecturally. S-01 (CTO) and S-09 (CISO) are structural stakeholders: their adoption requirements — architecture coherence, security posture, data sovereignty — are satisfied through the design itself on Pages 03, 06, and 07, not through user-facing module features.

Enterprise AI adoption fails when it asks too much of the organisation at once — when the compliance posture, the change management load, and the integration complexity all land simultaneously. The AS avoids this by design: each horizon delivers standalone value before the next begins. ClaraVis does not need to commit to all eight modules. It needs to commit to three — and see the results — before the next three become an easy decision rather than a leap of faith.
Derived from: TOGAF Phase F Migration Horizons (Page 03) · SAFe Solution Train PI cadence (Page 04) · Requirements C-05 (Page 02)
Why phased adoption works here
The EU AI Act deadline is a forcing function
ClaraVis has three production ML models with no EU AI Act compliance posture and a regulatory review in Q2 2026. Horizon 1 — the HITL framework, XAI layer, and Model Cards for those three models — delivers compliance before the review. That single outcome justifies the entire Horizon 1 investment without any discussion of the broader AS vision. Horizons 2 and 3 follow because Horizon 1 built the platform they run on.
Why it doesn't disrupt existing systems
Salesforce and SAP stay exactly where they are
ADR-002 (Page 03) is the adoption argument in architectural form: the AS augments existing systems — it never replaces them. Salesforce remains the system of record. SAP remains the ERP. The AS is the orchestration and intelligence layer that sits above them. ClaraVis's existing Salesforce and SAP implementations are not touched, re-platformed, or migrated. The adoption risk that kills most enterprise AI programmes — the system migration — does not exist here.
Why trust is built in
Every consequential decision has a human in the loop
The eleven HITL checkpoints on Page 04 are not a compliance feature — they are an adoption feature. The Finance Controller who has been classifying ASC 606 transactions manually for fifteen years will not trust an ML model on day one. The HITL framework gives them the control to verify the model's reasoning before it posts to SAP. Trust is earned checkpoint by checkpoint, tracked via HITL override rate in Vertex AI Monitoring. The architecture is designed for that journey, not the destination.
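The override-rate metric described above can be sketched in a few lines. This is a minimal illustration, not the Vertex AI Monitoring implementation: the `HitlRecord` shape and its field names are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class HitlRecord:
    """One human-in-the-loop decision on a model proposal (illustrative shape)."""
    checkpoint: str      # e.g. "HITL-04"
    model_decision: str  # what the model proposed
    human_decision: str  # what the reviewer committed

def override_rate(records: list[HitlRecord]) -> float:
    """Fraction of checkpoints where the human overrode the model's proposal."""
    if not records:
        return 0.0
    overrides = sum(1 for r in records if r.human_decision != r.model_decision)
    return overrides / len(records)

# Three reviews, one override: rate rounds to 0.33
sample = [
    HitlRecord("HITL-04", "point_in_time", "point_in_time"),
    HitlRecord("HITL-04", "over_time", "point_in_time"),
    HitlRecord("HITL-04", "point_in_time", "point_in_time"),
]
print(round(override_rate(sample), 2))  # -> 0.33
```

A declining value of this single number is the trust trajectory the section describes: it can be charted per checkpoint, per persona, per month.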
Stakeholder Map

Five personas. One table. Every column already designed.

A synthesis view connecting the stakeholder register on Page 02, the persona cards on Page 04, and the migration horizons from Page 03. Every stakeholder has a specific pain addressed by a specific module delivered in a specific horizon — with a measurable outcome that maps to the acceptance criteria established in Phase B.

Persona · Primary pain (Page 02) · AS module that addresses it · Horizon · Measurable outcome

The Compliance Officer (CCO · S-02) · Page 04 · Persona 02
Primary pain: 3 production ML models with no EU AI Act compliance posture. Q2 2026 regulatory review approaching with no audit trail.
AS module: HITL Framework · XAI Layer · Model Cards for 3 existing models. SHAP explanation per inference, HITL-04/06/08 checkpoints, immutable Firestore audit trail queryable on demand.
Horizon: H1
Measurable outcome: Audit trail query returns in <30s. HITL records cover 100% of model inferences from H1 go-live. Model Cards complete for all 3 models before the Q2 2026 review.

The Enterprise Architect (EA · S-08) · Page 04 · Persona 01
Primary pain: Architecture decisions made informally. No ADR index. Infrastructure state partially manual. Integration failures discovered at system test.
AS module: Terraform IaC · ADR Index · Architecture Explorer. Every resource in code. Every decision documented. Full environment reproducible from state file in <45 minutes.
Horizon: H1
Measurable outcome: terraform plan produces a complete infrastructure diff before any change reaches production. ADR-001 through ADR-016 queryable in architecture review. Zero console-provisioned resources in production.

The Finance Controller (Head of Revenue Acctg · S-03) · Page 04 · Persona 03
Primary pain: Manual ASC 606 classification. 12-day month-end close. Revenue restatements at quarter-end averaging €18K per misclassification.
AS module: RevRec AI · Finance HITL (HITL-04/05) · SAP integration. ML classifies every transaction with SHAP explanation and comparable transactions. Finance Controller approves before SAP posts — no exceptions.
Horizon: H2
Measurable outcome: Month-end close reduces from 12 days to ≤9 days (BR-01 acceptance criterion). HITL override rate <15% by month 3 of H2 — the primary trust-building metric. Zero SAP posts without a committed HITL approval record.

The Field Service Manager (Regional FSM · S-06) · Page 04 · Persona 05
Primary pain: Reactive maintenance. 6 disconnected regional asset systems. No cross-regional pattern visibility. €40M warranty over-reserve driven by unplanned failures.
AS module: Asset IQ · Unified telemetry pipeline · Fleet anomaly detection (HITL-06/07). RUL predictions with SHAP sensor attribution. Cross-regional fleet anomaly alerts with 72+ hour lead time.
Horizon: H2
Measurable outcome: ≥72-hour advance notice on predicted failures (BR-04 acceptance criterion). Unplanned field service events reduce by ≥30% within 6 months of H2 go-live. Fleet anomaly patterns surfaced across all 6 regional systems in a single HITL-07 alert.

The Account Executive (Senior AE · S-04) · Page 04 · Persona 04
Primary pain: 3–5 day time-to-qualified-AE. Manual qualification and configuration consuming AE time that should be spent on commercial terms and relationships.
AS module: CCAI Sales Agent · Salesforce integration · ContractGuard. Agent handles the first 11 qualification turns. Validated config, BOM, and briefing doc complete before the AE engages.
Horizon: H3
Measurable outcome: Time-to-qualified-AE reduces from 3–5 days to ≤1 day (BR-02 acceptance criterion). AE enters every conversation with Opportunity created, BOM validated, and briefing doc generated — measured via the Salesforce Activity log per Opportunity.
Value by Horizon

Each horizon delivers before the next begins.

The three horizons from TOGAF Phase F (Page 03) restated through an adoption lens. Each card answers: what is delivered, who feels it first, the architectural dependency that makes this sequencing non-negotiable, and what makes the next horizon an easier approval than the current one.

Horizon 1
Months 1–3
Foundation & Compliance
Delivered
GCP infrastructure — Terraform, VPC-SC, IAM, CMEK, all security controls live
HITL framework — state machine + Firestore audit store deployed and tested
XAI layer — SHAP integrated on all 3 existing production ML models
Model Cards — Article 11 documentation complete for all 3 models
Unified asset telemetry pipeline — 6 regional systems on one schema
Salesforce Developer Edition — integration live and tested
Who feels it first: CCO (S-02) — compliance posture established before the Q2 2026 review. Enterprise Architect (S-08) — entire infrastructure in code for the first time.
Why H1 must come first: The Vertex AI Feature Store, VPC-SC perimeter, and Pub/Sub event bus provisioned in H1 are architectural prerequisites for every H2 module. ContractGuard and RevRec AI cannot be deployed onto infrastructure that does not exist — and cannot be trusted by Finance until the HITL framework has been running and accumulating audit records in production. H1 is not a setup cost. It is the proof-of-platform that makes H2 a feature deployment, not a system build.
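The H1 HITL deliverable — a state machine plus an append-only audit store — can be sketched as follows. The state names and legal transitions here are illustrative assumptions, and an in-memory list stands in for the Firestore audit store the real framework (Page 04) writes to.

```python
from datetime import datetime, timezone

# Illustrative states and transitions; the actual Page 04 state machine may differ.
TRANSITIONS = {
    "PROPOSED": {"PENDING_REVIEW"},
    "PENDING_REVIEW": {"APPROVED", "OVERRIDDEN", "REJECTED"},
    "APPROVED": {"COMMITTED"},
    "OVERRIDDEN": {"COMMITTED"},
}

class HitlCheckpoint:
    def __init__(self, checkpoint_id: str):
        self.checkpoint_id = checkpoint_id
        self.state = "PROPOSED"
        self.audit_log: list[dict] = []  # append-only; stands in for Firestore

    def transition(self, new_state: str, actor: str) -> None:
        """Apply a transition if legal, recording who moved the state and when."""
        allowed = TRANSITIONS.get(self.state, set())
        if new_state not in allowed:
            raise ValueError(f"{self.state} -> {new_state} is not a legal transition")
        self.audit_log.append({
            "checkpoint": self.checkpoint_id,
            "from": self.state,
            "to": new_state,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.state = new_state

cp = HitlCheckpoint("HITL-04")
cp.transition("PENDING_REVIEW", actor="revrec-ai")
cp.transition("APPROVED", actor="finance-controller")
cp.transition("COMMITTED", actor="sap-writer")
```

The design point the sketch makes concrete: the audit record is written as a side effect of the state change itself, so there is no code path that advances a checkpoint without leaving evidence.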
Horizon 2
Months 4–8
Core Module Deployment
Delivered
ContractGuard — clause scoring + Legal HITL operational
RevRec AI — ASC 606 classification + Finance Controller HITL + SAP write
Asset IQ — RUL model + fleet anomaly detection in production
FinRisk Sentinel — streaming anomaly monitoring live
Vertex AI MLOps — pipelines, drift detection, Model Registry for all new models
Module dashboards — UI for all four deployed modules
Who feels it first: Finance Controller — first month-end close with ML-assisted classification, SHAP context, and 90-second HITL approval per transaction. Field Service Manager — first predictive maintenance alert with sensor attribution and cross-regional fleet view.
Why H2 must follow H1: RevRec AI and ContractGuard depend on the Vertex AI Feature Store (H1 deliverable) for online feature serving. Their HITL checkpoints depend on the HITL framework (H1 deliverable) being proven in production — Finance and Legal will not trust a HITL workflow that went live the same day as the model. The HITL override rate data accumulated in H1 is the evidence base that determines H2's confidence thresholds. H2 modules are features on a proven platform, not a new system.
Horizon 3
Months 9–18
Full AS Suite & Optimisation
Delivered
CCAI Sales Agent — full ADK deployment, Salesforce integration, AE escalation flow
GreenOps Platform — carbon-aware scheduling, ESG metrics, CSRD reporting
Strategy Dashboard — C-suite unified view across all 8 modules
Data Governance module — quality, lineage, schema validation
Cross-module optimisation — shared feature pipelines, joint drift monitoring
EU AI Act full compliance certification readiness
Who feels it first: Account Executive — first deal where the Sales Agent handled qualification and the AE entered with a complete briefing, BOM validated, and Opportunity already created. CTO (S-01) — Strategy Dashboard provides the unified cross-module view that was previously 4 manual data pulls.
Why H3 is a lower-risk approval than H1: By month 9, Finance and Legal have experienced the HITL workflow for 5+ months. The HITL override rate metric shows the organisation's trust trajectory. H3 modules (Sales Agent, GreenOps) are lower-risk because the organisation has internalised the pattern — agents present their reasoning, humans approve consequential actions, the audit trail is immutable. The AS is no longer a proposition. It is an established practice.
Adoption Risks

Four risks. Each with an architectural mitigation.

These are the risks that cause enterprise AI programmes to stall or fail — not the technical risks (those are addressed in the architecture on Pages 03–07) but the organisational and process risks that no diagram can eliminate on its own. Each one has a specific architectural design decision that reduces it.

Risk 01 — Change Management
The Finance team has been doing this manually for fifteen years
The Finance Controller and their team have fifteen years of muscle memory around manual ASC 606 classification. An AI system that replaces their judgment without warning will be rejected — not because the model is wrong, but because it bypasses the trust-building process entirely. This is the most common failure mode in enterprise AI adoption and it is never solved by better accuracy metrics.
Architectural mitigation
The HITL framework (Page 04, §07) is the change management strategy encoded in architecture. RevRec AI never posts to SAP without Finance Controller approval — not in Horizon 2, not in Horizon 3, not ever. The model earns trust by presenting its reasoning (SHAP explanation, comparable transactions, confidence score) and waiting for human confirmation. The HITL override rate metric (tracked in Vertex AI Monitoring) is the adoption progress indicator — as overrides decline, trust is growing. The transition from human-in-the-loop to human-on-the-loop happens at the Finance team's pace, not the project plan's.
HITL-04 · Page 04 · Vertex AI Monitoring · Drift Detection · Page 06
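The "reasoning presented before approval" could be assembled roughly like this. A hedged sketch only: `top_drivers`, `build_review_payload`, and the payload field names are hypothetical, standing in for whatever the Page 04 HITL UI actually renders.

```python
def top_drivers(shap_values: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Rank features by absolute SHAP attribution: the reviewer-facing 'why'."""
    return sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

def build_review_payload(txn_id: str, proposed_class: str, confidence: float,
                         shap_values: dict[str, float],
                         comparables: list[str]) -> dict:
    """Everything the reviewer sees before approving: the proposal, the model's
    confidence, the top SHAP drivers, and comparable historical transactions."""
    return {
        "transaction_id": txn_id,
        "proposed_classification": proposed_class,
        "confidence": confidence,
        "top_drivers": top_drivers(shap_values),
        "comparable_transactions": comparables,
    }

payload = build_review_payload(
    "TXN-2214", "over_time", 0.91,
    {"contract_term_months": 0.42, "delivery_milestones": -0.31, "list_price": 0.05},
    ["TXN-1980", "TXN-2016"],
)
```

The point of surfacing only the top-k drivers is that a Finance Controller reviewing dozens of transactions per close needs the dominant reasons, not the full attribution vector.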
Risk 02 — Data Quality
Six regional asset systems with no common schema and unknown data quality
The Asset IQ module depends on telemetry from 6 regional systems that have never been unified. The assumption that these systems produce clean, consistent, complete data is almost certainly wrong. A model trained on clean demo data that fails on live production telemetry destroys trust faster than no model at all — and in a regulated medical device context, a false negative on a failure prediction has patient safety implications.
Architectural mitigation
The Horizon 1 data fabric deliverable (Page 03, Phase F) is specifically sequenced to address this before any ML model touches production data. The Pub/Sub ingestion pipeline with schema validation, the Data Governance module with quarantine logic, and the Feature Store lineage tags are all Horizon 1 — not Horizon 2. The Asset IQ RUL model is not deployed until the Vertex AI Feature Store can demonstrate that every feature value has a validated source event behind it. The data quality score threshold for inclusion in ML pipelines (FRD-07, Page 04) must be met before training begins — not after deployment reveals the problem.
FRD-07 · Page 04 · Data Governance · Feature Store lineage · Page 06
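The quarantine logic described above might look like the following sketch, with the Pub/Sub topics replaced by plain lists so it runs standalone. The `REQUIRED_FIELDS` schema is a placeholder, not the real telemetry contract.

```python
# Hypothetical telemetry schema: field name -> expected Python type.
REQUIRED_FIELDS = {
    "device_id": str, "region": str, "ts": str, "sensor": str, "value": float,
}

def validate(event: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the event is clean."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in event:
            errors.append(f"missing:{field}")
        elif not isinstance(event[field], expected_type):
            errors.append(f"type:{field}")
    return errors

def route(event: dict, clean: list, quarantined: list) -> None:
    """Clean events flow on to the Feature Store path; failures are quarantined
    with their error list attached, never silently dropped."""
    errors = validate(event)
    if errors:
        quarantined.append({"event": event, "errors": errors})
    else:
        clean.append(event)

clean, quarantined = [], []
route({"device_id": "d1", "region": "EMEA", "ts": "2026-01-01T00:00:00Z",
       "sensor": "temp_c", "value": 71.5}, clean, quarantined)
route({"device_id": "d2", "value": "NaN"}, clean, quarantined)
print(len(clean), len(quarantined))  # -> 1 1
```

Keeping the error list attached to each quarantined event is what makes the data quality score (FRD-07) computable per regional source system, before any model trains on the feed.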
Risk 03 — Regulatory Approval Timeline
EU AI Act Annex III compliance cannot be certified on the day of the review
EU AI Act compliance is not a box to tick before a regulatory review — it is a documented posture that must be demonstrable at any point in time. A compliance dashboard that went green two days before the Q2 2026 review will not satisfy an auditor. The audit trail, the HITL records, the Model Cards, and the SHAP explanations need to have been accumulating for long enough to be credible as a systematic practice, not an emergency preparation.
Architectural mitigation
The Horizon 1 sequencing places the compliance infrastructure — HITL framework, XAI layer, Model Cards — at the start of the programme, not at the end. By the time the Q2 2026 review arrives, the HITL audit trail will have months of immutable records in Firestore and BigQuery, the SHAP explanations will have been running on every production inference, and the Model Cards will have been through at least one HITL-11 promotion review. The compliance posture is established through consistent architectural practice — not through documentation written the week before the review.
HITL-11 · Model Cards · Page 06 · EU AI Act Art. 9 · Horizon 1 sequencing · Page 03
Risk 04 — Integration Complexity
Salesforce and SAP integrations are where enterprise AI projects go to die
The AS touches two of the most complex enterprise systems in existence — Salesforce CPQ and SAP S/4HANA. Integration projects involving these systems are notorious for scope creep, timeline overruns, and late-discovered data model mismatches. A real-time ML inference layer that depends on both systems for input data and writes back to both for outputs has a high surface area for integration failure.
Architectural mitigation
Three design decisions directly address this. First, ADR-001 (Salesforce Developer Edition REST API) means the Salesforce integration is validated in a free, permanent sandbox before any production credentials are involved — integration failures are discovered in dev, not in front of the CCO. Second, the SAP write for RevRec AI is the only irreversible AS action, and it is protected by a mandatory HITL approval record ID as a parameter — the integration cannot be called without a documented human decision preceding it. Third, the Pub/Sub event fabric (ADR-006) decouples the AS from both systems — the AS reads events from topics, not from direct system APIs, which means a Salesforce or SAP outage does not cascade into AS failures.
ADR-001 · ADR-006 · Page 03 · SAP write guard · Page 05
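The SAP write guard could be sketched like this. The function name, the audit-store shape, and the `APPROVED` state label are assumptions for illustration, not the actual Page 05 integration code.

```python
from typing import Optional

class MissingApprovalError(RuntimeError):
    """Raised when the SAP write is attempted without a committed HITL approval."""

def post_to_sap(transaction: dict, hitl_approval_id: Optional[str],
                audit_store: dict) -> str:
    """Guard the only irreversible AS action: no approval record, no SAP post."""
    if not hitl_approval_id:
        raise MissingApprovalError("SAP write requires a HITL approval record ID")
    record = audit_store.get(hitl_approval_id)
    if record is None or record.get("state") != "APPROVED":
        raise MissingApprovalError(f"no committed approval for {hitl_approval_id}")
    # A real implementation would call the SAP client here; this sketch just
    # returns a receipt that binds the approval ID to the post.
    return f"posted:{transaction['id']}:approval:{hitl_approval_id}"

audit = {"HITL-04-000123": {"state": "APPROVED", "actor": "finance-controller"}}
print(post_to_sap({"id": "TXN-9"}, "HITL-04-000123", audit))
# -> posted:TXN-9:approval:HITL-04-000123
```

Making the approval record ID a mandatory parameter, rather than a convention, is what turns "a documented human decision precedes every SAP post" from a process promise into a type-level guarantee.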
Architecture Decision Record

One adoption decision worth documenting.

Only one adoption decision rises to the level of an ADR — the phased delivery approach itself. Every other commercial decision (pricing, packaging, sales motion) belongs to ClaraVis's commercial team, not the architecture record.

ADR-016 · Phase GTM Design
Phased adoption over big-bang full-suite deployment
Decision
Deploy the AS in three independent horizons, each delivering standalone value before the next begins. H1 delivers the compliance infrastructure and data foundation. H2 delivers the four core business modules on that proven foundation. H3 delivers the full suite on the established platform. The phased approach is the only viable deployment pattern given the constraints: EU AI Act Q2 2026 deadline, limited internal change management capacity, two complex legacy system integrations, and nine stakeholders with different adoption readiness levels.
Alternatives Rejected
Full AS suite deployed as a single programme: rejected for three reasons. (1) The EU AI Act Q2 2026 deadline cannot wait for the full programme to complete — the compliance infrastructure must be live well before the review, not at the same time as all eight modules. (2) Change management load across all nine stakeholders simultaneously is organisationally untenable — Finance, Legal, Field Service, and Sales cannot all be onboarded to new AI-assisted workflows in the same sprint. (3) If the programme stalls at any point, there is no partial value delivered — the entire investment is at risk. A single-programme failure is binary. A phased programme failure after H1 delivers a compliant, infrastructure-complete organisation regardless.
Consequences
The phased approach requires that the Horizon 1 infrastructure is designed to support H2 and H3 modules from the start — not retrofitted later. This is why the Terraform modules, VPC-SC perimeter, Vertex AI Feature Store, and Pub/Sub event bus are H1 deliverables even though they are not user-visible. The architecture absorbs the upfront design cost in exchange for eliminating the integration risk that big-bang deployments accumulate at the end. Every subsequent horizon is an extension of a proven platform, not a new system. The consequence of phasing is also a constraint on H1 scope: every H1 architectural decision must be evaluated against its ability to support H2 and H3 without redesign — this is the origin of ADR-002 (augment, never replace) and the shared Pub/Sub event fabric in ADR-006.
Accepted · Phase GTM Design
Portfolio Status
Adoption Architecture complete.
Eight pages. One system.

Pages 01–08 constitute a complete enterprise architecture portfolio: strategy and stakeholder analysis, TOGAF phases A–G, agent swarm design, ML engineering, infrastructure as code, and adoption architecture. Each page is a standalone artifact and a linked node in a traceable architecture record.
