The Autonomous Enterprise / Page 08

Go-to-Market Strategy — architecturally grounded.

The architecture is complete across Pages 01–07. This page answers the one remaining question: how does it reach the business? Not a sales playbook — an adoption strategy with architectural foundations. Every claim on this page traces back to an artifact already designed.

5 Buyer Personas · 3 Adoption Horizons · 4 Risk Mitigations · ADR-016
Scope boundary — this page
Go-to-market strategy is supporting context in an architecture portfolio — not the core deliverable. This page demonstrates that the architecture was designed with adoption in mind, not in isolation from it. What this page does not contain: pricing models, competitive analysis, sales playbooks, market sizing, or revenue projections. Those are commercial decisions that belong to ClaraVis's commercial team. What this page does contain: the adoption argument, the buyer map, the value-by-horizon view, and the adoption risks with architectural mitigations. Each of these is a natural extension of the architecture work already documented on Pages 02–07.
The Adoption Argument

Enterprise AI fails at the organisational boundary — not the technical one.

The architecture on Pages 01–07 is sound. The more important question is whether ClaraVis can actually adopt it — across nine stakeholders, three regulatory obligations, two legacy systems, and a Finance team that has been doing manual ASC 606 classification for fifteen years. The AE is designed to answer that question architecturally, not commercially.

Enterprise AI adoption fails when it asks too much of the organisation at once — when the compliance posture, the change management load, and the integration complexity all land simultaneously. The AE avoids this by design: each horizon delivers standalone value before the next begins. ClaraVis does not need to commit to all eight modules. It needs to commit to three — and see the results — before the next three become an easy decision rather than a leap of faith.
Derived from: TOGAF Phase F Migration Horizons (Page 03) · SAFe Solution Train PI cadence (Page 04) · Requirements C-05 (Page 02)
Why phased adoption works here
The EU AI Act deadline is a forcing function
ClaraVis has three production ML models with no EU AI Act compliance posture and a regulatory review in Q2 2026. Horizon 1 — the HITL framework, XAI layer, and Model Cards for those three models — delivers compliance before the review. That single outcome justifies the entire Horizon 1 investment without any discussion of the broader AE vision. Horizons 2 and 3 follow because Horizon 1 built the platform they run on.
Why it doesn't disrupt existing systems
Salesforce and SAP stay exactly where they are
ADR-002 (Page 03) is the adoption argument in architectural form: the AE augments existing systems — it never replaces them. Salesforce remains the system of record. SAP remains the ERP. The AE is the orchestration and intelligence layer that sits above them. ClaraVis's existing Salesforce and SAP implementations are not touched, re-platformed, or migrated. The adoption risk that kills most enterprise AI programmes — the system migration — does not exist here.
Why trust is built in
Every consequential decision has a human in the loop
The eleven HITL checkpoints on Page 04 are not a compliance feature — they are an adoption feature. The Finance Controller who has been classifying ASC 606 transactions manually for fifteen years will not trust an ML model on day one. The HITL framework gives them the control to verify the model's reasoning before it posts to SAP. Trust is earned checkpoint by checkpoint. The architecture is designed for that journey, not for the destination.
Buyer Map

Five personas. One table. Every column already designed.

This is not new content — it is a synthesis view connecting the stakeholder register on Page 02, the persona cards on Page 04, and the migration horizons from Page 03. The purpose is to make visible that every buyer has a specific pain addressed by a specific module delivered in a specific horizon.

The Compliance Officer — CCO · S-02 · Page 04, Persona 02
Primary pain (Page 02): 3 production ML models with no EU AI Act compliance posture. Q2 2026 regulatory review approaching.
AE module: HITL Framework · XAI Layer · Model Cards for 3 existing models. SHAP explanation per inference, HITL-04/06/08 checkpoints, immutable audit trail queryable on demand.
Horizon: H1
Observable outcome: Compliance dashboard green before Q2 2026 review. Audit trail queryable in under 30 seconds.

The Enterprise Architect — EA · S-08 · Page 04, Persona 01
Primary pain (Page 02): Architecture decisions made informally. No ADR index. Infrastructure state partially manual. Integration failures discovered at system test.
AE module: Terraform IaC · ADR Index · Architecture Explorer. Every resource in code. Every decision documented. Full environment reproducible from state file.
Horizon: H1
Observable outcome: terraform plan produces a complete infrastructure diff. ADR index accessible during any architecture review.

The Finance Controller — Head of Revenue Acctg · S-03 · Page 04, Persona 03
Primary pain (Page 02): Manual ASC 606 classification. 12-day month-end close. No ML-assisted recognition. Revenue restatements at quarter-end.
AE module: RevRec AI · Finance HITL (HITL-04/05) · SAP integration. ML classifies every transaction with SHAP explanation. Finance Controller approves before SAP posts.
Horizon: H2
Observable outcome: Classifications arrive continuously — not in month-end batch. Each approval takes 90 seconds with full SHAP context.

The Field Service Manager — Regional FSM · S-06 · Page 04, Persona 05
Primary pain (Page 02): Reactive maintenance. 6 disconnected regional asset systems. No cross-regional pattern visibility. €40M warranty over-reserve.
AE module: Asset IQ · Unified telemetry pipeline · Fleet anomaly detection (HITL-06/07). RUL predictions with SHAP sensor attribution. Cross-regional fleet anomaly alerts.
Horizon: H2
Observable outcome: Predictive alerts with 72+ hour lead time. Fleet-level patterns surfaced before they become recall conversations.

The Account Executive — Senior AE · S-04 · Page 04, Persona 04
Primary pain (Page 02): 3–5 day time-to-qualified-AE. Manual qualification and configuration. CPQ delays killing deal momentum.
AE module: CCAI Sales Agent · Salesforce integration · ContractGuard. Agent handles first 11 turns. Validated config and briefing doc prepared before AE engages.
Horizon: H3
Observable outcome: AE enters every conversation with qualification done, BOM validated, and Opportunity already created in Salesforce.
Note on the CTO and CISO: S-01 (CTO) and S-09 (CISO) are sponsors and approvers, not daily users. Their adoption requirements — architecture coherence, security posture, data sovereignty — are addressed structurally in Pages 03, 06, and 07. They do not appear in the buyer map because the AE is adopted for them by addressing their requirements in the design, not by giving them a dashboard.
Value by Horizon

Each horizon delivers before the next begins.

The three horizons from TOGAF Phase F (Page 03) are restated here through an adoption lens. Each card answers: what is delivered, who feels it first, and what makes the next horizon an easier approval than the current one.

Horizon 1
Months 1–3
Foundation & Compliance
Delivered
GCP infrastructure — Terraform, VPC-SC, IAM, CMEK, all security controls live
HITL framework — state machine + Firestore audit store deployed and tested
XAI layer — SHAP integrated on all 3 existing production ML models
Model Cards — Article 11 documentation complete for all 3 models
Unified asset telemetry pipeline — 6 regional systems on one schema
Salesforce Developer Edition — integration live and tested
Who feels it first: CCO (S-02) — compliance posture established before Q2 2026 review. Enterprise Architect (S-08) — entire infrastructure in code for the first time.
What makes H2 easier: The platform H2 builds on already exists and has been running in production. H2 modules are features on a proven infrastructure, not a new system.
Horizon 2
Months 4–8
Core Module Deployment
Delivered
ContractGuard — clause scoring + Legal HITL operational
RevRec AI — ASC 606 classification + Finance Controller HITL + SAP write
Asset IQ — RUL model + fleet anomaly detection in production
FinRisk Sentinel — streaming anomaly monitoring live
Vertex AI MLOps — pipelines, drift detection, Model Registry for all new models
App dashboards — UI for all four deployed modules
Who feels it first: Finance Controller — first month-end close with ML-assisted classification. Field Service Manager — first predictive maintenance alert with SHAP sensor context.
What makes H3 easier: Finance and Field Service have now experienced the HITL workflow. Trust in the models is established through observed decisions. H3 modules (Sales Agent, GreenOps) are lower-risk because the organisation has internalised the pattern.
Horizon 3
Months 9–18
Full AE Suite & Optimisation
Delivered
CCAI Sales Agent — full ADK deployment, Salesforce integration, AE escalation flow
GreenOps Platform — carbon-aware scheduling, ESG metrics, CSRD reporting
Strategy Dashboard — C-suite unified view across all 8 modules
Data Governance module — quality, lineage, schema validation
Cross-module optimisation — shared feature pipelines, joint drift monitoring
EU AI Act full compliance certification readiness
Who feels it first: Account Executive — first deal where the agent handled qualification and the AE entered with a complete briefing. CTO (S-01) — Strategy Dashboard provides the unified view that was previously 4 manual data pulls.
End state: The complete Autonomous Enterprise — all 8 modules operational, full audit trail, EU AI Act compliance posture, GCP infrastructure in code, and every architecture decision documented in the ADR index.
Adoption Risks

Four risks. Each with an architectural mitigation.

These are the risks that cause enterprise AI programmes to stall or fail — not the technical risks (those are addressed in the architecture) but the organisational and process risks that no diagram can eliminate on its own. Each one has a specific architectural design decision that reduces it.

Risk 01 — Change Management
The Finance team has been doing this manually for fifteen years
The Finance Controller and their team have fifteen years of muscle memory around manual ASC 606 classification. An AI system that replaces their judgment without warning will be rejected — not because the model is wrong, but because it bypasses the trust-building process entirely. This is the most common failure mode in enterprise AI adoption.
Architectural mitigation
The HITL framework (Page 04, §07) is the change management strategy encoded in architecture. RevRec AI never posts to SAP without Finance Controller approval — not in Horizon 2, not in Horizon 3, not ever. The model earns trust by presenting its reasoning (SHAP explanation, comparable transactions, confidence score) and waiting for human confirmation. The override rate metric (tracked in Vertex AI Monitoring) is the adoption progress indicator — as overrides decline, trust is growing. The transition from human-in-the-loop to human-on-the-loop happens at the Finance team's pace, not the project plan's.
HITL-04 · Page 04 · Vertex AI Monitoring · Drift Detection Page 06
Risk 02 — Data Quality
Six regional asset systems with no common schema and unknown data quality
The Asset IQ module depends on telemetry from 6 regional systems that have never been unified. The assumption that these systems produce clean, consistent, complete data is almost certainly wrong. Garbage-in, garbage-out is not a data engineering cliché — it is the operational reality of every OEM that has grown through acquisition. A model trained on clean demo data that fails on live production telemetry destroys trust faster than no model at all.
Architectural mitigation
The Horizon 1 data fabric deliverable (Page 03, Phase F) is specifically sequenced to address this before any ML model touches production data. The Pub/Sub ingestion pipeline with schema validation, the Data Governance module with quarantine logic, and the Feature Store lineage tags are all Horizon 1 — not Horizon 2. The Asset IQ RUL model is not deployed until the Vertex AI Feature Store can demonstrate that every feature value has a validated source event behind it. The data quality score threshold for inclusion in ML pipelines (documented in FRD-07, Page 04) must be met before training begins — not after.
FRD-07 · Page 04 · Data Governance · Feature Store lineage · Page 06
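The schema-validation-with-quarantine pattern can be sketched as a single routing function. The field names in `TELEMETRY_SCHEMA` are hypothetical stand-ins for the unified schema, and the "topic" targets are labels, not real Pub/Sub calls.

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical unified telemetry schema: field name -> required type.
TELEMETRY_SCHEMA: dict[str, type] = {
    "asset_id": str,
    "region": str,
    "sensor": str,
    "value": float,
    "ts_epoch_ms": int,
}

@dataclass
class RoutedEvent:
    event: dict[str, Any]
    target: str          # "features" or "quarantine"
    errors: list[str]

def validate_event(event: dict[str, Any]) -> RoutedEvent:
    """Route one telemetry event: clean events feed the Feature Store path,
    malformed ones go to a quarantine topic for data-quality triage."""
    errors = []
    for field_name, expected in TELEMETRY_SCHEMA.items():
        if field_name not in event:
            errors.append(f"missing field: {field_name}")
        elif not isinstance(event[field_name], expected):
            errors.append(f"{field_name}: expected {expected.__name__}")
    target = "quarantine" if errors else "features"
    return RoutedEvent(event=event, target=target, errors=errors)
```

The point of the quarantine branch is that a malformed event from one of the six regional systems becomes a triage ticket, never a training example.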
Risk 03 — Regulatory Approval Timeline
EU AI Act Annex III compliance cannot be certified on the day of the review
EU AI Act compliance is not a box to tick before a regulatory review — it is a documented posture that must be demonstrable at any point in time. A compliance dashboard that went green two days before the Q2 2026 review will not satisfy an auditor. The audit trail, the HITL records, the Model Cards, and the SHAP explanations need to have been accumulating for long enough to be credible as a systematic practice, not an emergency preparation.
Architectural mitigation
The Horizon 1 sequencing places the compliance infrastructure — HITL framework, XAI layer, Model Cards — at the start of the programme, not at the end. By the time the Q2 2026 review arrives, the HITL audit trail will have accumulated months of immutable records in Firestore and BigQuery, SHAP explanations will have been generated for every production inference, and the Model Cards will have been through at least one HITL-11 promotion review. The compliance posture is established through consistent architectural practice — not through documentation written the week before the review.
HITL-11 · Model Cards · Page 06 · EU AI Act Art. 9 · Horizon 1 sequencing · Page 03
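One way to make "months of immutable records" concretely verifiable is to chain each audit entry to the hash of its predecessor, so any retroactive edit invalidates everything after it. This is an illustrative pattern, not the documented Firestore/BigQuery design; `append_entry` and `verify_chain` are hypothetical names.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log: list[dict], payload: dict) -> list[dict]:
    """Append an audit entry whose hash covers the payload and the previous hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    log.append({
        "payload": payload,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return log

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash in order; one tampered entry breaks the whole chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"payload": entry["payload"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

An auditor running the equivalent of `verify_chain` over the exported trail gets cryptographic evidence that the records predate the review, rather than taking a green dashboard on faith.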
Risk 04 — Integration Complexity
Salesforce and SAP integrations are where enterprise AI projects go to die
The AE touches two of the most complex enterprise systems in existence — Salesforce CPQ and SAP S/4HANA. Integration projects involving these systems are notorious for scope creep, timeline overruns, and late-discovered data model mismatches. A real-time ML inference layer that depends on both systems for its input data and writes back to both for its outputs has a high surface area for integration failure.
Architectural mitigation
Three design decisions directly address this. First, ADR-001 (Salesforce Developer Edition REST API) means the Salesforce integration is validated in a free, permanent sandbox before any production credentials are involved — integration failures are discovered in dev, not in front of the CCO. Second, the SAP write for RevRec AI is the only irreversible AE action, and it requires a HITL approval record ID as a mandatory parameter — the integration cannot be called without a documented human decision preceding it. Third, the Pub/Sub event fabric (ADR-006) decouples the AE from both systems — the AE reads events from topics, not from direct system APIs, so a Salesforce or SAP outage does not cascade into AE failures.
ADR-001 · ADR-006 · Page 03 · SAP write guard · Page 05
Architecture Decision Record

One GTM decision worth documenting.

Only one GTM decision rises to the level of an ADR — the phased adoption approach itself. Every other commercial decision (pricing, packaging, sales motion) belongs to ClaraVis's commercial team, not the architecture record.

ADR-016
Phased adoption over big-bang full-suite deployment
Status
Accepted · Phase GTM Design
Context
ClaraVis has a hard regulatory deadline (Q2 2026 EU AI Act review), limited internal change management capacity, two complex legacy system integrations, and 9 stakeholders with different adoption readiness levels. Full suite deployment in a single programme would require all of these to be addressed simultaneously.
Decision
Deploy in three horizons. H1 delivers the compliance infrastructure and data foundation. H2 delivers the core business modules on that foundation. H3 delivers the full suite on the proven platform. Each horizon is independently valuable — ClaraVis can pause after H1 or H2 and retain the value already delivered.
Alternative considered
Full AE suite deployed as a single programme. Rejected because: (1) the EU AI Act deadline cannot wait for the full programme to complete; (2) change management load across all 9 stakeholders simultaneously is organisationally untenable; (3) if the programme stalls, there is no partial value delivered — the entire investment is at risk.
Consequences
The phased approach requires that the Horizon 1 infrastructure be designed to support H2 and H3 modules from the start — not retrofitted later. This is why the Terraform modules, VPC-SC perimeter, and Pub/Sub event bus are H1 deliverables even though they are not user-visible. The architecture absorbs the upfront design cost in exchange for eliminating the integration risk that big-bang deployments accumulate at the end. Every subsequent horizon is an extension of a proven platform, not a new system.
Next in the Portfolio
Strategy complete.
The modules follow.

Page 09 is the AE Suite index — a full-depth page for each of the eight modules. Each module page carries its own system context diagram, architecture overview, agent state machine, data flow diagram, ADRs, stakeholder rebuttals, and a scripted demo pathway. These are the pages a technical interviewer or client will open first.

PG 09 → AE Suite — 8 Module Pages (In Design)
System context · Architecture · State machines · ADRs · Rebuttals · Demo pathways
PG 07 ← Infrastructure & GCP Architecture — the platform this adoption strategy runs on