The Autonomous Enterprise · AE Suite · Module 01

CCAI Sales Agent
Conversational Qualification
— 11 turns. Then the AE takes over.

Every inbound sales inquiry is handled autonomously for the first 11 conversational turns — qualifying intent, validating product fit, configuring a BOM, and creating the Salesforce Opportunity. At turn 11, the Account Executive receives a complete briefing package: BANT scores, validated BOM, stated objections, and a suggested opening line. The AE enters every conversation ready, not catching up.

EU AI Act — Limited Risk · Art. 52 | Google ADK · CCAI · Dialogflow CX | Salesforce · Opportunity create | HITL-01 · Turn-11 escalation | A2A · Multi-agent · RAG | Commercial ART · H3 · PI-6
System Context — C4 Level 1

Conversation in. Qualified Opportunity out. AE always in the loop at turn 11.

The CCAI Sales Agent's context is defined by its conversation boundary. Everything before the agent is the prospect — reaching in via web chat, phone, or email. Everything after the agent is the Account Executive — receiving a complete briefing package and joining a conversation that has already been qualified, configured, and Opportunity-created in Salesforce.

Architecture — Multi-Agent Orchestration

One Orchestrator. Four sub-agents. A2A between every handoff.

The CCAI Sales Agent is an ADK multi-agent system — not a single LLM with a long system prompt. The Orchestrator routes each conversational turn to the most appropriate sub-agent based on detected intent. Sub-agents communicate via A2A protocol. The Salesforce Opportunity is created by the CPQ Agent at turn 6 — not at escalation — so the AE always has a CRM record to reference immediately.
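The routing rule above can be sketched in a few lines. This is an illustrative model only — the agent names and the `route_turn` helper are assumptions for the sketch, not the production ADK implementation:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    number: int
    intent: str   # e.g. "qualification", "configuration", "product_question"
    text: str

# Stand-ins for the four sub-agents; in the real system these sit behind
# A2A endpoints rather than local callables.
def qualification_agent(turn): return f"[qualify] {turn.text}"
def cpq_agent(turn):           return f"[configure] {turn.text}"
def knowledge_agent(turn):     return f"[answer] {turn.text}"
def handoff_agent(turn):       return f"[brief AE] {turn.text}"

ROUTES = {
    "qualification": qualification_agent,
    "configuration": cpq_agent,
    "product_question": knowledge_agent,
}

def route_turn(turn: Turn) -> str:
    # Turn 11 is a hard boundary: the Handoff Agent always takes over,
    # regardless of the detected intent on that turn.
    if turn.number >= 11:
        return handoff_agent(turn)
    # Unrecognised intents fall back to the Knowledge Agent.
    return ROUTES.get(turn.intent, knowledge_agent)(turn)
```

The point of the sketch is the shape, not the names: routing is a data-driven lookup plus one hard turn boundary, so sub-agents can be swapped without touching the Orchestrator.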

Conversation State Machine

Dialogue-driven states. Not data events.

The CCAI Sales Agent FSM is driven by turn progression and intent detection — not by data pipeline events. Every state transition depends on what the prospect said, what the agent scored, and how many turns have elapsed. HITL-01 is not a risk threshold — it is a conversational boundary reached at turn 11.
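A minimal sketch of that FSM, with illustrative state names (the production state machine has more states and guards than shown here):

```python
from enum import Enum, auto

class State(Enum):
    GREETING = auto()
    QUALIFYING = auto()
    CONFIGURING = auto()
    PROPOSING = auto()
    HANDOFF = auto()   # HITL-01 — the AE joins

def next_state(state: State, turn: int, bant_complete: bool,
               bom_valid: bool, wants_human: bool) -> State:
    # Early escalation: the prospect can request a human at ANY turn,
    # and turn 11 is an unconditional conversational boundary.
    if wants_human or turn >= 11:
        return State.HANDOFF
    if state is State.GREETING:
        return State.QUALIFYING
    if state is State.QUALIFYING and bant_complete:
        return State.CONFIGURING
    if state is State.CONFIGURING and bom_valid:
        return State.PROPOSING
    return state   # stay put until the dialogue-driven guard is satisfied
```

Note that every transition input is conversational — what the prospect said (`wants_human`), what the agent scored (`bant_complete`, `bom_valid`), and how many turns have elapsed — exactly as described above.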

Data Flow — Conversation Sequence

Eleven turns. One qualified Opportunity. AE enters at turn 11.

An inbound inquiry from the procurement officer at University Hospital Frankfurt — a new account, not München. The agent qualifies the lead in five turns, configures a BOM for an MRI-3T + service contract in four turns, proposes at turn 10, and escalates at turn 11 with a complete briefing package. The Opportunity was created in Salesforce at turn 6 — the AE already has a CRM record when they join.

HITL-01 — AE Briefing Package

The AE enters the conversation ready. Not catching up.

HITL-01 is architecturally different from every other HITL in the suite. The AE is not approving or rejecting a model decision — they are receiving a complete briefing package and joining a live conversation. The quality of the handoff determines the quality of the first impression. Summary cards at the top for the 30-second scan. Full transcript below for the exact words.

HITL-01 · Account Executive Briefing — University Hospital Frankfurt · OPP-2026-UKF-001
CCAI Sales Agent — HITL-01 · AE Briefing · OPP-2026-UKF-001 · Prospect waiting · Join now
BANT Qualification Summary
Budget
€2.5M
✓ Confirmed · H1 2026 approved
Authority
Head of Procurement
✓ Confirmed · signatory authority
Need
MRI-3T replacement
✓ Replacing 12yr Siemens MAGNETOM
Timeline
Q2 2026
✓ Confirmed · installation by June
Stated Objections & Interests
Concern: bore diameter (wants 70cm+)
Concern: installation disruption to cardiology suite
Interest: cardiac imaging software package
Interest: 5-year service with 4h response SLA
Mentioned: "comparing with Siemens and Philips"
Validated Bill of Materials
SKU | Description | Qty | Unit price
MRI-3T-70 | ClaraVis MRI-3T · 70cm wide-bore · cardiac package | 1 | €1,780,000
SVC-PREM-5Y | Premium Service Contract · 5yr · 4h response SLA | 1 | €420,000
SW-CARDIAC-V3 | Cardiac Imaging Software Suite · perpetual licence | 1 | €65,000
Total BOM value | | | €2,265,000
Suggested Opening Line
AI-SUGGESTED · REVIEW BEFORE SENDING
"Hello, this is [your name] from ClaraVis. Thank you for your interest in upgrading your cardiology suite — I've reviewed your requirements and I'd love to discuss how our MRI-3T wide-bore model addresses the 70cm bore specification you mentioned, and walk you through the cardiac imaging package options."
Conversation Transcript — All 11 Turns
CCAI SALES AGENT · T1
Hello, thank you for contacting ClaraVis. I'm an AI assistant and I'll help you explore our MRI and CT imaging solutions. Could you tell me more about what you're looking for?
PROSPECT · T1
Hi, we're University Hospital Frankfurt. We're looking at upgrading our cardiology MRI suite — currently running a 12-year-old Siemens system and we need to replace it.
CCAI SALES AGENT · T2
A cardiology MRI upgrade sounds like a significant project. To make sure I connect you with the right advisor, could you share your approximate budget range for this project?
PROSPECT · T2
We have €2.5 million approved for the first half of 2026.
CCAI SALES AGENT · T3–4 (condensed)
Confirmed authority (Head of Procurement, signatory), timeline (Q2 2026, installation by June), and need (replace Siemens MAGNETOM, cardiology focus). Noted comparison with Siemens and Philips.
PROSPECT · T5
One thing that's important — our previous MRI had a 60cm bore which made patient positioning difficult. We really need at least 70cm for our cardiac patients.
CCAI SALES AGENT · T6–9 (condensed)
Confirmed wide-bore requirement. Configured MRI-3T 70cm + 5yr Premium Service + Cardiac Software. Created Salesforce Opportunity OPP-2026-UKF-001. Answered bore diameter question (70cm standard on MRI-3T). Addressed installation timeline — 3-day planned outage with weekend scheduling option.
CCAI SALES AGENT · T10
Based on your requirements, I've put together a configuration: the MRI-3T Wide-Bore (70cm) with Premium 5-year service and the Cardiac Imaging Software Suite — total €2,265,000, within your approved budget. An advisor from ClaraVis will join us shortly to discuss the details and answer any technical questions.
— Turn 11: Agent hands off · AE briefing package generated · "An advisor from ClaraVis will join you shortly." —
→ Join Conversation Now
Open OPP-2026-UKF-001 in Salesforce
Prospect Detail
Organisation
University Hospital Frankfurt
Department
Cardiology · Radiology Dept
Contact
Head of Procurement
(name not yet captured)
Channel
Web chat · claravis.com/mri
Turns completed
11 of 11 · HITL-01 triggered
Salesforce Opportunity
OPP-2026-UKF-001
Created at turn 6 ✓
Stage: Qualification Complete
Competitive mention
Siemens · Philips
Active comparison
EU AI Act Art. 52 ✓
The prospect was informed at turn 1 that they were speaking with an AI assistant. Firestore HITL record committed before you joined.
Priority notes
Bore diameter is a hard requirement — do not suggest 60cm options.

Installation disruption concern — weekend scheduling is available and was mentioned.

Competitive: prospect specifically mentioned Siemens. Lead with cardiac imaging software differentiation.
Architecture Decision Records

Four CCAI Sales Agent decisions. Every alternative documented.

ADR-007 (restated)
Google ADK over LangGraph or CrewAI for multi-agent orchestration
LangGraph and CrewAI were evaluated as candidate multi-agent frameworks. Both are mature open-source options with large communities. Rejected for three reasons: (1) ADK is the native GCP multi-agent framework — it integrates directly with CCAI, Dialogflow CX, Vertex AI, and Cloud Run without adapter layers. LangGraph and CrewAI require custom integration code for each GCP service. In a portfolio that is 100% GCP, native integration is always preferable. (2) ADK's A2A protocol (ADR-008) is a standardised inter-agent communication contract — agent definitions are portable and replaceable without changing the Orchestrator. LangGraph's graph-based agent communication is framework-specific. (3) ADK has first-class support for CCAI voice channel integration — the same agent definition handles web chat and phone calls without modification. The CCAI Sales Agent needs to handle both channels for ClaraVis's inbound sales flow.
Accepted · Phase Agent Design · Page 05 ADR-007
ADR-008 (restated)
A2A protocol over direct HTTP for inter-agent communication
Direct HTTP calls between the Orchestrator and sub-agents were the initial design for simplicity. The Orchestrator would call the Qualification Agent's Cloud Run URL directly. Replaced with A2A for two reasons: (1) A2A defines a typed message contract between agents — the Orchestrator sends a structured task message and receives a structured response. Direct HTTP calls use untyped JSON payloads that are only validated at runtime. A typed contract means that if the Qualification Agent's response schema changes, the build fails at the A2A schema validation step rather than at runtime in a live conversation. (2) A2A enables agent replaceability — the Orchestrator doesn't know the implementation details of any sub-agent, only the A2A contract. The CPQ Agent can be upgraded, replaced with a different implementation, or A/B tested without changing the Orchestrator. This is the architectural property that makes the multi-agent system maintainable over time.
Accepted · Phase Agent Design · Page 05 ADR-008
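The "typed contract, fail at the boundary" property of ADR-008 can be illustrated with plain dataclasses. The message shapes below are assumptions for the sketch — the real A2A schema is richer — but the failure mode is the point: a malformed payload is rejected at the contract boundary, not mid-conversation:

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class QualificationTask:
    session_id: str
    turn: int
    prospect_message: str

@dataclass(frozen=True)
class QualificationResult:
    session_id: str
    bant: dict        # e.g. {"budget": "confirmed", "authority": "partial", ...}
    confidence: float

def validate(payload: dict, schema):
    # Reject unknown or missing fields before the message reaches an agent.
    expected = {f.name for f in fields(schema)}
    if set(payload) != expected:
        raise ValueError(f"payload keys {set(payload)} != contract {expected}")
    return schema(**payload)
```

With direct untyped HTTP, the same mismatch would surface as a runtime error inside a live conversation; with a contract check, it surfaces at the validation step.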
ADR-SA01 — CCAI Sales Agent specific
Turn-11 escalation boundary — not turn 5, not turn 20
The escalation boundary was the most debated design parameter. Turn 5 was proposed by the Sales team — get the AE involved earlier. Turn 20 was proposed by Operations — maximise agent qualification depth before AE involvement. Both were rejected. Turn 5 rejected because five turns is not enough to complete BANT qualification and begin product configuration — the AE would receive an incomplete briefing package and would need to re-qualify the prospect, which is exactly what the agent is designed to prevent. Turn 20 rejected because a prospect engaging with an AI agent for 20 turns before a human joins is a poor sales experience — it signals that ClaraVis is trying to minimise AE time at the cost of prospect relationship quality. Turn 11 is the boundary where: BANT qualification is complete (turns 2–5), a BOM has been configured and validated (turns 6–9), a pricing summary has been presented (turn 10), and the agent has established enough context for the AE to enter with genuine value. The AE at turn 11 is not interrupting the conversation — they are joining at the natural handoff point where technical discussion begins and commercial relationship starts.
Accepted · Phase Product Design · CCAI Sales Agent module
ADR-SA02 — CCAI Sales Agent specific
Salesforce Opportunity created at turn 6, not at HITL-01 escalation
The initial design created the Salesforce Opportunity at the moment of HITL-01 escalation — once the briefing package was generated and the AE was notified. Rejected and moved to turn 6 for two reasons: (1) The AE's first action after reading the briefing package is to open Salesforce. If the Opportunity doesn't exist yet, the AE wastes 30–90 seconds creating it manually — which is precisely the administrative task the agent is designed to eliminate. By creating the Opportunity at turn 6 (the start of configuration), the AE always has a CRM record that predates their involvement. (2) Salesforce pipeline data is more accurate when the Opportunity is created earlier — management reporting that counts Opportunities at configuration stage (turn 6) includes more leads than reporting that counts at AE join (turn 11). Earlier Opportunity creation improves pipeline visibility and forecast accuracy.
Accepted · Phase Agent Design · CCAI Sales Agent module
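ADR-SA02 reduces to a "create early, create once" rule. A sketch, with a stubbed CRM call and assumed field names — not the production Salesforce integration:

```python
def ensure_opportunity(session: dict, crm) -> str:
    # Create the Opportunity the first time the conversation enters
    # configuration (turn 6) — never at HITL-01 escalation, and never twice.
    if session.get("opportunity_id"):
        return session["opportunity_id"]
    opp_id = crm.create_opportunity(
        account=session["account"],
        stage="Qualification In Progress",
    )
    session["opportunity_id"] = opp_id
    return opp_id
```

Idempotency matters here: the CPQ Agent may be invoked on every configuration turn (6–9), but the CRM record must be created exactly once.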
Stakeholder Rebuttals

Six objections. Each with an architectural answer.

CTO · S-01
Why ADK over LangChain — the team already knows LangChain?
"Our engineering team uses LangChain. ADK is newer and less documented. The productivity cost of learning a new framework while building a production system is significant. Why not use what the team knows?"
Architectural response
ADR-007 documents this in full. The short answer is that the CCAI Sales Agent has three integration requirements that ADK handles natively and LangChain handles via custom adapters: CCAI (for phone and web chat), Dialogflow CX (for voice channel routing), and A2A protocol (for typed inter-agent communication). Each adapter is code that the team has to write, test, maintain, and debug. ADK provides all three out of the box. The team productivity argument runs both ways: the team saves time on integration work by using ADK, even if they spend time learning it. The documentation gap is real but temporary — ADK is Google's strategic multi-agent framework and documentation is actively expanding. For a production system that will be maintained for multiple years on a GCP-only architecture, native integration is the correct long-term choice over familiarity.
Evidence: ADR-007 · ADR-008 (A2A typed contracts) · Page 05 CCAI Sales Agent spec (CCAI + Dialogflow CX native integration)
CCO · S-02
Does a conversational sales agent fall under the EU AI Act?
"The EU AI Act has specific rules about AI in commercial interactions. Does the CCAI Sales Agent need to comply? And what are the obligations if it does — particularly around disclosure?"
Architectural response
The CCAI Sales Agent falls under EU AI Act Article 52 — the transparency obligation for AI systems that interact with natural persons. Article 52 applies to any AI system intended to interact with humans in real time, including conversational agents. The obligation is disclosure: the prospect must be informed that they are interacting with an AI system unless the context makes this obvious. The architecture satisfies this at turn 1 — the greeting message includes an explicit disclosure ("I'm an AI assistant"). This is not a compliance bolt-on — it is built into the Greeting state of the FSM. The CCAI Sales Agent does not fall under Annex III High Risk because it does not make consequential decisions about the prospect — it qualifies, configures, and briefs. The AE makes the consequential commercial decisions. The agent's EU AI Act classification is Limited Risk (Article 52), not High Risk (Annex III). The documentation obligation is lighter: the disclosure at turn 1 is the primary requirement, and the Firestore conversation audit log satisfies any retrospective transparency requirement.
Evidence: GREETING state (Art. 52 disclosure at T1) · Firestore conversation audit log · State machine HITL-01 (AE makes commercial decisions, not the agent)
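"Built into the FSM, not remembered by the prompt" can be made concrete. In this sketch (illustrative names, assumed session fields), the disclosure is prepended structurally and the turn counter refuses to advance without it:

```python
DISCLOSURE = "I'm an AI assistant"

def greeting_response(body: str) -> str:
    # The disclosure is part of the response template — no model output
    # can omit it, because the model never produces the full message alone.
    return f"Hello, thank you for contacting ClaraVis. {DISCLOSURE}. {body}"

def advance_turn(session: dict) -> int:
    # Turn 2 is unreachable unless the turn-1 response carried the disclosure.
    if session["turn"] == 1 and DISCLOSURE not in session["last_agent_message"]:
        raise RuntimeError("Art. 52 disclosure missing — refusing to advance")
    session["turn"] += 1
    return session["turn"]
```

The compliance guarantee is structural: a prompt regression cannot silently remove the disclosure, because the FSM guard would halt the conversation before turn 2.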
Account Executive · S-04
Will the agent lose deals by sounding robotic before I join?
"Medical imaging sales are relationship-based. A hospital procurement officer who feels like they've been handled by a chatbot for 11 turns is not going to be in a buying frame of mind when I join. How do you prevent the agent from damaging the relationship before I even get involved?"
Architectural response
The quality of the agent's conversational experience is a product design and prompt engineering question — not an architecture question. The architecture addresses the AE's concern in two ways. First, the early escalation branch means the prospect can request a human at any turn — if they say "I'd prefer to speak to a person", the agent immediately triggers HITL-01 without waiting for turn 11. The agent never holds a prospect who wants to talk to a human. Second, the Handoff Agent's suggested opening line is designed to make the AE's entry feel natural, not like a handoff — the suggestion references what the prospect actually said ("the 70cm bore specification you mentioned") which demonstrates continuity of conversation. The architectural commitment is that the agent never overpromises, never misrepresents, and always has an early escalation path. The conversational quality within those constraints is addressed by the Gemini system prompt and the qualification agent's tone guidelines — not by the architecture.
Evidence: Early escalation branch (any turn, prospect-triggered) · Handoff Agent (suggested opening line personalised to conversation) · ADR-SA01 (turn-11 is the natural handoff point, not an interruption)
Enterprise Architect · S-08
How does the agent handle product questions it doesn't know the answer to?
"The Knowledge Agent uses RAG over the product catalogue. But MRI systems have hundreds of technical specifications and configuration options. What happens when a prospect asks a question that isn't in the knowledge base — or asks something that requires clinical expertise to answer correctly?"
Architectural response
The Knowledge Agent has a confidence threshold of 0.72 — if the RAG retrieval score for a question falls below this threshold, the agent does not attempt to answer. Instead, it says: "That's a great question — let me make sure you get a precise answer to that. Our specialist will be able to address that when they join shortly." This response is honest, maintains trust, and does not risk a wrong technical answer that could later undermine the sales process. Questions below the confidence threshold are logged to the HITL-01 briefing package as "unanswered questions — requires AE or clinical specialist response." The AE sees exactly what the prospect asked and what wasn't answered, so they can address it immediately when they join. The architecture is designed to fail gracefully on unknown questions rather than hallucinate a confident-sounding wrong answer.
Evidence: Knowledge Agent spec (confidence threshold 0.72) · HITL-01 briefing package (unanswered questions field) · Page 05 circuit breaker (Knowledge Agent fallback on sub-threshold confidence)
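The sub-threshold fallback is a small gate. The 0.72 threshold and the fallback wording come from the spec above; the function shape and retrieval scoring are stand-ins for this sketch:

```python
CONFIDENCE_THRESHOLD = 0.72

FALLBACK = ("That's a great question — let me make sure you get a precise "
            "answer to that. Our specialist will be able to address that "
            "when they join shortly.")

def answer(question: str, retrieval_score: float, answer_text: str,
           briefing: dict) -> str:
    if retrieval_score < CONFIDENCE_THRESHOLD:
        # Log to the HITL-01 briefing so the AE sees exactly what the
        # prospect asked and what went unanswered.
        briefing.setdefault("unanswered_questions", []).append(question)
        return FALLBACK
    return answer_text
```

The design choice is that the gate sits outside the model: a low retrieval score short-circuits to the fallback before any generated answer can reach the prospect.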
CISO · S-09
Conversation transcripts — data residency and retention?
"Every conversation is stored in Firestore as a full transcript. These conversations may contain names, roles, budget figures, procurement strategies — commercially sensitive information from hospital procurement officers. What is the data residency policy and how long are transcripts retained?"
Architectural response
Firestore conversation state is stored in the europe-west3 region, within the VPC-SC perimeter. The transcript data is classified as Confidential — it contains personally identifiable information (prospect name, role) and commercially sensitive data (budget, procurement intent). Transcript retention is 90 days in Firestore — after which the transcript is deleted and only the structured HITL-01 record (BANT scores, BOM, Salesforce Opportunity reference) is retained in BigQuery with a 7-year audit retention. The prospect's name and contact details are written to the Salesforce Contact object at turn 6 (when the Opportunity is created) — from that point, Salesforce is the system of record for prospect PII, and the Firestore transcript holds only the conversation text. Access to Firestore conversation transcripts is restricted to the CCAI SA (read-write) and the Sales Operations role (read-only for training data purposes, requires Data Governance approval). The CCAI agent itself has no direct access to previous conversations from other sessions — each session starts fresh from the Firestore session_id.
Evidence: Page 07 Firestore IAM (CCAI SA + Sales Ops read-only) · Data classification: Confidential · 90-day transcript retention · Salesforce Contact as PII system of record · VPC-SC europe-west3
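The 90-day transcript rule can be sketched as a purge selector (field names assumed; the production job would run against Firestore, not an in-memory list):

```python
from datetime import timedelta

RETENTION = timedelta(days=90)

def purge_candidates(transcripts: list, now) -> list:
    # Session IDs whose transcripts have exceeded the 90-day retention.
    # The structured HITL-01 record in BigQuery (7-year audit retention)
    # is untouched by this purge — only the conversation text is deleted.
    return [t["session_id"] for t in transcripts
            if now - t["created_at"] > RETENTION]
```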
CFO · S-03
Cost per qualified lead vs current AE time on qualification calls?
"What does the CCAI Sales Agent actually save? If an AE currently spends 45 minutes on an initial qualification call, and the agent handles 11 turns of qualification, what is the cost saving per lead — and what's the infrastructure cost of the agent itself?"
Architectural response
Current state: an AE spends approximately 45 minutes on an initial qualification call, of which approximately 20 minutes is structured BANT qualification that produces the same information the agent captures in 11 turns. At €120/hour fully loaded AE cost, that's €40 of AE time per qualification call. The CCAI Sales Agent cost per conversation: Cloud Run at minimum 2 instances (~€0.04/hour each) + Gemini 1.5 Flash per turn (~€0.001 per turn × 11 turns = €0.011) + Gemini 1.5 Pro for briefing generation (~€0.08) + Vertex AI RAG (~€0.02) = approximately €0.12 per fully qualified conversation. At 200 inbound inquiries per month, the agent saves approximately €8,000 in AE qualification time at a running cost of approximately €24/month. The 333× cost ratio is not the right metric — the real value is that the AE time saved is redeployed to later-stage deal work where AE relationships matter. The CCAI Sales Agent does not replace AE relationships — it protects AE time for the conversations where relationships are built.
Evidence: Cloud Run pricing · Gemini 1.5 Flash per-token pricing · Page 08 GTM (AE time redeployment to H3 value) · ADR-SA01 (turn-11 boundary maximises agent value before AE joins)
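The cost arithmetic above, written out. Unit prices are the estimates quoted in the response, not measured figures, and the Cloud Run share per conversation is an assumed amortisation:

```python
# Per-conversation agent cost (€, estimates from the text)
GEMINI_FLASH_PER_TURN = 0.001   # Gemini 1.5 Flash, per conversational turn
TURNS = 11
GEMINI_PRO_BRIEFING = 0.08      # Gemini 1.5 Pro, briefing package generation
VERTEX_RAG = 0.02               # Vertex AI RAG retrieval
CLOUD_RUN_SHARE = 0.01          # assumed amortised Cloud Run share

cost_per_conversation = (GEMINI_FLASH_PER_TURN * TURNS
                         + GEMINI_PRO_BRIEFING + VERTEX_RAG + CLOUD_RUN_SHARE)

# Monthly comparison at 200 inbound inquiries
INQUIRIES_PER_MONTH = 200
AE_TIME_SAVED_PER_LEAD = 40.0   # € — 20 min of BANT work at €120/h fully loaded

monthly_agent_cost = cost_per_conversation * INQUIRIES_PER_MONTH   # ≈ €24
monthly_ae_saving = AE_TIME_SAVED_PER_LEAD * INQUIRIES_PER_MONTH   # €8,000
```

The roughly 330× ratio falls out of the arithmetic, but as the response notes, the redeployment of AE time is the real value, not the ratio itself.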
Demo Pathway

Three minutes. Eleven turns. AE enters with everything.

The demo uses the University Hospital Frankfurt scenario — a new account, not München, showing the agent working on a fresh lead with no Salesforce history. The goal is to show the full autonomous qualification and configuration flow, then the HITL-01 briefing package as the payoff.

00
Setup · 30s before
Open the CCAI web chat widget and the Salesforce Opportunities view
Open the ClaraVis demo web chat widget (simulating the claravis.com/mri page). Open Salesforce Opportunities filtered to University Hospital Frankfurt — show it's empty. This establishes the before state: no lead, no CRM record, no AE awareness. Have the HITL-01 AE briefing UI open in a second tab, minimised.
CCAI web chat widget · Salesforce Opportunities
01
Turn 1 · 0:00
Send the first message — show the AI disclosure
Type the first message in the web chat: "Hi, we're University Hospital Frankfurt. We're interested in upgrading our cardiology MRI suite." Watch the agent respond with the turn 1 greeting that includes the EU AI Act Article 52 disclosure: "I'm an AI assistant." Show the Cloud Run logs beginning: "Session created · prospect message received · routing to Orchestrator."
"The first thing the agent does is tell the prospect they're speaking with an AI. That's not a legal nicety — it's EU AI Act Article 52. And it's built into the FSM, not remembered by the prompt. You cannot get to turn 2 without the disclosure having been delivered."
CCAI web chat · Orchestrator · Cloud Run · Firestore session create
02
Turns 2–5 · 0:25
Run through the BANT qualification turns — watch the scores update
Continue the conversation, providing BANT signals: budget €2.5M (T2), authority Head of Procurement (T3), need replacing 12yr Siemens MAGNETOM (T4), timeline Q2 2026 (T5). After each turn, show the Firestore document updating in real time — the BANT scores changing from partial to confirmed. Point to the Qualification Agent routing in the Cloud Run logs.
"Every turn, the Qualification Agent extracts the BANT signal and updates the Firestore session state. Budget confirmed. Authority confirmed. Need confirmed. Timeline confirmed. By turn 5, all four BANT criteria are green — the agent moves to configuration automatically."
Qualification Agent · Firestore BANT scores · A2A routing
03
Turn 6 · 1:20
Mention bore diameter — watch CPQ Agent configure and create the Opportunity
Type: "One requirement — we need at least 70cm bore for our cardiac patients." Watch the CPQ Agent activate in logs. Show the Salesforce Opportunities view — the Opportunity OPP-2026-UKF-001 appears in real time as the CPQ Agent creates it. The BOM builds with the 70cm wide-bore model, 5yr service, cardiac software package.
"Turn 6 — the CPQ Agent creates the Salesforce Opportunity. Not at escalation — now, at the moment the configuration starts. Switch to Salesforce — the Opportunity is already there. The AE doesn't need to create anything. By the time they join the conversation, the CRM is already updated."
CPQ Agent · Salesforce Opportunity create · BOM validation
04
Turn 11 · 2:10 — the moment
Watch the turn-11 handoff — open the AE briefing package
The agent reaches turn 10 and delivers the pricing summary. At turn 11, the Handoff Agent triggers in the logs: "Turn 11 reached · generating briefing package · creating HITL-01." The web chat shows: "An advisor from ClaraVis will join shortly." Switch to the AE briefing tab — it's now populated with the full briefing package.
"This is what the Account Executive sees when they get the notification. Four BANT cards — all confirmed. Objections and interests as pills — bore diameter concern, competitive mentions. Validated BOM with pricing. The suggested opening line references exactly what the prospect said about bore diameter. And the full transcript below for the exact words. The AE joins this conversation knowing everything. They don't start from scratch. They continue from where the agent left off — at the technical discussion, where their expertise actually adds value."
Handoff Agent · Firestore HITL-01 · AE notification · Briefing package
05
Salesforce · 2:50
Show the Salesforce Opportunity with the full qualification data attached
Open the Salesforce Opportunity OPP-2026-UKF-001. Show: Stage (Qualification Complete), Amount (€2,265,000), BOM in the product lines, Contact (Head of Procurement, UKF), and the Activity log entry from the CCAI Sales Agent with the HITL-01 reference. This is the Salesforce record the AE and sales management will use for pipeline reporting.
"The Salesforce record has everything — BANT-qualified, BOM-validated, competitive mentions noted, objections logged. Sales management has a pipeline entry. The AE has a CRM record. The agent created all of this in 11 turns — about 8 minutes of conversation. The 3–5 day time-to-qualified-AE from Page 02 just became 8 minutes."
Salesforce Opportunity · Activity log · HITL-01 reference
AE Suite Navigation
Five modules complete.
Three remain.

CCAI Sales Agent closes the Commercial domain. The remaining three — Data Governance, GreenOps, Strategy Dashboard — complete the Operations and Platform domains.

PG 09
← AE Suite Index
All 8 modules · dependency matrix · demo pathway index
M-02
ContractGuard — Module 02
Contracts qualified by this agent flow here next