These are the objections that will be raised in the ClaraVis engagement — and the objections an interviewer will probe. Each rebuttal is grounded in a specific design decision already documented in the portfolio. The answer is never "trust us" — it is always "here is the architectural evidence."
CTO · S-01
Why does a classification model need to be this complex?
"We already have a Finance team that does this manually. Why do we need XGBoost, SHAP, Vertex AI Pipelines, a Feature Store, and a HITL framework just to classify a contract?"
Architectural response
The complexity is not in the model — XGBoost is a relatively simple algorithm. The complexity is in the compliance obligations. The EU AI Act requires a documented explanation for every high-risk AI decision. The FDA requires a change control record for every software modification that affects a production AI system. ASC 606 requires a defensible classification basis for every revenue recognition event. Each of those obligations drives a specific architectural component. The SHAP layer satisfies EU AI Act Art. 13. The HITL-11 pipeline gate satisfies FDA change control. The Feature Store lineage satisfies the ASC 606 documentation requirement. Remove any one of those components and a specific regulatory obligation becomes unmet. The architecture is as simple as the regulatory environment allows it to be.
Evidence in design: Architecture Principles P-03 (compliance as write-path constraints) · ADR-010 (XGBoost chosen for SHAP determinism, not arbitrary) · Page 03 ADM Phase D
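The one-obligation-per-component argument above can be sketched as a simple coverage check. This is illustrative only — the obligation and component names are taken from the response text, not from a real configuration schema:

```python
# Each regulatory obligation must be covered by exactly one architectural
# component; removing a component leaves its obligation unmet.
OBLIGATION_TO_COMPONENT = {
    "EU AI Act Art. 13 - documented explanation": "SHAP layer",
    "FDA change control - modification record": "HITL-11 pipeline gate",
    "ASC 606 - defensible classification basis": "Feature Store lineage",
}

def unmet_obligations(deployed_components):
    """List obligations left uncovered by the given set of components."""
    return sorted(ob for ob, comp in OBLIGATION_TO_COMPONENT.items()
                  if comp not in deployed_components)
```

Dropping "SHAP layer" from the deployed set immediately surfaces the Art. 13 obligation as unmet — which is exactly the rebuttal: the architecture cannot shrink without a named regulatory gap appearing.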
CCO · S-02
Will this satisfy the EU AI Act auditor?
"When the regulator asks me to demonstrate EU AI Act Annex III compliance for this system, what exactly do I show them? An architecture diagram isn't an audit trail."
Architectural response
The audit evidence package is queryable from BigQuery in under 30 seconds. For any inference: SELECT * FROM ae_audit.shap_explanations WHERE transaction_id = 'X' returns the exact SHAP values presented to the Finance Controller. SELECT * FROM ae_audit.hitl_events WHERE hitl_id = 'Y' returns the approval record — approver identity, decision, reason code, timestamp, and the SHAP explanation ID it references. The SHAP explanation is written to BigQuery before the HITL checkpoint is created — so the audit record cannot be retrospectively amended to match the decision. The Model Cards for the ASC 606 model are versioned in Vertex AI Model Registry alongside the model itself — the model the auditor can inspect is the same model that produced the classifications in the audit trail.
Evidence in design: HITL-04 specification · Page 04 · BigQuery shap_explanations schema · Page 06 Model Card · EU AI Act Art. 11/13/14 compliance checklist · RevRec AI Model Card
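The two audit queries above can be sketched as parameterized query builders. The dataset, table, and column names (ae_audit.shap_explanations, ae_audit.hitl_events, transaction_id, hitl_id) come from the portfolio design; the helper functions themselves are hypothetical, and in practice the SQL would be executed through the standard BigQuery client with these parameters bound:

```python
# Hypothetical builders for the two audit-evidence queries; each returns the
# SQL plus its (name, type, value) query parameters.

def shap_evidence_query(txn_id):
    """Query for the exact SHAP values shown to the Finance Controller."""
    sql = ("SELECT * FROM `ae_audit.shap_explanations` "
           "WHERE transaction_id = @txn_id")
    return sql, [("txn_id", "STRING", txn_id)]

def hitl_evidence_query(hitl_id):
    """Query for the approval record that references the SHAP explanation."""
    sql = ("SELECT * FROM `ae_audit.hitl_events` "
           "WHERE hitl_id = @hitl_id")
    return sql, [("hitl_id", "STRING", hitl_id)]
```

Parameterizing rather than interpolating the IDs keeps the audit query itself reproducible: the same two statements answer every auditor request, only the bound values change.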
Finance Controller
I don't trust a black-box model to classify revenue
"I've been classifying these contracts for twelve years. I know when something is a multi-element arrangement. I'm not going to let an AI decide this and just rubber-stamp it."
Architectural response
You won't be rubber-stamping it — you'll be reviewing it with more information than you have today. The HITL-04 queue shows you the model's classification, the five contract features that drove it, their direction and magnitude, and three comparable contracts you have personally approved in the past. If the model is wrong, you override it with a reason code, and that override becomes a training example for the next model version — the model learns from your judgment. The model does not replace your expertise. It pre-processes the contract, surfaces the most relevant features, and presents a recommendation. The decision — and the immutable record of it — remains yours. The SAP GL write will not execute without your explicit approval on this specific checkpoint.
Evidence in design: HITL-04 UI mockup above · SHAP chart · comparable transactions · override with reason code · ADR-R02 (HITL required for all classifications)
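The decision record described above can be sketched as an immutable structure in which an override carries a mandatory reason code and feeds the next training set. Field names and classification labels here are illustrative assumptions, not the HITL-04 schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)  # frozen: the record cannot be amended after the fact
class HitlDecision:
    transaction_id: str
    approver: str
    model_classification: str
    final_classification: str
    reason_code: Optional[str]  # mandatory on override, None on plain approval
    shap_explanation_id: str    # links the decision to the evidence shown

    @property
    def is_override(self):
        return self.final_classification != self.model_classification

def to_training_example(decision):
    """Only overrides become new labelled examples for the next model version."""
    if not decision.is_override:
        return None
    return {"transaction_id": decision.transaction_id,
            "label": decision.final_classification,
            "source": "controller_override:" + decision.reason_code}
```

The frozen dataclass mirrors the immutability claim: the record of the Controller's decision is written once and never edited.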
Enterprise Architect · S-08
How does this integrate with our Salesforce data model without customising it?
"We have a heavily customised Salesforce org. If RevRec AI needs specific fields or objects that don't exist in our standard Contract object, that's a Salesforce project on its own."
Architectural response
RevRec AI reads from Salesforce using the standard Contract and OpportunityLineItem objects — no custom fields, no custom objects, no Salesforce configuration changes required. The 18 contract features are all derivable from standard Salesforce fields: contract value from Amount, payment terms from PaymentTerms, SKU complexity from OpportunityLineItems count and service ratio. The Feature Store computation logic handles the feature engineering from standard fields. RevRec AI writes back a single Activity record on the Contract object using the standard Activity API — again, no custom fields. The Salesforce Developer Edition integration (ADR-001) was specifically designed and tested against the standard object model to validate this claim before the architecture was frozen.
Evidence in design: ADR-001 (SFDC Developer Edition validation) · Feature Store contract feature group definition · Page 06 · standard object read list · Page 05 agent tool manifest
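The standard-fields-only claim can be illustrated with a sketch of the derivation for a few of the 18 features. Amount and PaymentTerms are the standard Salesforce fields named above; the "Net 30"-style term parsing and the service-item flag are assumptions for illustration — the real logic lives in the Feature Store feature group definitions:

```python
def derive_contract_features(contract, line_items):
    """Derive example features from standard Contract / OpportunityLineItem data."""
    service_items = [li for li in line_items if li.get("is_service", False)]
    return {
        "contract_value": float(contract["Amount"]),
        # assumes terms arrive as a "Net 30"-style string
        "payment_terms_days": int(contract["PaymentTerms"].split()[-1]),
        "sku_count": len(line_items),
        "service_ratio": (len(service_items) / len(line_items)
                          if line_items else 0.0),
    }
```

Because every input is a standard field, the same derivation runs against any Salesforce org — which is what the ADR-001 Developer Edition test validated.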
CISO · S-09
Does the SAP write send ClaraVis financial data outside the VPC-SC perimeter?
"When RevRec AI posts to SAP, is that financial transaction data leaving our GCP VPC-SC boundary? SAP is on-premise — that data is crossing a network boundary I need to understand and approve."
Architectural response
The SAP write in the portfolio design uses a mock BigQuery table for the demo environment — SAP is on-premise and the production integration would be via an SAP BTP Event Mesh bridge or RFC middleware, both of which are standard enterprise integration patterns documented in ADR-R01. The data flow is directional: RevRec AI sends a classification result and performance obligation tags to SAP — it does not receive or store SAP financial data within the GCP environment. The HITL approval record and audit trail stay entirely within the VPC-SC perimeter in Firestore and BigQuery. The SAP write message carries only the classification output — contract ID, recognition type, performance obligation tags, and the HITL record ID. No sensitive financial transaction data from SAP is pulled into the GCP environment at any point in the RevRec AI workflow.
Evidence in design: Page 07 VPC-SC perimeter diagram · ADR-R01 (SAP write as output-only) · Page 05 agent tool manifest (sap.post_journal_entry parameters) · Page 03 ADR-006 (Pub/Sub event bus for decoupling)
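The output-only message can be sketched with an allow-list guard so that nothing beyond the four fields named above ever crosses the perimeter. The exact message schema is an assumption; the field names mirror the response text:

```python
import json

# The only fields permitted to leave the VPC-SC perimeter toward SAP.
ALLOWED_FIELDS = {"contract_id", "recognition_type",
                  "performance_obligation_tags", "hitl_record_id"}

def build_sap_write_message(contract_id, recognition_type,
                            obligation_tags, hitl_record_id):
    payload = {
        "contract_id": contract_id,
        "recognition_type": recognition_type,
        "performance_obligation_tags": obligation_tags,
        "hitl_record_id": hitl_record_id,
    }
    unexpected = set(payload) - ALLOWED_FIELDS
    if unexpected:  # fail closed rather than leak a field across the boundary
        raise ValueError("fields not allowed across the perimeter: %s" % unexpected)
    return json.dumps(payload)
```

Failing closed at serialization time turns the CISO's boundary question into a testable property of the code path rather than a diagram annotation.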
CFO · S-03
Why does every single classification need human approval?
"If the model is right 94% of the time with confidence above 0.90, why are we paying a Finance Controller to approve thousands of routine transactions? Auto-approve the high-confidence ones and use HITL for the edge cases."
Architectural response
ADR-R02 documents this decision in full — the short answer is that "every classification has a human approval record" is a cleaner compliance posture than "every classification below threshold X has a human approval record." The threshold choice, the threshold-setting methodology, and the monitoring for threshold drift all become additional compliance obligations. The Finance Controller's actual workload is lower than it sounds: the HITL-04 queue presents the classification with a SHAP chart and comparable transactions — a straightforward approval takes approximately 90 seconds. The volume of contracts ClaraVis signs per month is not so large that this creates an operational bottleneck. And the override decisions — which would not exist under auto-approval — are the highest-value training signal for improving the model over time. The cost of auto-approving high-confidence classifications is not just regulatory: it removes the feedback loop that makes the model better.
Evidence in design: ADR-R02 · HITL-04 UI mockup (90-second approval flow) · Page 06 Model Card (override rate metric as quality signal) · Page 06 concept drift detection (override rate as drift indicator)
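The override-rate-as-drift-indicator idea above can be sketched as a rolling monitor. The window size and alert threshold here are illustrative, not values from the Page 06 design:

```python
from collections import deque

class OverrideRateMonitor:
    """Rolling override rate over recent HITL decisions as a drift signal."""

    def __init__(self, window=200, alert_rate=0.10):
        self._decisions = deque(maxlen=window)
        self._alert_rate = alert_rate

    def record(self, was_override):
        """Record one HITL decision; True means the drift alert fires."""
        self._decisions.append(bool(was_override))
        if len(self._decisions) < self._decisions.maxlen:
            return False  # not enough history for a stable rate yet
        rate = sum(self._decisions) / len(self._decisions)
        return rate >= self._alert_rate
```

This signal only exists because every classification passes through HITL — auto-approving the high-confidence band would blind the monitor to exactly the drift it is meant to catch.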