
Architecture Decision Records.

Six binding decisions — each encoding a constraint that shaped the system's structure. Written at design time, stored in version control, referenced by every agent that touches financial data. Not post-hoc documentation. The architecture is the argument.

ADR Count: 6 Records
Status: 5 Accepted · 1 Under Review
Scope: IC · Treasury · AP · Infrastructure
Compliance Hooks: EU AI Act · GDPR · PIPEDA · ISO 20022
  • ADR-AF-01 · SAP Write Gate
  • ADR-AF-02 · IC Anomaly Ensemble
  • ADR-AF-03 · Cash Forecast Ensemble
  • ADR-AF-04 · Agent Service Topology
  • ADR-AF-05 · PIPEDA Dataset Separation
  • ADR-AF-06 · ISO 20022 Bank Connectivity
ADR-AF-01  ·  Intercompany Write Gate  ·  Authority: Principal Architect  ·  Supersedes: None

SAP Write Requires Committed HITL Approval Record ID as Mandatory Parameter

✓ Accepted
[Diagram] Write-Gate Flow — IC Journal Correction: the IC agent proposes a correction; a HITL review step has the controller inspect the correction payload; an approval record is committed to BigQuery and an approval_id returned; the SAP write gate validates the mandatory approval_id parameter and rejects missing or stale IDs; only then is the journal posted to SAP ERP. Any write without a valid approval_id is blocked.
Context
The intercompany reconciliation agent proposes journal corrections in SAP when it detects a mismatch between entity-pair balances. Without a hard gate, a misconfigured agent, a replay attack, or a latent bug in the correction logic could write directly to the ledger without a human having ever reviewed the entry. In an EU AI Act Annex III context, automated financial decision-making requires a verifiable human approval step — an application-level flag is insufficient because it can be bypassed by direct API call, environment variable override, or deployment misconfiguration.
Decision

Every SAP BAPI or OData write that modifies the intercompany ledger must carry a committed approval record ID as a mandatory parameter. The gate validates this ID against the HITL audit table in BigQuery before executing. An absent, expired, or already-consumed ID causes the write to be rejected at the gate — no exception path exists. The approval record is written to BigQuery and considered committed only after a named controller has approved the specific correction payload. The record ID is single-use: a second write attempt with the same ID is rejected.
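
A minimal sketch of the gate-side check, assuming a BigQuery approval table named hitl_approvals with approval_id, payload_hash, approved_at, consumed, and consumed_at columns. The dataset, column names, and expiry window are illustrative, not the production schema:

```python
# Hypothetical gate check; table, columns, and TTL are illustrative assumptions.
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

APPROVAL_TABLE = "af_hitl.hitl_approvals"   # assumed dataset.table
APPROVAL_TTL = timedelta(hours=24)          # assumed validity window


class WriteGateError(Exception):
    """Raised when the write gate rejects a posting."""


def consume_approval(client: bigquery.Client, approval_id: str, payload_hash: str) -> None:
    """Atomically consume a committed HITL approval record, or reject the write.

    The UPDATE only succeeds if the record exists, matches the correction payload,
    is within its validity window, and has not been consumed before (single-use).
    """
    cutoff = datetime.now(timezone.utc) - APPROVAL_TTL
    job = client.query(
        f"""
        UPDATE `{APPROVAL_TABLE}`
           SET consumed = TRUE, consumed_at = CURRENT_TIMESTAMP()
         WHERE approval_id = @approval_id
           AND payload_hash = @payload_hash
           AND approved_at >= @cutoff
           AND consumed = FALSE
        """,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ScalarQueryParameter("approval_id", "STRING", approval_id),
                bigquery.ScalarQueryParameter("payload_hash", "STRING", payload_hash),
                bigquery.ScalarQueryParameter("cutoff", "TIMESTAMP", cutoff),
            ]
        ),
    )
    job.result()
    if job.num_dml_affected_rows != 1:
        # Absent, stale, mismatched, or already-consumed ID: no exception path.
        raise WriteGateError(f"rejected: no valid approval record for {approval_id}")
```

In this sketch the connector would call consume_approval immediately before issuing the BAPI/OData write; a WriteGateError blocks the posting, and there is no fallback path.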

Alternatives Considered
Application-level flag check: A boolean field on the agent's write request (e.g. hitl_approved: true). Rejected because the flag is set by the same application that requests the write. There is no structural guarantee the flag was set as a result of a real human review — it can be hardcoded in a test environment and promoted to production. The gate must be external to the agent process and must validate against a durable record.
Row-level SAP authorisation roles: Assigning the agent's service account the SAP role required to post journals, then relying on role governance. Rejected because role-based access governs who can write, not whether a specific write has been approved. The same service account would be authorised to post corrections both with and without prior HITL review — the approval constraint is not encoded in the authorisation model.
Pre-write webhook to HITL service: Call the HITL service synchronously before posting; approve inline. Rejected because inline approval collapses the time gap between proposal and approval to near-zero, defeating the purpose of human review. Controllers need time to inspect the payload, cross-reference the entity graph, and verify against the IC agreement register — a synchronous call makes that inspection window a latency parameter, not a structural requirement.
Consequences
  • No journal correction can reach SAP without a prior committed approval record — structurally enforced, not policy-enforced.
  • The approval record ID creates a tamper-evident link between every SAP posting and its HITL audit trail entry.
  • Replay protection via single-use IDs prevents duplicate posting from agent retries or infrastructure failures.
  • Infrastructure failure between gate validation and SAP write confirmation requires a new HITL approval cycle — by design. The connector distinguishes gate-passed vs. SAP-confirmed states; a gate-passed-but-SAP-failed write is routed to a dead-letter queue for manual resolution, not silently discarded.
  • EU AI Act Art. 14 (human oversight) compliance is structural: the approval record is the evidence artifact for regulatory inspection.
  • Adds one round-trip latency to the correction workflow. Acceptable given the batch-processing nature of month-end reconciliation.
Status
Accepted. Implemented in the IC agent's SAP connector module. Gate enforced at the connector layer, not the agent layer — agent cannot self-issue an approval record ID.
ADR-AF-02  ·  IC Anomaly Model Selection  ·  Authority: Principal Architect  ·  Supersedes: None

Isolation Forest + XGBoost Ensemble for IC Anomaly Detection

✓ Accepted
[Diagram] Ensemble Architecture — IC Anomaly Detection: IC features (entity_pair, amount, historical_delta) feed an Isolation Forest (unsupervised anomaly score, no labelled data needed) and an XGBoost classifier (supervised on HITL labels, SHAP-explainable); a weighted combiner fuses the scores against the 0.72 escalation threshold; a SHAP TreeExplainer (deterministic, XGBoost only, Art. 13) produces the explanation sent with the score to the controller UI; inputs, score, and SHAP output are written to the immutable BigQuery audit log.
Context
The intercompany anomaly detection model must satisfy two constraints simultaneously: it must produce explainable outputs at inference time (EU AI Act Article 13 — transparency for high-risk AI in financial services) and it must perform well on the entity-pair mismatch detection task where labelled training data is sparse. Pure deep learning approaches satisfy neither constraint without significant additional engineering overhead.
Decision

Deploy an ensemble of Isolation Forest (unsupervised anomaly scoring — no labelled data required at initialisation) and XGBoost classifier (supervised refinement trained on HITL override labels as they accumulate). Scores are fused via a weighted combiner. Any fused score above the 0.72 threshold triggers SHAP TreeExplainer — deterministic, not stochastic — which is applied to the XGBoost component only. Isolation Forest anomaly scores are surfaced as raw input features to the XGBoost layer, not independently explained via SHAP. The XGBoost SHAP output, which encodes the contribution of all features including the Isolation Forest score, constitutes the explanation payload stored in the HITL audit trail and presented to the controller. This scoping ensures SHAP determinism is preserved and that the Art. 13 explanation reflects the decision-making component, not the unsupervised scoring layer where SHAP stability guarantees do not hold.
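
A sketch of the scoring path under these constraints, assuming scikit-learn's IsolationForest, an XGBClassifier trained with the Isolation Forest score appended as a feature, and the shap package. The fusion weights are placeholders subject to the quarterly calibration noted in the consequences:

```python
# Illustrative scoring path; feature layout, weights, and threshold handling are placeholders.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest
from xgboost import XGBClassifier

ESCALATION_THRESHOLD = 0.72        # fused-score escalation gate (ADR-AF-02)
IF_WEIGHT, XGB_WEIGHT = 0.4, 0.6   # assumed fusion weights, recalibrated quarterly


def score_batch(iso: IsolationForest, xgb: XGBClassifier, features: np.ndarray):
    """Fuse unsupervised and supervised scores; explain only the XGBoost component."""
    # Isolation Forest: decision_function is higher for normal points, so negate
    # and rescale to [0, 1] where higher means more anomalous.
    raw = -iso.decision_function(features)
    if_score = (raw - raw.min()) / (np.ptp(raw) + 1e-9)

    # The IF score is appended as an input feature to the supervised model.
    xgb_input = np.column_stack([features, if_score])
    xgb_score = xgb.predict_proba(xgb_input)[:, 1]

    fused = IF_WEIGHT * if_score + XGB_WEIGHT * xgb_score

    # SHAP is scoped to the XGBoost component only; TreeExplainer is deterministic.
    explainer = shap.TreeExplainer(xgb)
    explanations = {}
    for i in np.where(fused > ESCALATION_THRESHOLD)[0]:
        explanations[int(i)] = explainer.shap_values(xgb_input[i : i + 1])
    return fused, explanations
```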

Alternatives Considered
Pure deep learning (LSTM autoencoder): Rejected primarily on EU AI Act Article 13 grounds. LSTM autoencoders produce reconstruction error as the anomaly score — explaining which input features drove the error requires gradient-based attribution methods (Integrated Gradients, SHAP DeepExplainer) that are non-deterministic across runs on GPU hardware. A regulator asking for the same explanation twice would receive a different answer. For IC corrections that modify the legal entity ledger, explanation determinism is a hard requirement.
Single XGBoost classifier (no Isolation Forest component): Rejected because at system initialisation, labelled IC anomaly data does not exist — HITL labels accumulate only after the system is live. A pure supervised model cannot be deployed without a cold-start solution. Isolation Forest handles the cold-start period without labels; XGBoost weight increases as HITL labels accumulate. The ensemble structure allows graceful degradation to unsupervised operation if the supervised component underperforms.
Statistical process control (Z-score over rolling baseline): Rejected because IC mismatch patterns are not normally distributed. Entity pairs with seasonal intercompany volume (quarter-end sweeps, annual recharges) produce legitimate variance that triggers Z-score alerts. The SPC approach has no mechanism to condition on entity-pair history, agreement register, or cross-entity FX timing — all of which the XGBoost feature set encodes. False positive rate in back-testing was 3× higher than the ensemble.
Consequences
  • SHAP TreeExplainer determinism satisfies EU AI Act Article 13 — SHAP is applied to the XGBoost component only. Isolation Forest scores are input features to XGBoost, not independently explained. The same input produces the same explanation on every run, independent of hardware.
  • Isolation Forest component provides production-ready anomaly detection from day one without waiting for labelled data.
  • XGBoost component improves over time as HITL override labels are fed back through the retraining pipeline.
  • Ensemble fusion logic must be maintained — weight calibration reviewed quarterly or when override rate exceeds the drift threshold.
  • SHAP computation adds ~40ms per inference above the 0.72 threshold. Acceptable for IC reconciliation which is not a real-time workload.
Status
Accepted. Ensemble in production. SHAP explanations stored in the HITL audit trail. Weight calibration scheduled for Q2 2026 after the first full quarter of HITL label accumulation.
ADR-AF-03  ·  Cash Forecasting Model Selection  ·  Authority: Principal Architect  ·  Supersedes: None

LightGBM + Prophet Ensemble for Cash Forecasting

✓ Accepted
[Diagram] Ensemble Forecast Architecture — 13-Week Rolling Cash Horizon: calendar signals (payroll, quarter-end, public holidays, VAT) feed Prophet (explicit seasonality model, calendar regressors built in); structural features (FX rates, IC sweeps, AP aging, TMS balances) feed LightGBM; a stacking blender (meta-learner trained on holdout, MAPE-weighted blend) produces the 13-week forecast with P10/P50/P90 bands per entity and consolidated; SHAP attribution gives per-feature contribution at each forecast horizon; the treasury dashboard (CFO/Treasurer view) alerts on liquidity covenant breach.
Context
Cash forecasting for a mid-market European group involves two structurally different signal types: calendar-driven seasonality (payroll runs on fixed dates, VAT settlement on quarter-end, public holidays that delay banking) and structural features that require cross-referencing (FX rates, intercompany sweep history, AP aging, credit line utilisation). A single model architecture optimised for one signal type performs poorly on the other. Additionally, the training dataset spans 36 months of entity-level cash flow — insufficient for LSTMs in financial time series, which require substantially longer histories to learn long-range seasonal dependencies without overfitting.
Decision

Deploy a Prophet + LightGBM stacking ensemble. Prophet handles calendar seasonality explicitly through its native regressor interface — payroll dates, quarter-end, public holiday calendars per jurisdiction are registered as regressors, not inferred from the time series. LightGBM handles structural features: FX rates, IC sweep history, AP aging, credit line data. A meta-learner trained on holdout data determines the MAPE-weighted blend ratio per forecast horizon. Output is a 13-week rolling forecast with P10/P50/P90 uncertainty bands per entity and consolidated.
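
A condensed sketch of the stacking idea, with a simple inverse-MAPE weight standing in for the meta-learner. Column names, regressors, and hyperparameters are illustrative rather than the production configuration:

```python
# Illustrative blend; column names and regressors are placeholders, not the production feature set.
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from prophet import Prophet


def fit_forecasters(history: pd.DataFrame, structural_cols: list[str]):
    """Fit the calendar model (Prophet) and the structural model (LightGBM quantiles).

    `history` is assumed to carry: ds (date), y (net cash flow), the structural
    feature columns, and an is_payroll_day calendar regressor.
    """
    prophet = Prophet(weekly_seasonality=True, yearly_seasonality=True)
    prophet.add_country_holidays(country_name="DE")   # per-jurisdiction calendars assumed
    prophet.add_regressor("is_payroll_day")
    prophet.fit(history[["ds", "y", "is_payroll_day"]])

    # One LightGBM model per quantile gives the P10/P50/P90 bands.
    lgbm = {
        q: LGBMRegressor(objective="quantile", alpha=q, n_estimators=400).fit(
            history[structural_cols], history["y"]
        )
        for q in (0.10, 0.50, 0.90)
    }
    return prophet, lgbm


def mape(actual: np.ndarray, pred: np.ndarray) -> float:
    return float(np.mean(np.abs((actual - pred) / actual)))


def prophet_blend_weight(holdout_y: np.ndarray, prophet_p50: np.ndarray, lgbm_p50: np.ndarray) -> float:
    """Inverse-MAPE share of the Prophet forecast in the P50 blend (meta-learner stand-in)."""
    e_prophet, e_lgbm = mape(holdout_y, prophet_p50), mape(holdout_y, lgbm_p50)
    return (1 / e_prophet) / (1 / e_prophet + 1 / e_lgbm)
```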

Alternatives Considered
LSTM sequence model: Rejected on training data size. LSTMs for financial time series require substantially more than 36 months of entity-level data to generalise across the full seasonal cycle without memorising noise. Veldtmann's entity-level cash flow history is 36 months per entity at monthly granularity — insufficient at the daily granularity required for a 13-week rolling forecast. Additionally, LSTM hyperparameter sensitivity (sequence length, hidden units, dropout) requires a tuning infrastructure disproportionate to the dataset size. Prophet's additive model converges reliably on datasets of this size.
Pure Prophet (no LightGBM component): Rejected because Prophet's regressor interface cannot encode non-linear interactions between structural features. The relationship between FX rate movements and IC settlement timing is non-linear and conditional on entity-pair agreement structure — a pattern LightGBM captures naturally but Prophet's additive decomposition cannot represent without significant feature engineering that would replicate LightGBM's function.
Pure LightGBM (no Prophet component): Rejected because LightGBM requires calendar seasonality to be encoded as hand-crafted features (day-of-month, is-quarter-end flags, jurisdiction-specific holiday dummies). This feature engineering must be maintained as calendar rules change — payroll schedule changes, new public holidays, VAT payment date changes. Prophet's regressor interface accepts calendar objects directly, decoupling the model from feature engineering maintenance.
Consequences
  • LightGBM's quantile regression mode provides P10/P50/P90 forecast bands per entity and consolidated — the Treasury dashboard surfaces uncertainty range, not a point forecast, to avoid misleading CFO/Treasurer users.
  • Prophet component handles calendar seasonality without bespoke feature engineering — calendar updates propagate through the regressor interface.
  • LightGBM component captures non-linear structural feature interactions that Prophet's additive model cannot represent.
  • Stacking blend requires a holdout evaluation set — 6 months held out from the 36-month history for meta-learner training.
  • SHAP attribution via LightGBM's native TreeExplainer provides per-feature contribution at each forecast horizon — satisfies Art. 13 transparency for treasury decisions.
  • Two model retraining schedules must be managed independently: Prophet quarterly (calendar drift), LightGBM monthly (structural signal drift).
Status
Accepted. Ensemble deployed. Initial blend ratio: Prophet 0.55 / LightGBM 0.45, calibrated on holdout. To be recalibrated after six months of production MAPE data.
ADR-AF-04  ·  Agent Service Topology  ·  Authority: Principal Architect  ·  Supersedes: None

Three Separate Cloud Run Services over a Single Monolithic Agent

✓ Accepted
[Diagram] Agent Service Topology — Independent Scaling and Failure Domains: GCP Pub/Sub event ingress, one topic per domain, routes to three Cloud Run services: the AP exception agent (min=0, max=80; 10× volume peak), the IC recon agent (min=0, max=8; month-end burst only), and the treasury agent (min=1, max=4). Each sits inside its own failure domain boundary with an independent SLA and deploy cadence; an AP failure does not affect IC or Treasury. The HITL service, SAP connector, and immutable BigQuery audit log are shared but independently deployed. Volume ratio: AP = 10× IC.
Context
Three agents handle distinct finance domains: AP exception processing (high volume, invoice-driven, 10× the event rate of IC), intercompany reconciliation (low volume, month-end burst), and treasury visibility (continuous, low latency). Their scaling requirements, failure domains, HITL SLA commitments, and deployment cadences are structurally different. A single service cannot satisfy all three sets of requirements without either over-provisioning (expensive) or creating a single failure domain that takes down all three agents simultaneously.
Decision

Deploy three separate Cloud Run services — one per agent domain — each with independent autoscaling configuration, independent HITL SLA, independent deployment pipeline, and independent failure boundary. Services share the HITL service, SAP connector, and audit log as separately deployed infrastructure. Pub/Sub topics route events to the correct agent service. No cross-service dependencies at the application layer.
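
A sketch of how the per-service scaling envelopes might be parameterised by the three deployment pipelines. Service names, topic names, image paths, and region are placeholders:

```python
# Illustrative deploy parameterisation; the real setup is three independent CI/CD jobs.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentService:
    name: str
    topic: str          # Pub/Sub topic whose push subscription targets this service
    min_instances: int
    max_instances: int


SERVICES = [
    AgentService("af-ap-exception-agent", "af-ap-events", 0, 80),
    AgentService("af-ic-recon-agent", "af-ic-events", 0, 8),
    AgentService("af-treasury-agent", "af-treasury-events", 1, 4),
]


def deploy_command(svc: AgentService, image: str, region: str = "europe-west3") -> str:
    """Render the gcloud command each pipeline would run for its own service."""
    return (
        f"gcloud run deploy {svc.name} --image {image} --region {region} "
        f"--min-instances {svc.min_instances} --max-instances {svc.max_instances} "
        f"--no-allow-unauthenticated"
    )
```

The point of the sketch is that the scaling envelope is per-service configuration; each pipeline deploys only its own service, and the Pub/Sub push subscription per topic is created separately.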

Alternatives Considered
Single LangGraph graph with domain subgraphs: Rejected because a failure in one subgraph propagates to the graph executor. LangGraph's graph runtime is a single process — an unhandled exception in the AP subgraph kills the runtime serving IC and Treasury concurrently. During month-end, this means a high-volume AP exception (the most likely failure point) can take down the IC reconciliation agent at the moment it is most needed. Failure isolation requires process isolation, not code modularity.
Single Cloud Run service with internal routing: A single service with route handlers per agent domain. Rejected because autoscaling is per-service in Cloud Run. A spike in AP invoice volume scales up instances that also handle IC and Treasury workloads, incurring unnecessary cost. Conversely, IC month-end bursts compete for instances with steady-state AP volume. Independent scaling requires independent services.
GKE microservices: Kubernetes-based deployment per agent. Rejected at this scale — three agents with shared infrastructure do not justify the operational overhead of a Kubernetes cluster (control plane cost, node pool management, network policy maintenance). Cloud Run provides equivalent isolation with substantially lower operational surface area. GKE is reconsidered if agent count exceeds eight or if the mesh routing pattern requires service-mesh capabilities unavailable in Cloud Run.
Consequences
  • AP agent scales to max=80 instances during invoice volume peaks without affecting IC or Treasury service capacity.
  • A failure in any single agent service does not affect the other two — failure domain isolation is structural.
  • Three independent deployment pipelines enable zero-downtime updates to any agent without coordinating with others.
  • Three independent HITL SLAs can be negotiated with finance operations — AP may tolerate higher latency than Treasury.
  • Shared infrastructure (HITL service, SAP connector, audit log) must be deployed and maintained independently — added operational surface, offset by the isolation benefit.
Status
Accepted. Three Cloud Run services deployed. AP: min=0, max=80. IC: min=0, max=8. Treasury: min=1, max=4. Pub/Sub routing configured with separate topics per domain.
ADR-AF-05  ·  PIPEDA Compliance Pattern  ·  Authority: Principal Architect  ·  Supersedes: None

PIPEDA Compliance via Dataset Separation over Field-Level Tagging

⬡ Under Review
[Diagram] PIPEDA Data Residency Architecture — BigQuery Dataset Separation: the agent layer routes entities by jurisdiction tag to three BigQuery datasets: af_eu_data (EU region, GDPR; DE, NL, CH entities), af_global_data (non-personal operational data, all entities), and af_ca_pipeda (Canadian region, PIPEDA; RBC Canada). The af_ca_pipeda dataset IAM policy encodes purpose = financial_ops_CA, Canadian residency, and access restricted to ca_finance_sa only, auditable without querying logs. A PIPEDA auditor verifies residency via dataset location metadata and purpose limitation via the IAM policy, not logs. Cross-dataset joins between af_ca_pipeda and af_global_data are blocked by IAM at the dataset boundary.
Context
Veldtmann GmbH operates a Canadian legal entity (Veldtmann Canada) which generates financial data subject to PIPEDA — Canada's federal private-sector privacy legislation. PIPEDA requires demonstrable purpose limitation and data residency controls. The Autonomous Finance system processes Canadian entity cash flows, AP invoices, and IC positions within the same infrastructure used for EU entities. A compliance approach must allow a PIPEDA auditor to verify that Canadian personal data is handled appropriately without requiring access to query logs, which may contain sensitive operational data.
Decision

Separate Canadian entity data into a dedicated BigQuery dataset (af_ca_pipeda) deployed in a Canadian region (northamerica-northeast1). Dataset-level IAM policy restricts access to the Canadian-entity service account only and encodes purpose limitation as a dataset label. A PIPEDA auditor can verify data residency from the dataset's location metadata and purpose limitation from the IAM policy — neither requires inspecting query logs. Cross-dataset joins between af_ca_pipeda and other datasets are blocked at the IAM boundary — enforced structurally, not by application logic.
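
A provisioning sketch using the BigQuery Python client. The dataset ID matches the ADR; the service-account address and label values are assumptions, and the real script is version-controlled per the consequences below:

```python
# Provisioning sketch; principal and labels are illustrative, not the production values.
from google.cloud import bigquery

CA_SA = "ca-finance-sa@veldtmann-af.iam.gserviceaccount.com"  # assumed service account


def provision_pipeda_dataset(client: bigquery.Client) -> bigquery.Dataset:
    dataset = bigquery.Dataset(f"{client.project}.af_ca_pipeda")
    dataset.location = "northamerica-northeast1"          # residency enforced at the storage layer
    dataset.labels = {"purpose": "financial_ops_ca", "regime": "pipeda"}
    dataset = client.create_dataset(dataset, exists_ok=False)

    # Replace the default access entries: only the Canadian-entity service account
    # may read or write; cross-dataset joins fail at this boundary.
    dataset.access_entries = [
        bigquery.AccessEntry("OWNER", "userByEmail", CA_SA),
        bigquery.AccessEntry("WRITER", "userByEmail", CA_SA),
    ]
    return client.update_dataset(dataset, ["access_entries"])
```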

Alternatives Considered
Field-level tagging with Data Catalog: Tag each field containing Canadian personal data in the shared dataset using BigQuery Data Catalog sensitivity tags. Rejected because field-level tagging is a labelling mechanism, not an access control mechanism. Verifying PIPEDA compliance requires inspecting Data Catalog tag assignments per field across all tables — an audit process that requires BigQuery API access and catalogue query execution. The dataset separation approach allows audit from the console: the dataset exists, its location is a Canadian region, and its IAM policy shows the purpose and access restrictions.
Application-layer filtering: Store all entity data in a shared dataset; filter Canadian data at the application layer by entity code before any processing. Rejected because application-layer filtering does not constitute data residency — the data physically resides in the EU-region dataset regardless of which entity code the query selects. PIPEDA auditors evaluate residency at the storage layer, not the query layer. Application-layer filtering also creates a risk of filter bypass in edge cases (bulk analytics, ML training pipelines).
Separate GCP project for Canadian entity: Deploy the entire infrastructure stack in a Canadian GCP project. Rejected on cost and operational grounds — duplicating Pub/Sub, Cloud Run services, monitoring, and logging infrastructure for a single legal entity that represents approximately 8% of total entity volume is not proportionate. Dataset separation within the same project achieves the required residency and access boundary at a fraction of the operational cost.
Consequences
  • PIPEDA audit can be completed from the BigQuery console without querying production logs — data residency and purpose limitation are verifiable from metadata.
  • IAM boundary prevents cross-dataset joins — consolidated reporting that spans Canadian and EU entities must be implemented via aggregated views in the global dataset, not raw joins.
  • Canadian entity ML training must use only the af_ca_pipeda dataset — training on the global dataset with Canadian data excluded via WHERE clause does not satisfy the structural separation requirement.
  • Dataset provisioning script must be version-controlled and immutable — IAM policy changes require a PR review cycle to maintain audit trail.
  • Under review pending legal confirmation that dataset-level separation satisfies the OPC's current interpretation of PIPEDA data residency requirements.
Status
Under Review. Implementation complete; pending written confirmation from Veldtmann's Canadian privacy counsel that dataset-level BigQuery separation satisfies the Office of the Privacy Commissioner's current guidance on PIPEDA data residency. Expected resolution Q2 2026.
ADR-AF-06  ·  Bank Connectivity Standard  ·  Authority: Principal Architect  ·  Supersedes: None

ISO 20022 as the Single Standard for Bank-to-Corporate Connectivity

✓ Accepted
[Diagram] ISO 20022 Integration Architecture — Four Banking Relationships via a Single Adapter: the treasury agent sends cash position requests and payment instructions through a single ISO 20022 adapter (camt.052, camt.053, pain.001, pacs.008; XML schema validation enforced) to Deutsche Bank (EU), ING (EU/NL), UBS (CH), and RBC Canada (CA), all supporting ISO 20022 post-2025. Normalised per-entity balances (intraday and end-of-day) land in a partitioned BigQuery position store that feeds the cash forecast model and the consolidated real-time treasury dashboard (CFO/Treasurer).
Context
The treasury visibility agent requires real-time cash position data and payment instruction capability across four banking relationships: Deutsche Bank (EU entities), ING (Netherlands and EU operations), UBS (Swiss entity), and RBC Canada (Canadian entity). Three connectivity patterns are available: screen-scraping of bank portals, proprietary bank API integrations, and ISO 20022 Open Banking. Each represents a different maintenance surface, regulatory position, and integration complexity.
Decision

Implement ISO 20022 Open Banking as the single bank connectivity standard. All four banking relationships support ISO 20022 post-2025 (the EU-mandated migration deadline). A single adapter handles camt.052 (intraday position), camt.053 (end-of-day statement), pain.001 (payment initiation), and pacs.008 (credit transfer). One integration pattern covers all four banks. Message schema validation is enforced at the adapter layer before any data reaches the agent.
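
An adapter-layer sketch for the intraday position message, assuming lxml, the camt.052.001.08 message version, and a locally stored XSD. The element paths and version are assumptions configured per banking relationship:

```python
# Parsing sketch; message version, XSD path, and element paths are assumptions.
from decimal import Decimal

from lxml import etree

CAMT052_NS = {"d": "urn:iso:std:iso:20022:tech:xsd:camt.052.001.08"}


def parse_intraday_balances(xml_bytes: bytes, xsd_path: str) -> list[dict]:
    """Validate a camt.052 report against its XSD, then extract per-account balances."""
    schema = etree.XMLSchema(etree.parse(xsd_path))
    doc = etree.fromstring(xml_bytes)
    schema.assertValid(doc)  # malformed bank responses are rejected before reaching agent logic

    balances = []
    for rpt in doc.findall(".//d:BkToCstmrAcctRpt/d:Rpt", CAMT052_NS):
        iban = rpt.findtext("d:Acct/d:Id/d:IBAN", namespaces=CAMT052_NS)
        for bal in rpt.findall("d:Bal", CAMT052_NS):
            amt = bal.find("d:Amt", CAMT052_NS)
            balances.append(
                {
                    "iban": iban,
                    "balance_type": bal.findtext("d:Tp/d:CdOrPrtry/d:Cd", namespaces=CAMT052_NS),
                    "currency": amt.get("Ccy"),
                    "amount": Decimal(amt.text),
                    "credit_debit": bal.findtext("d:CdtDbtInd", namespaces=CAMT052_NS),
                }
            )
    return balances
```

The per-bank profile differences mentioned in the consequences (e.g. the Canadian Payments Association profile) would be handled by configuration of the namespace and paths, not by separate parser code.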

Alternatives Considered
Proprietary bank API integrations (four bespoke integrations): Each bank provides a proprietary REST API with different authentication schemes, data models, rate limits, and versioning cycles. Rejected because maintaining four independently versioned integrations — each with its own SDK, credential rotation schedule, and regression test suite — represents a disproportionate engineering maintenance burden relative to the connectivity function. Any change to a bank's API model requires a dedicated integration update cycle. ISO 20022 provides a stable, standards-based contract.
Screen-scraping via headless browser: Automate bank portal login and position data extraction via Playwright or equivalent. Rejected categorically — screen-scraping violates bank terms of service, is fragile to portal UI changes, cannot handle MFA requirements reliably in an unattended context, produces no structured data contract, and cannot be used for payment initiation. Not considered a viable integration pattern for production treasury infrastructure.
SWIFT gpi (global payments innovation): SWIFT's enhanced correspondent banking network with payment tracking. Rejected as the primary connectivity pattern because SWIFT gpi is oriented toward cross-border payment tracking, not real-time intraday balance reporting. It also requires SWIFT membership or a correspondent bank relationship, adding cost and onboarding time. ISO 20022 is the ECB-mandated standard for the TARGET2/T2 migration and covers the real-time balance and payment instruction use cases for EU bank-to-corporate connectivity directly.
Consequences
  • Single ISO 20022 adapter covers all four banking relationships — one integration pattern to maintain, test, and version.
  • ISO 20022 is the ECB-mandated standard for EU high-value and domestic payment messaging post-2025 (TARGET2/T2 migration) — regulatory longevity is assured for the EU banking relationships, unlike proprietary bank APIs subject to deprecation.
  • Adding a fifth banking relationship (if Veldtmann expands) requires bank onboarding only — the adapter code is unchanged if the new bank supports ISO 20022.
  • Message schema validation at the adapter layer provides a structural contract — malformed bank responses are rejected before reaching agent logic.
  • ISO 20022 message parsing requires XML schema validation — a modest additional compute cost per message accepted relative to JSON REST APIs.
  • RBC Canada's ISO 20022 implementation uses the Canadian Payments Association profile — minor message field differences from the EU profile require adapter configuration per banking relationship, not code changes.
Status
Accepted. ISO 20022 adapter deployed. Deutsche Bank and ING connected and validated in staging. UBS and RBC Canada onboarding in progress — expected production cut-over Q2 2026.

Regulatory encoding across all six ADRs: ADR-AF-01 encodes EU AI Act Art. 14 human oversight structurally via the SAP write gate. ADR-AF-02 encodes Art. 13 transparency via SHAP determinism (scoped to the XGBoost component; see ADR for scope note). ADR-AF-03 satisfies Art. 13 for treasury decisions via LightGBM SHAP attribution with P10/P50/P90 forecast bands. ADR-AF-04 enables independent HITL SLA management per agent domain — a prerequisite for Art. 14 compliance at scale. ADR-AF-05 (Under Review) — PIPEDA purpose limitation and data residency encoding is the intended outcome pending written confirmation from Canadian privacy counsel; regulatory status is not yet finalised. ADR-AF-06 ensures bank-to-corporate connectivity via ISO 20022, the mandated standard for EU high-value and domestic payment messaging post-2025 (ECB TARGET2/T2 migration). None of these decisions are retrofitted — each is a load-bearing constraint that shaped the architecture from first principles.