The Autonomous Author / Page 02 — Client Brief & Requirements

Maya Chen
Senior Technical Writer

This page is the requirements layer of The Autonomous Author. Every architectural decision in the pipeline traces back to a documented requirement here. It is the single source of truth for what The Autonomous Author is designed to achieve for Maya.

Anchor Client: Individual · MoSCoW Prioritisation · 2 Personas · 3 Stakeholders · 6 Use Cases · Agile + Waterfall · ADR-001 · ADR-002 referenced
Positioning

The Autonomous Author alongside
existing tools — the boundary.

Maya already uses Grammarly, Confluence templates, and occasionally ChatGPT for first-draft assistance. The first question any technically literate writer asks is: why build a five-agent pipeline when those tools exist? The answer is architectural, not competitive.

Canonical Positioning Statement

"Grammarly corrects grammar within the sentence. ChatGPT generates prose without context or compliance. Confluence templates structure without intelligence. The Autonomous Author orchestrates the full DDLC — from raw feature signal through to a Google Style Guide-compliant, ambiguity-checked, human-reviewed first draft — with every decision explained and every agent step visible. It is not a better autocomplete. It is a documented, explainable, persona-aware documentation pipeline."

ADR-002 · The Autonomous Author alongside existing tools, not instead of them
FIVE DOMAINS — WHERE EXISTING TOOLS END AND THE AUTONOMOUS AUTHOR BEGINS

DDLC lifecycle awareness · knows intake → draft → compliance → review
Grammarly / ChatGPT: single-turn only · no stateful pipeline
The Autonomous Author: full pipeline — 5 agents · stateful, session-persistent

Google Style Guide compliance · named rules, cited violations, mandatory gate
Grammarly / ChatGPT: grammar only / generic · no rule citation
The Autonomous Author: 80-rule enforcer · named rule with fix suggestion

DDD ambiguity detection · flags vague terms before the spec leaves the writer
Grammarly / ChatGPT: does not exist · no tool addresses this
The Autonomous Author: Ambiguity Detector · P2 exclusive · DDD second pass

XAI — explainable reasoning · every suggestion has a visible, citable why
Grammarly / ChatGPT: black-box suggestions · no reasoning surfaced
The Autonomous Author: reasoning card per agent · confidence score · cited rule

Zero-server privacy model · doc content never touches a third-party server
Grammarly / ChatGPT: data processed on vendor servers · enterprise data risk
The Autonomous Author: client-side only · writer's API key · writer's browser

Legend: Autonomous Author handles · Partial / no capability · Does not exist in market
Diagram 04 Five-domain positioning — Autonomous Author vs existing tools. ADR-002 governs this boundary.
Org Profile

Maya Chen — the anchor client.

Maya is a composite persona built from real patterns observed among individual technical writers inside mid-size SaaS organisations. Every requirement, constraint, and architectural decision on this page is grounded in her specific operational context as a solo writer serving two distinct documentation modes simultaneously.

Full name
Maya Chen
Senior Technical Writer — 4 years at Orbis
Employer
Orbis Cloud
B2B SaaS · Developer Platform · ~400 employees · Series C
Primary domain
Developer Platform Docs
Public REST API · Internal SDK · CLI reference
Secondary domain
DDD System Specs
Backend team · Upstream of code · Waterfall cadence
Sprint cadence
2-week sprints
~3 feature docs per sprint · avg 1,400 words each
Review cycle
2.4 rounds avg
3–5 days per round · 1 engineering SME · 1 PM
Tool stack
Confluence · Jira · GitHub · VS Code · Markdown · Grammarly · Notion (informal) · Slack · Figma (read-only)
Green = daily-use tools. The Autonomous Author must integrate with or output to all green-marked tools.
2.8 days avg context wait
43% of docs published with style violations
0 DDD specs with ambiguity check
11 days avg doc-to-publish
Stakeholder Register

Three stakeholders.
Three different conversations.

The Autonomous Author is deliberately an individual tool — not a collaboration platform. The stakeholder register is intentionally lean. An architecture that satisfied Maya's lead engineer's review concerns but ignored Maya's own authorship integrity would not be adopted. Each stakeholder below has a primary concern, a specific question the tool must answer, and a set of pipeline components that address their domain.

S-01 · Maya Chen · Senior Technical Writer · Primary User
S-02 · Priya Nair · Engineering Lead · Concern: DDD spec accuracy & ambiguity before build → Ambiguity Detector · P2 mode
S-03 · James Okafor · Product Manager · Concern: Feature doc accuracy & release timing → Intake Agent · P1 mode
Question the tool must answer for Maya: "Give me a compliant, context-grounded first draft in under 15 minutes — with every AI decision explained, so I can own the final document."
Legend: Primary user (Maya) · Contributing stakeholder
Diagram 05 Stakeholder influence map — Maya at centre, two contributing stakeholders with concern domains and pipeline components mapped.
S-01 · Primary User

Maya Chen

Senior Technical Writer — Orbis Cloud

Every doc she produces must be accurate, compliant, and owned by her as the author. Her concern is efficiency without losing authorial integrity — she wants AI to do the research and first draft, not to publish without her review.

Question the tool must answer: "Show me exactly what the AI decided, why it decided it, and where I need to review before this goes anywhere."
All agents · XAI layer · Review UI
S-02 · Engineering SME

Priya Nair

Engineering Lead — Orbis Backend Platform

DDD specs must be unambiguous before Priya's team writes a line of code. One vague sentence in a spec costs a sprint. She reviews Maya's specs as the final technical gate before build begins.

Question the tool must answer: "Has this spec been checked for undefined terms, vague quantifiers, and missing error states — before it reached me?"
Ambiguity Detector · P2 mode
S-03 · Feature Intent

James Okafor

Product Manager — Orbis Developer Platform

Feature docs must accurately represent intent and ship with the release. James is the source of feature tickets and the final approver of what the release doc says. He needs Maya to have fast, accurate context without his constant availability.

Question the tool must answer: "Can Maya get a solid first draft from the ticket I wrote — without needing a 30-minute sync with me every time?"
Intake Agent · Research Agent · P1 mode
As-Is Architecture

Maya's current DDLC —
mapped precisely.

Understanding where latency accumulates and steps get skipped is the prerequisite for designing the To-Be pipeline. The two diagrams below represent Maya's actual workflow today — one for each persona. Pain points are documented as architectural deficits, not as complaints.

PERSONA 1 — FEATURE RELEASE (AGILE) · CURRENT STATE
Day 0: Jira ticket or PR arrives
Day 0–3: Context gather (Slack / async) · ⚠ 2.8 day avg wait · SME unavailability
Day 4–5: First draft (manual)
Day 6–9: SME review round 1 · ⚠ 2.4 rounds avg
Day 9: Compliance skipped ✗
Day 10–11: Published · 43% with violations
P1 PAIN SUMMARY — 2.8 day context wait · 2.4 review rounds · compliance step absent · 43% violation rate on publish
End-to-end: 11 days avg · No single step has AI assistance · Every step is manual · No audit trail

PERSONA 2 — DDD SPEC (WATERFALL) · CURRENT STATE
Day 0: Feature intent (PM brief)
Day 1–4: Spec draft (manual · no template)
Day 4: Ambiguity check — ABSENT ✗ · vague terms ship as code
Day 5–8: Engineering review gate · ambiguity found in review
Day 8: Compliance check — ABSENT ✗
Day 9–12: Approved spec · defects included
P2 PAIN SUMMARY — No ambiguity detection · No spec template · Vague terms discovered in engineering review · Rework costs sprints
Ambiguity found in review = 1 sprint regression · 0 DDD specs currently have automated ambiguity check · Compliance absent
Legend: ⚠ latency / failure point · ✗ absent step · neutral step
Diagram 06 Detailed As-Is DDLC — both personas. Latency, feedback loops, and absent steps mapped as architectural deficits.
Requirements Catalogue

What Maya requires —
documented and prioritised.

Every requirement is traceable to a stakeholder concern, a workflow pain point, or a design constraint identified in the preceding sections. Prioritised using MoSCoW. Must Have requirements are architectural constraints on the pipeline design — any component that fails to satisfy them is not acceptable regardless of other merits.

6 Business Requirements · 8 Architecture Requirements · 5 Constraints
ID · Requirement · MoSCoW · Component

BR-01 · First draft in ≤15 min · Must · All agents / Groq API
From ticket paste to writer-ready draft, the full pipeline must complete in under 15 minutes on a standard connection with the Groq free tier. The current as-is time-to-first-draft is 4–5 days. The tool's primary value proposition is this compression.

BR-02 · Google Style Guide compliance before review · Must · Compliance Agent / Rule JSON
Every draft presented to Maya must have been checked against the 80-rule compliance set before she sees it. Violations are annotated inline with rule name, excerpt, and fix suggestion. The compliance gate is not optional and not bypassable.

BR-03 · DDD ambiguity detection before spec delivery · Must · Ambiguity Detector / P2 mode
In P2 mode, every spec draft must pass through the Ambiguity Detector before Review Prep. Vague quantifiers, undefined terms, missing error states, and implicit assumptions are flagged as distinct violation types. The detector fires before Priya's team sees the document.

BR-04 · Persona-aware pipeline (P1 / P2) · Must · Draft Agent / Persona selector
The pipeline must behave differently for feature docs vs DDD specs. P1 produces a release-ready feature doc structure. P2 produces an imperative-voice spec with requirements traceability. The writer selects persona at session start. The Draft Agent's behaviour changes accordingly.

BR-05 · Agile and Waterfall workflow modes · Should · Intake Agent / Session config
Agile mode is delta-aware: the writer can indicate this is an update to an existing doc, and the pipeline produces a diff-annotated patch rather than a full document. Waterfall mode treats every session as a new formal artifact with version metadata.

BR-06 · Export to Maya's existing tools · Must · Export Panel / Review UI
Output must be exportable as Markdown (for GitHub / Confluence), clean HTML, and clipboard-ready plain text. The writer does not change her publishing workflow to use this tool. The tool outputs to her world.
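The BR-02 contract can be sketched in code. This is an illustrative sketch only: the rule IDs (`GSG-017`, `GSG-042`), field names, and regex-based detection are assumptions standing in for the real 80-rule set, which is a separate versioned JSON asset (AR-04).

```javascript
// Sketch: two compliance rules as they might appear in a versioned
// rules.json, plus a minimal checker that annotates every violation with
// a rule ID, excerpt, and fix suggestion. Rule IDs and patterns are
// hypothetical examples, not the real rule set.
const rules = [
  {
    id: "GSG-017",                      // hypothetical rule ID
    name: "Avoid future tense",
    pattern: "\\bwill\\s+\\w+",
    fix: "Use present tense: 'the API returns', not 'the API will return'."
  },
  {
    id: "GSG-042",
    name: "Avoid 'please' in instructions",
    pattern: "\\bplease\\b",
    fix: "State the instruction directly."
  }
];

// Run every rule against the draft; no violation appears without a
// named, citable rule (BR-02).
function checkCompliance(draft, ruleSet) {
  const violations = [];
  for (const rule of ruleSet) {
    const re = new RegExp(rule.pattern, "gi");
    let match;
    while ((match = re.exec(draft)) !== null) {
      violations.push({
        ruleId: rule.id,
        ruleName: rule.name,
        excerpt: match[0],
        offset: match.index,
        fix: rule.fix
      });
    }
  }
  return violations;
}

const report = checkCompliance(
  "Please note that the endpoint will return a 202 status.",
  rules
);
// Two violations, each carrying a named rule ID and a fix suggestion.
```

Because rules are structured data rather than LLM memory, the same checker can also power the standalone compliance-only mode (UC-05) without invoking any other agent.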
ID · Requirement · MoSCoW · Component

AR-01 · XAI reasoning card per agent · Must · XAI Layer / All agents
Every agent in the pipeline must produce a structured reasoning card before passing control to the next stage. The card states what the agent understood, what it decided, why, a confidence score, and uncertainties. Cards are visible in the pipeline monitor in real time.

AR-02 · Human gate is non-bypassable · Must · Review UI / Human gate
No agent output reaches publication without Maya's explicit review and approval. The Review Prep Agent assembles the final view; Maya must interact with it before export is enabled. The gate is enforced in the UI state machine — export is disabled until review is complete.

AR-03 · Client-side only — no backend · Must · Architecture constraint / ADR-001
The entire pipeline runs in the browser. No server receives Maya's document content except Groq's API (during an active session only, with a writer-provided key). No Autonomous Author backend stores, logs, or processes document content. This is the enterprise data safety guarantee.

AR-04 · Compliance rules as versioned JSON asset · Must · Compliance Agent / rules.json
The 80-rule compliance set must be a static, versioned JSON file loaded at pipeline init. Rules are never LLM memory — they are structured data. Every violation cites a rule ID from this file. The file is version-controlled alongside the codebase.

AR-05 · Session state persisted in IndexedDB · Must · IndexedDB / Session manager
Document sessions, agent logs, draft history, and compliance reports are persisted in the writer's browser IndexedDB. Sessions survive page refresh. History is available for the writer's reference. Data never leaves the browser except via explicit export.

AR-06 · Single responsibility per agent · Must · Agent design / ADR-003
Each agent has exactly one job, one input contract, and one output schema. No agent performs two pipeline functions. This constraint makes each stage independently testable, replaceable, and explainable. Violating it to reduce API calls is not acceptable — single responsibility is a design principle.

AR-07 · Confidence scores on all agent outputs · Should · XAI Layer / Review UI
Every agent output must include a confidence score (0.0–1.0) representing the agent's assessment of output quality given the available context. Low-confidence outputs are visually flagged in the Review UI. The writer uses confidence scores to prioritise her review attention.

AR-08 · Placeholder insertion for missing context (P2) · Must · Draft Agent / P2 system prompt
In DDD mode, when the Draft Agent encounters a required field with insufficient context, it inserts a structured placeholder — [REQUIRES INPUT: reason] — rather than inferring. Inferred content in a DDD spec is a defect; explicit placeholders are actionable. This behaviour is enforced by the P2 system prompt.
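The AR-01 reasoning card and the AR-07 confidence flagging can be sketched together. The field names, the sample card contents, and the 0.7 threshold are illustrative assumptions; the requirement fixes only what a card must state, not its exact schema.

```javascript
// Sketch: the reasoning card each agent emits before handing off (AR-01),
// and the low-confidence prioritisation the Review UI applies (AR-07).
// Field names and the threshold are assumptions for illustration.
function makeReasoningCard(agent, { understood, decided, rationale, confidence, uncertainties }) {
  return {
    agent,            // which pipeline stage produced this card
    understood,       // what the agent took from its input
    decided,          // the decision it made
    rationale,        // why: the citable reasoning
    confidence,       // 0.0–1.0 self-assessment (AR-07)
    uncertainties,    // open questions surfaced to the writer
    timestamp: Date.now()
  };
}

// The Review UI uses confidence to prioritise the writer's attention:
// anything under the threshold is flagged, lowest confidence first.
function flagForReview(cards, threshold = 0.7) {
  return cards
    .filter(c => c.confidence < threshold)
    .sort((a, b) => a.confidence - b.confidence)
    .map(c => c.agent);
}

const cards = [
  makeReasoningCard("Intake Agent", {
    understood: "PATCH endpoint ticket", decided: "doc type: API reference",
    rationale: "ticket names an HTTP method and path",
    confidence: 0.92, uncertainties: []
  }),
  makeReasoningCard("Research Agent", {
    understood: "auth model unclear", decided: "ask writer for auth context",
    rationale: "no token scheme appears in the ticket",
    confidence: 0.55, uncertainties: ["auth scheme"]
  })
];
// flagForReview(cards) → ["Research Agent"]
```

Because every card carries a machine-readable confidence, the session log alone tells the writer where to focus review attention without re-running the pipeline.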
ID · Constraint · MoSCoW · Source

C-01 · Zero infrastructure cost · Must · Portfolio constraint
The Autonomous Author runs entirely on free-tier services: GitHub Pages for hosting, the Groq free tier for inference, browser APIs for storage. No paid subscriptions, no managed databases, no cloud compute. The writer's Groq API key is the only dependency with a usage limit.

C-02 · Groq API as inference provider · Must · ADR-002 / C-01
Groq is selected as the inference API (ADR-002). The writer provides their own API key. The pipeline is designed around Groq's request format and rate limits. If Groq changes its free tier, the architecture must accommodate a key swap to Together AI or equivalent — the abstraction layer must support this.

C-03 · No change to Maya's publishing workflow · Must · Philosophy P-IV
The tool must not require Maya to adopt a new CMS, a new doc platform, or a new review process. It outputs to her existing formats. It does not create accounts, does not manage publishing, does not integrate with Confluence's API. It produces text. Maya publishes it.

C-04 · Single writer — no collaboration features · Must · Scope constraint
The Autonomous Author is explicitly not a collaboration tool. There are no shared sessions, no multi-user review flows, no comment threads. Features designed for collaboration are out of scope. The tool augments one writer's individual workflow. This is a deliberate scope constraint, not a roadmap gap.

C-05 · MVP-plus build standard · Must · Portfolio scope
Each pipeline stage is built to demonstrate one complete end-to-end flow — sufficient to run a live demo against a real Groq API key. Production hardening (offline mode, multi-browser sync, accessibility audit) is out of scope for the portfolio phase. Architecture is designed for production; implementation is scoped for demonstration.
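The abstraction layer C-02 requires can be sketched as a provider registry. The adapter names and stub responses below are assumptions; real adapters would wrap each vendor's HTTP API with the writer-provided key, and would be asynchronous.

```javascript
// Sketch: the inference abstraction layer behind C-02. The pipeline codes
// against one `complete` interface; swapping Groq for Together AI (or any
// equivalent) means registering a different adapter, not rearchitecting.
// Adapters here are synchronous stubs; real ones would be async HTTP calls.
const providers = new Map();

function registerProvider(name, adapter) {
  providers.set(name, adapter); // adapter: ({ prompt, apiKey }) => string
}

function complete(providerName, prompt, apiKey) {
  const adapter = providers.get(providerName);
  if (!adapter) throw new Error(`No adapter registered for "${providerName}"`);
  return adapter({ prompt, apiKey });
}

// Stub adapters standing in for real vendor clients:
registerProvider("groq",     ({ prompt }) => `groq-stub:${prompt.length}`);
registerProvider("together", ({ prompt }) => `together-stub:${prompt.length}`);

// A key swap is a configuration change, not an architectural change:
// complete("groq", "Draft the intro", key) vs complete("together", ...).
```

The registry shape also makes the free-tier risk in C-02 testable: the pipeline's agents can be exercised against a stub provider without consuming any real quota.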
AI Readiness Assessment

Where Maya stands today —
five dimensions.

The AI Readiness Assessment defines Maya's starting position across five dimensions and frames the gap the pipeline is designed to close. Scored 1–5. Findings are actionable. Each dimension produces a specific Day 1 action.

Context packaging
2 / 5
Jira and GitHub are present but context is fragmented across Slack threads, Notion pages, and verbal SME input. No structured context handoff exists.
Action: Research Agent fills this gap by asking structured clarification questions before drafting begins.
Style guide adherence
2 / 5
Maya is aware of the Google Developer Style Guide but applies it inconsistently under sprint pressure. 43% of published docs have detectable violations. No automated check exists in her workflow.
Action: Compliance Agent — mandatory, pre-review, 80-rule JSON check.
DDD spec rigour
1 / 5
No DDD-specific tooling exists anywhere in Maya's workflow. Ambiguity is caught in engineering review — at the worst possible time. Zero specs currently have automated ambiguity detection.
Action: Ambiguity Detector is the highest-value addition for P2 — no competing tool addresses this domain.
AI tool familiarity
3 / 5
Maya uses ChatGPT for occasional first drafts but distrusts outputs she can't verify. She wants to see reasoning, not just results. XAI adoption curve is shorter than for non-AI-familiar writers.
Action: XAI reasoning cards and confidence scores satisfy Maya's verification need directly.
Workflow change tolerance
4 / 5
Maya is open to new tools that don't disrupt her publishing workflow. She will not adopt a new CMS or review platform. She will adopt a browser-based tool that outputs to Markdown.
Action: Client-side, zero-install, Markdown export — no workflow disruption by design.
Overall readiness
2.4 / 5
Assessment: Maya is at the right readiness level for The Autonomous Author — experienced enough to leverage AI effectively, but not yet equipped with the right tooling. The gaps are structural, not motivational. Each dimension maps directly to a pipeline component. This is the ideal adoption profile.
Use Case Catalogue

Six use cases —
two personas, one pipeline.

UC-01 · Persona 1

Feature release doc from Jira ticket

James creates a Jira ticket for a new PATCH /users/{id} endpoint. Maya pastes it into the Autonomous Author. The pipeline extracts intent, identifies doc type, drafts a procedure + API reference, checks compliance, flags 3 violations, and delivers a review-ready draft. Time: under 12 minutes.

P1 · Agile · Intake + Research + Draft + Compliance + Review Prep
UC-02 · Persona 2

DDD spec from feature intent statement

Maya needs to spec a new rate-limiting subsystem for the platform API. She provides a two-sentence intent statement. The pipeline extracts actors, system boundary, preconditions. Draft Agent produces an imperative spec. Ambiguity Detector flags "respond quickly" and "reasonable limit" as undefined. Review-ready in under 18 minutes.

P2 · Waterfall · All agents + Ambiguity Detector
UC-03 · Persona 1

Agile delta update — existing doc

A previously documented endpoint gets a new optional query parameter. Maya selects Agile delta mode and pastes the PR diff. The pipeline produces a diff-annotated patch — only the changed sections, with unchanged sections preserved — while the compliance check still runs on the entire updated doc.

P1 · Agile delta mode · Intake + Draft (patch) + Compliance
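The delta-mode output UC-03 describes can be sketched as a section-level patch. This naive before/after comparison is an illustration only; the real Draft Agent works from the pasted PR diff, and the section shape (`heading` / `body`) is an assumption.

```javascript
// Sketch: a diff-annotated patch that carries only added or updated
// sections, preserving section order. Unchanged sections are omitted,
// which is the "delta only" contract of Agile delta mode.
function deltaPatch(oldSections, newSections) {
  const oldByHeading = new Map(oldSections.map(s => [s.heading, s.body]));
  const patch = [];
  for (const section of newSections) {
    const previous = oldByHeading.get(section.heading);
    if (previous === undefined) {
      patch.push({ heading: section.heading, change: "added", body: section.body });
    } else if (previous !== section.body) {
      patch.push({ heading: section.heading, change: "updated", body: section.body });
    }
    // unchanged sections are omitted from the patch
  }
  return patch;
}

const patch = deltaPatch(
  [{ heading: "Auth", body: "Bearer token." },
   { heading: "Params", body: "id (path)." }],
  [{ heading: "Auth", body: "Bearer token." },
   { heading: "Params", body: "id (path), verbose (query, optional)." },
   { heading: "Limits", body: "100 req/min." }]
);
// → one "updated" entry (Params) and one "added" entry (Limits)
```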
UC-04 · Persona 2

Unknown context — Research Agent gap detection

Maya's intent statement references "the Helix ingestion pipeline" — a system the LLM has no training data on. The Research Agent detects the unknown proper noun, flags it as a context gap, and asks Maya to provide a one-paragraph description before drafting begins. Draft Agent then uses that context. Placeholder inserted where data is still missing.

P2 · Context gap detection · Research Agent · AR-08
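The two P2 behaviours UC-04 exercises — the Research Agent's gap detection and the AR-08 placeholder rule — can be sketched as follows. The glossary contents, the capitalised-token heuristic, and the required-field list are illustrative assumptions; the real agents do this via the LLM, not regex.

```javascript
// Sketch 1: flag proper nouns the pipeline has no context for, so the
// Research Agent can ask the writer before drafting begins (UC-04).
// Naive heuristic: any capitalised token the glossary doesn't know.
function findContextGaps(intent, glossary) {
  const properNouns = intent.match(/\b[A-Z][A-Za-z]+\b/g) || [];
  return [...new Set(properNouns.filter(term => !glossary.has(term)))];
}

// Sketch 2: the AR-08 rule. Required fields with no supplied context get
// an explicit, actionable placeholder instead of inferred content.
function insertPlaceholders(requiredFields, context) {
  const spec = {};
  for (const field of requiredFields) {
    spec[field] = field in context
      ? context[field]
      : `[REQUIRES INPUT: no context provided for "${field}"]`;
  }
  return spec;
}

const gaps = findContextGaps(
  "the Helix ingestion pipeline feeds Orbis dashboards",
  new Set(["Orbis"])            // hypothetical known-term glossary
);
// gaps → ["Helix"] — the unknown system the writer must describe

const spec = insertPlaceholders(
  ["actors", "error states"],
  { actors: "platform API consumers" }
);
// spec["error states"] → "[REQUIRES INPUT: ...]" rather than a guess
```

The point of both sketches is the same contract: in P2, the pipeline surfaces what it does not know instead of inventing it.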
UC-05 · Persona 1

Compliance-only pass on existing doc

Maya has a manually written doc that she suspects has style violations. She pastes it and runs compliance-only mode — skipping intake, research, and drafting. The Compliance Agent runs the full 80-rule check and returns an annotated report. Maya uses this to remediate an existing document without triggering a full pipeline run.

P1 · Compliance-only mode · Compliance Agent standalone
UC-06 · Persona 2

Waterfall spec with version metadata

Maya is producing v2.0 of the platform authentication spec. Waterfall mode attaches version metadata, a change log section scaffold, and a requirements traceability table to the output. Each requirement in the spec is tagged with an ID. Priya's engineering review can reference requirement IDs when raising issues.

P2 · Waterfall mode · Draft Agent + version metadata + traceability
Success Criteria

How Maya will know
the tool is working.

Observable, measurable outcomes — not subjective quality impressions. Each criterion is traceable to a specific requirement and verifiable from pipeline logs or Maya's own metrics. These are the acceptance criteria the Autonomous Author must satisfy before any component is considered production-ready.

Pipeline Speed
Full P1 pipeline completes in under 15 minutes from ticket paste to review-ready draft on a standard internet connection with Groq free tier
Full P2 pipeline including Ambiguity Detector completes in under 20 minutes from intent statement to annotated spec draft
Compliance-only mode completes in under 3 minutes for a 1,500-word document
Style Guide Compliance
Zero Google Developer Style Guide violations in any document that has passed through the Compliance Agent and been approved by Maya
Every compliance violation is annotated with a rule ID from rules.json — no violation appears without a named, citable rule
Compliance Agent detects >90% of violations present in a test document seeded with 20 known violations across 10 rule categories
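The >90% detection-rate criterion above is verifiable as a recall computation over a seeded document. The seeded IDs and stub report below are assumptions for illustration; in practice the reported list would come from a real Compliance Agent run.

```javascript
// Sketch: recall of the Compliance Agent against a document seeded with
// known violations. 19 of 20 seeded violations reported → 0.95 recall,
// which passes the >0.9 acceptance gate.
function detectionRate(seededViolationIds, reportedViolationIds) {
  const reported = new Set(reportedViolationIds);
  const found = seededViolationIds.filter(id => reported.has(id)).length;
  return found / seededViolationIds.length;
}

const seeded = Array.from({ length: 20 }, (_, i) => `V-${i + 1}`); // hypothetical IDs
const reported = seeded.slice(0, 19);                              // stub agent output
const rate = detectionRate(seeded, reported); // 0.95
```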
DDD Spec Quality
Ambiguity Detector flags 100% of explicitly inserted vague quantifiers ("fast", "many", "appropriate", "reasonable") in P2 test documents
Ambiguity Detector flags all undefined terms on first use that have no preceding definition in the spec document
Draft Agent inserts [REQUIRES INPUT:] placeholders for every field with insufficient context rather than inferring content
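The vague-quantifier criterion above fixes a contract: every seeded vague term must be flagged. The sketch below mirrors the criterion's example terms; the real Ambiguity Detector is LLM-driven and broader, so this regex pass illustrates the contract, not the implementation.

```javascript
// Sketch: the vague-quantifier check. The term list echoes the criterion's
// examples ("fast", "many", "appropriate", "reasonable"); the extra terms
// are illustrative assumptions.
const VAGUE_TERMS = ["fast", "quickly", "many", "appropriate", "reasonable", "soon"];

function flagVagueQuantifiers(spec) {
  const findings = [];
  for (const term of VAGUE_TERMS) {
    const re = new RegExp(`\\b${term}\\b`, "gi");
    let m;
    while ((m = re.exec(spec)) !== null) {
      findings.push({
        term: m[0].toLowerCase(),
        offset: m.index,
        type: "vague-quantifier",
        fix: `Replace "${m[0]}" with a measurable value.`
      });
    }
  }
  return findings;
}

const findings = flagVagueQuantifiers(
  "The service must respond quickly and allow a reasonable limit."
);
// → two findings: "quickly" and "reasonable", each with an offset and fix
```

This is exactly the UC-02 scenario: "respond quickly" and "reasonable limit" are caught before the spec ever reaches engineering review.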
XAI & Explainability
Every agent produces a reasoning card visible in the pipeline monitor — no agent completes without surfacing its reasoning
Confidence scores accompany every agent output — a writer reading the session log can identify which outputs require the most scrutiny without re-running the pipeline
The human gate remains enforced — export stays disabled until Maya has explicitly reviewed and actioned the review-UI checklist
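The non-bypassable gate (AR-02) can be sketched as a small UI state machine: export is only enabled once every checklist item has been explicitly actioned. The state shape and the checklist item names are assumptions for illustration.

```javascript
// Sketch: the review gate enforced in the UI state machine. Export is a
// derived state — it can only become true by actioning every item, so
// there is no code path that bypasses the human review (AR-02).
function createReviewGate(checklistItems) {
  const pending = new Set(checklistItems);
  return {
    action(item) {               // writer explicitly actions one item
      if (!pending.has(item)) {
        throw new Error(`Unknown or already-actioned item: ${item}`);
      }
      pending.delete(item);
    },
    exportEnabled() {            // the gate: true only when nothing is pending
      return pending.size === 0;
    }
  };
}

const gate = createReviewGate([
  "compliance-report", "low-confidence-outputs", "placeholders" // hypothetical items
]);
gate.exportEnabled();            // false — review not started
gate.action("compliance-report");
gate.action("low-confidence-outputs");
gate.action("placeholders");
gate.exportEnabled();            // true — every item explicitly actioned
```

Encoding the gate as state rather than a button guard means a styling or markup change cannot accidentally re-enable export.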
ADR-001 · Client-Side Architecture Pattern

Selected over server-hosted SaaS, Hugging Face Spaces, and Render free tier. Client-side is the only pattern that satisfies C-01 (zero cost), AR-03 (no backend data store), and C-03 (no workflow change) simultaneously. A server-side approach would require authentication, a database, and a deployment pipeline — all of which add cost, complexity, and enterprise data risk.

Status: Accepted · Phase A · Alternatives considered: Render free tier, HuggingFace Spaces, Vercel Edge Functions
ADR-002 · Groq as Inference Provider

Selected over OpenAI (cost), Anthropic API (cost), Together AI (slower free tier), and local Ollama (machine dependency). Groq's free tier delivers 300+ tokens/second on Llama 3.1 70B — fast enough for a 5-agent pipeline to complete in under 15 minutes on a standard connection. The abstraction layer must support a swap to Together AI if Groq changes its free tier structure.

Status: Accepted · Phase A · Alternatives considered: OpenAI GPT-4o, Anthropic Claude Haiku, Together AI Mixtral, local Ollama