Maya Chen —
Senior Technical Writer
Client Brief & Requirements
The requirements layer of the Autonomous Author. Every architectural decision in the pipeline traces back to a documented requirement on this page. This is the single source of truth for what The Autonomous Author is designed to achieve for Maya.
The Autonomous Author alongside
existing tools — the boundary.
Maya already uses Grammarly, Confluence templates, and occasionally ChatGPT for first-draft assistance. The first question any technically literate writer asks is: why build a five-agent pipeline when those tools exist? The answer is architectural, not competitive.
"Grammarly corrects grammar within the sentence. ChatGPT generates prose without context or compliance. Confluence templates structure without intelligence. The Autonomous Author orchestrates the full DDLC — from raw feature signal through to a Google Style Guide-compliant, ambiguity-checked, human-reviewed first draft — with every decision explained and every agent step visible. It is not a better autocomplete. It is a documented, explainable, persona-aware documentation pipeline."
Maya Chen — the anchor client.
A composite persona drawn from the working patterns of real individual technical writers inside mid-size SaaS organisations. Every requirement, constraint, and architectural decision on this page is grounded in Maya's specific operational context as a solo writer serving two distinct documentation modes simultaneously.
Three stakeholders.
Three different conversations.
The Autonomous Author is deliberately an individual tool — not a collaboration platform. The stakeholder register is intentionally lean. An architecture that satisfied Maya's lead engineer's review concerns but ignored Maya's own authorship integrity would not be adopted. Each stakeholder below has a primary concern, a specific question the tool must answer, and a set of pipeline components that address their domain.
Maya Chen
Every doc she produces must be accurate, compliant, and owned by her as the author. Her concern is efficiency without losing authorial integrity — she wants AI to do the research and first draft, not to publish without her review.
Priya Nair
DDD specs must be unambiguous before Priya's team writes a line of code. One vague sentence in a spec costs a sprint. She reviews Maya's specs as the final technical gate before build begins.
James Okafor
Feature docs must accurately represent intent and ship with the release. James is the source of feature tickets and the final approver of what the release doc says. He needs Maya to have fast, accurate context without his constant availability.
Maya's current DDLC —
mapped precisely.
Understanding where latency accumulates and steps get skipped is the prerequisite for designing the To-Be pipeline. The two diagrams below represent Maya's actual workflow today — one for each persona. Pain points are documented as architectural deficits, not as complaints.
What Maya requires —
documented and prioritised.
Every requirement is traceable to a stakeholder concern, a workflow pain point, or a design constraint identified in the preceding sections. Prioritised using MoSCoW. Must Have requirements are architectural constraints on the pipeline design — any component that fails to satisfy them is not acceptable regardless of other merits.
Requirements
| ID | Requirement | Description | MoSCoW | Component |
|---|---|---|---|---|
| BR-01 | First draft in ≤15 min | From ticket paste to writer-ready draft, the full pipeline must complete in under 15 minutes on a standard connection with Groq free tier. The current as-is time-to-first-draft is 4–5 days. The tool's primary value proposition is this compression. | Must | All agents · Groq API |
| BR-02 | Google Style Guide compliance before review | Every draft presented to Maya must have been checked against the 80-rule compliance set before she sees it. Violations are annotated inline with rule name, excerpt, and fix suggestion. The compliance gate is not optional and not bypassable. | Must | Compliance Agent · Rule JSON |
| BR-03 | DDD ambiguity detection before spec delivery | In P2 mode, every spec draft must pass through the Ambiguity Detector before Review Prep. Vague quantifiers, undefined terms, missing error states, and implicit assumptions are flagged as distinct violation types. The detector fires before Priya's team sees the document. | Must | Ambiguity Detector · P2 mode |
| BR-04 | Persona-aware pipeline (P1 / P2) | The pipeline must behave differently for feature docs vs DDD specs. P1 produces a release-ready feature doc structure. P2 produces an imperative-voice spec with requirements traceability. The writer selects persona at session start. The Draft Agent's behaviour changes accordingly. | Must | Draft Agent · Persona selector |
| BR-05 | Agile and Waterfall workflow modes | Agile mode is delta-aware: the writer can indicate this is an update to an existing doc, and the pipeline produces a diff-annotated patch rather than a full document. Waterfall mode treats every session as a new formal artifact with version metadata. | Should | Intake Agent · Session config |
| BR-06 | Export to Maya's existing tools | Output must be exportable as Markdown (for GitHub / Confluence), clean HTML, and clipboard-ready plain text. The writer does not change her publishing workflow to use this tool. The tool outputs to her world. | Must | Export Panel · Review UI |
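BR-02's compliance gate treats the 80-rule set as structured data rather than prompt text. A minimal sketch in TypeScript, assuming a hypothetical rule shape; the rule IDs, patterns, and fix text below are invented for illustration and are not the real rule set:

```typescript
// Hypothetical shape of one entry in the versioned rules file (BR-02).
// Rule IDs, patterns, and fix text are illustrative only.
interface StyleRule {
  id: string;       // cited in every violation report
  name: string;
  pattern: string;  // regex applied to the draft text
  fix: string;      // suggested remediation shown inline
}

interface Violation {
  ruleId: string;
  ruleName: string;
  excerpt: string;  // the matching text, annotated inline for Maya
  fix: string;
}

const rules: StyleRule[] = [
  { id: "GSG-001", name: "Avoid 'please' in instructions",
    pattern: "\\bplease\\b", fix: "State the instruction directly." },
  { id: "GSG-002", name: "Use 'select', not 'click on'",
    pattern: "\\bclick on\\b", fix: "Replace with 'select'." },
];

// The Compliance Agent's core pass: rules are structured data, and every
// violation cites a rule ID from the versioned file, never LLM memory.
function checkCompliance(draft: string, ruleSet: StyleRule[]): Violation[] {
  const violations: Violation[] = [];
  for (const rule of ruleSet) {
    const re = new RegExp(rule.pattern, "gi");
    let match: RegExpExecArray | null;
    while ((match = re.exec(draft)) !== null) {
      violations.push({
        ruleId: rule.id,
        ruleName: rule.name,
        excerpt: match[0],
        fix: rule.fix,
      });
    }
  }
  return violations;
}
```

Under this sketch, `checkCompliance("Please click on Save.", rules)` returns two annotated violations, each citing its rule ID.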
| ID | Requirement | Description | MoSCoW | Component |
|---|---|---|---|---|
| AR-01 | XAI reasoning card per agent | Every agent in the pipeline must produce a structured reasoning card before passing control to the next stage. The card states: what the agent understood, what it decided, why, confidence score, and uncertainties. Cards are visible in the pipeline monitor in real time. | Must | XAI Layer · All agents |
| AR-02 | Human gate is non-bypassable | No agent output reaches publication without Maya's explicit review and approval. The Review Prep Agent assembles the final view; Maya must interact with it before export is enabled. The gate is enforced in the UI state machine — export is disabled until review is complete. | Must | Review UI · Human gate |
| AR-03 | Client-side only — no backend | The entire pipeline runs in the browser. No server receives Maya's document content except Groq's API (during active session only, with writer-provided key). No Autonomous Author backend stores, logs, or processes document content. This is the enterprise data safety guarantee. | Must | Architecture constraint · ADR-001 |
| AR-04 | Compliance rules as versioned JSON asset | The 80-rule compliance set must be a static, versioned JSON file loaded at pipeline init. Rules are never LLM memory — they are structured data. Every violation cites a rule ID from this file. The file is version-controlled alongside the codebase. | Must | Compliance Agent · rules.json |
| AR-05 | Session state persisted in IndexedDB | Document sessions, agent logs, draft history, and compliance reports are persisted in the writer's browser IndexedDB. Sessions survive page refresh. History is available for the writer's reference. Data never leaves the browser except via explicit export. | Must | IndexedDB · Session manager |
| AR-06 | Single responsibility per agent | Each agent has exactly one job, one input contract, and one output schema. No agent performs two pipeline functions. This constraint makes each stage independently testable, replaceable, and explainable. Violating it to reduce API calls is not acceptable — SR is a design principle. | Must | Agent design · ADR-003 |
| AR-07 | Confidence scores on all agent outputs | Every agent output must include a confidence score (0.0–1.0) representing the agent's assessment of output quality given available context. Low-confidence outputs are visually flagged in the Review UI. The writer uses confidence scores to prioritise their review attention. | Should | XAI Layer · Review UI |
| AR-08 | Placeholder insertion for missing context (P2) | In DDD mode, when the Draft Agent encounters a required field with insufficient context, it inserts a structured placeholder: [REQUIRES INPUT: reason] rather than inferring. Inferred content in a DDD spec is a defect. Explicit placeholders are actionable. This behaviour is enforced by the P2 system prompt. | Must | Draft Agent · P2 system prompt |
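The reasoning card in AR-01 and the confidence flagging in AR-07 can be sketched as one small contract. The field names and the threshold below are assumptions for illustration, not the implemented schema:

```typescript
// Illustrative shape for the per-agent reasoning card (AR-01), including
// the confidence score required by AR-07. Field names are assumptions.
interface ReasoningCard {
  agent: string;           // e.g. "intake", "draft", "compliance"
  understood: string;      // what the agent understood
  decision: string;        // what it decided
  rationale: string;       // why
  confidence: number;      // 0.0 to 1.0, the agent's own quality assessment
  uncertainties: string[]; // open questions surfaced to the writer
}

// The Review UI flags low-confidence outputs so Maya can prioritise her
// review attention. The 0.7 threshold is an illustrative default.
const LOW_CONFIDENCE_THRESHOLD = 0.7;

function needsReviewAttention(card: ReasoningCard): boolean {
  return card.confidence < LOW_CONFIDENCE_THRESHOLD
    || card.uncertainties.length > 0;
}
```

Because every agent emits the same card shape, the pipeline monitor can render all five stages uniformly and the Review UI can sort cards by confidence.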
| ID | Constraint | Description & Impact on Design | MoSCoW | Source |
|---|---|---|---|---|
| C-01 | Zero infrastructure cost | The Autonomous Author runs entirely on free-tier services. GitHub Pages for hosting, Groq free tier for inference, browser APIs for storage. No paid subscriptions, no managed databases, no cloud compute. The writer's Groq API key is the only dependency with a usage limit. | Must | Portfolio constraint |
| C-02 | Groq API as inference provider | Groq is selected as the inference API (ADR-002). The writer provides their own API key. The pipeline is designed around Groq's request format and rate limits. If Groq changes its free tier, the architecture must accommodate a key swap to Together AI or equivalent — the abstraction layer must support this. | Must | ADR-002 · C-01 |
| C-03 | No change to Maya's publishing workflow | The tool must not require Maya to adopt a new CMS, a new doc platform, or a new review process. It outputs to her existing formats. It does not create accounts, does not manage publishing, does not integrate with Confluence's API. It produces text. Maya publishes it. | Must | Philosophy P-IV |
| C-04 | Single writer — no collaboration features | The Autonomous Author is explicitly not a collaboration tool. There are no shared sessions, no multi-user review flows, no comment threads. Features designed for collaboration are out of scope. The tool augments one writer's individual workflow. This is a deliberate scope constraint, not a roadmap gap. | Must | Scope constraint |
| C-05 | MVP-plus build standard | Each pipeline stage is built to demonstrate one complete end-to-end flow — sufficient to run a live demo against a real Groq API key. Production hardening (offline mode, multi-browser sync, accessibility audit) is out of scope for the portfolio phase. Architecture is designed for production; implementation is scoped for demonstration. | Must | Portfolio scope |
Where Maya stands today —
five dimensions.
The AI Readiness Assessment defines Maya's starting position across five dimensions and frames the gap the pipeline is designed to close. Scored 1–5. Findings are actionable. Each dimension produces a specific Day 1 action.
Action: Research Agent fills this gap by asking structured clarification questions before drafting begins.
Action: Compliance Agent — mandatory, pre-review, 80-rule JSON check.
Action: Ambiguity Detector is the highest-value addition for P2 — no competing tool addresses this domain.
Action: XAI reasoning cards and confidence scores satisfy Maya's verification need directly.
Action: Client-side, zero-install, Markdown export — no workflow disruption by design.
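The vague-quantifier pass of the Ambiguity Detector can be sketched as a term-list scan. The term list and flag shape below are illustrative; per BR-03, the production detector also covers undefined terms, missing error states, and implicit assumptions:

```typescript
// Illustrative vague terms only; not the detector's real vocabulary.
const VAGUE_TERMS = [
  "quickly", "reasonable", "as needed", "appropriate", "soon",
];

interface AmbiguityFlag {
  term: string;
  index: number;             // character offset, used to annotate the spec inline
  type: "vague-quantifier";  // other violation types omitted from this sketch
}

// Scan a P2 spec draft for vague quantifiers before Review Prep.
function detectVagueQuantifiers(spec: string): AmbiguityFlag[] {
  const flags: AmbiguityFlag[] = [];
  const lower = spec.toLowerCase();
  for (const term of VAGUE_TERMS) {
    let from = 0;
    let i: number;
    while ((i = lower.indexOf(term, from)) !== -1) {
      flags.push({ term, index: i, type: "vague-quantifier" });
      from = i + term.length;
    }
  }
  return flags;
}
```

A sentence like "The API must respond quickly under reasonable load" would produce two flags, matching the behaviour described in the DDD spec use case below.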
Six use cases —
two personas, one pipeline.
Feature release doc from Jira ticket
James creates a Jira ticket for a new PATCH /users/{id} endpoint. Maya pastes it into the Autonomous Author. The pipeline extracts intent, identifies doc type, drafts a procedure + API reference, checks compliance, flags 3 violations, and delivers a review-ready draft. Time: under 12 minutes.
P1 · Agile · Intake + Research + Draft + Compliance + Review Prep
DDD spec from feature intent statement
Maya needs to spec a new rate-limiting subsystem for the platform API. She provides a two-sentence intent statement. The pipeline extracts actors, system boundary, preconditions. Draft Agent produces an imperative spec. Ambiguity Detector flags "respond quickly" and "reasonable limit" as undefined. Review-ready in under 18 minutes.
P2 · Waterfall · All agents + Ambiguity Detector
Agile delta update — existing doc
A previously documented endpoint gets a new optional query parameter. Maya selects Agile delta mode, pastes the PR diff. The pipeline produces a diff-annotated patch — only the changed sections, with unchanged sections preserved. Compliance check runs on the entire updated doc. Delta only.
P1 · Agile delta mode · Intake + Draft (patch) + Compliance
Unknown context — Research Agent gap detection
Maya's intent statement references "the Helix ingestion pipeline" — a system the LLM has no training data on. The Research Agent detects the unknown proper noun, flags it as a context gap, and asks Maya to provide a one-paragraph description before drafting begins. Draft Agent then uses that context. Placeholder inserted where data is still missing.
P2 · Context gap detection · Research Agent · AR-08
Compliance-only pass on existing doc
Maya has a manually-written doc that she suspects has style violations. She pastes it and runs compliance-only mode — skipping intake, research, and drafting. The Compliance Agent runs the full 80-rule check and returns an annotated report. Maya uses this to remediate an existing document without triggering a full pipeline run.
P1 · Compliance-only mode · Compliance Agent standalone
Waterfall spec with version metadata
Maya is producing v2.0 of the platform authentication spec. Waterfall mode attaches version metadata, a change log section scaffold, and a requirements traceability table to the output. Each requirement in the spec is tagged with an ID. Priya's engineering review can reference requirement IDs when raising issues.
P2 · Waterfall mode · Draft Agent + version metadata + traceability
How Maya will know
the tool is working.
Observable, measurable outcomes — not subjective quality impressions. Each criterion is traceable to a specific requirement and verifiable from pipeline logs or Maya's own metrics. These are the acceptance criteria the Autonomous Author must satisfy before any component is considered production-ready.
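BR-01's time criterion, for instance, is verifiable directly from pipeline logs. A sketch, assuming a hypothetical per-stage log shape:

```typescript
// Hypothetical per-stage timing record; the real log schema may differ.
interface StageLog {
  stage: string;      // e.g. "intake", "research", "draft"
  startedAt: number;  // epoch ms
  finishedAt: number; // epoch ms
}

const BR01_LIMIT_MS = 15 * 60 * 1000; // 15 minutes, per BR-01

// Elapsed time from the first stage start to the last stage finish.
function pipelineElapsedMs(logs: StageLog[]): number {
  const start = Math.min(...logs.map(l => l.startedAt));
  const end = Math.max(...logs.map(l => l.finishedAt));
  return end - start;
}

function satisfiesBr01(logs: StageLog[]): boolean {
  return pipelineElapsedMs(logs) <= BR01_LIMIT_MS;
}
```

Checks of this kind keep acceptance mechanical: a run either satisfies the criterion from its own logs or it does not.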
Selected over server-hosted SaaS, Hugging Face Spaces, and Render free tier. Client-side is the only pattern that satisfies C-01 (zero cost), AR-03 (no backend data store), and C-03 (no workflow change) simultaneously. A server-side approach would require authentication, a database, and a deployment pipeline — all of which add cost, complexity, and enterprise data risk.
Selected over OpenAI (cost), Anthropic API (cost), Together AI (slower free tier), and local Ollama (machine dependency). Groq's free tier delivers 300+ tokens/second on Llama 3.1 70B — fast enough for a 5-agent pipeline to complete in under 15 minutes on a standard connection. The abstraction layer must support a swap to Together AI if Groq changes its free tier structure.
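The required abstraction can be sketched as one provider interface. Because Groq and Together AI both expose OpenAI-compatible chat endpoints, a single generic implementation parameterised by base URL and model covers the swap; the base URL below is Groq's documented OpenAI-compatible endpoint, while the model name is a placeholder that may be outdated:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// The pipeline codes against this interface only (C-02); swapping
// providers is then a configuration change, not an architecture change.
interface InferenceProvider {
  name: string;
  complete(messages: ChatMessage[], apiKey: string): Promise<string>;
}

function makeOpenAiCompatibleProvider(
  name: string, baseUrl: string, model: string,
): InferenceProvider {
  return {
    name,
    async complete(messages, apiKey) {
      // Standard OpenAI-style chat completion request shape.
      const res = await fetch(`${baseUrl}/chat/completions`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${apiKey}`, // the writer-provided key
        },
        body: JSON.stringify({ model, messages }),
      });
      if (!res.ok) throw new Error(`${name} request failed: ${res.status}`);
      const data = await res.json();
      return data.choices[0].message.content;
    },
  };
}

// Model name is illustrative; a Together AI swap is one more call like this.
const groq = makeOpenAiCompatibleProvider(
  "groq", "https://api.groq.com/openai/v1", "llama-3.1-70b-versatile");
```

No request is issued until an agent calls `complete` with the writer-provided key, consistent with AR-03's session-only data flow.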