The document that writes itself — explained, reviewed, and owned by its author.
An explainability-first, human-gated documentation pipeline for technical writers — designed for both feature-release authoring and document-driven development, from intake through to a Google Developer Style Guide-compliant first draft. Five agentic stages. One human gate. Zero black-box decisions.
Pipeline Stages · Rules Enforced · Personas Served · Decisions Made
Three forces converged. Simultaneously.
For an autonomous documentation pipeline to be architecturally viable, three conditions had to be true at once: LLM output quality had to reach professional-grade for technical prose, a widely-adopted style guide had to be machine-checkable with sufficient specificity, and agentic tooling had to be mature enough to support stateful multi-step workflows without bespoke orchestration infrastructure. In 2025, all three arrived.
LLMs reached professional-grade technical prose
Llama 3.1 405B and Mixtral 8x22B produce technical documentation that satisfies style guide requirements without extensive post-editing. This isn't incremental — it crosses the threshold from "assisted drafting" to "first-draft quality" for well-defined document types. The accuracy ceiling for hallucination-prone content (API endpoints, parameter types) is managed by grounding the Draft Agent in context the writer provides, not model memory.
The Google Developer Style Guide is machine-checkable
Unlike generic style guides, the Google Developer Style Guide for technical writers is specific enough to encode as deterministic rules. Active vs. passive voice, second-person address, present-tense verbs, heading capitalisation, avoidance of Latin abbreviations — these are binary checks, not aesthetic judgments. For the first time, "does this document comply?" is a question an LLM can answer with citations, not one that requires a human editor.
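Checks like these are deterministic enough to sketch directly. A minimal sketch, assuming illustrative rule ids and two sample rules — not the guide's own numbering or the project's actual rule asset:

```typescript
// Illustrative violation shape; the shipped 80-rule JSON defines the real schema.
interface Violation {
  rule: string;       // named rule the finding cites
  index: number;      // character offset in the draft
  excerpt: string;    // the offending text
  suggestion: string; // proposed fix
}

const LATIN_ABBREVIATIONS: Record<string, string> = {
  "e.g.": "for example",
  "i.e.": "that is",
  "etc.": "and so on",
};

// Binary check: flag Latin abbreviations the style guide tells writers to avoid.
function checkLatinAbbreviations(text: string): Violation[] {
  const violations: Violation[] = [];
  for (const [abbr, replacement] of Object.entries(LATIN_ABBREVIATIONS)) {
    let index = text.indexOf(abbr);
    while (index !== -1) {
      violations.push({ rule: "word-list/latin-abbreviations", index, excerpt: abbr, suggestion: replacement });
      index = text.indexOf(abbr, index + abbr.length);
    }
  }
  return violations;
}

// Binary check: a common passive-voice pattern ("is/are/was/were" + past participle).
function checkPassiveVoice(text: string): Violation[] {
  const passive = /\b(is|are|was|were|been|being)\s+(\w+ed|given|shown|taken|made)\b/gi;
  return [...text.matchAll(passive)].map((m) => ({
    rule: "voice/active",
    index: m.index ?? 0,
    excerpt: m[0],
    suggestion: "Rewrite in active voice",
  }));
}
```

Each check either fires or it doesn't, and every finding carries the rule it cites — which is what makes the compliance question answerable without an editor.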
Agentic pipelines are production-grade without infra overhead
LangGraph's stateful agent graph pattern, adapted to JavaScript, enables a multi-agent pipeline that runs entirely in a browser — no orchestration server, no managed infrastructure, no vendor lock-in. Each agent has defined inputs, outputs, and a state machine. The pipeline is inspectable, testable, and reproducible from a static GitHub Pages deploy. This is architecturally significant: agentic maturity no longer requires a cloud backend.
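One way to read "stateful agent graph without orchestration infrastructure" is a plain typed loop over agents that each take and return the shared state. A sketch only — the names are assumptions, and the project's actual agent contracts are developed on Page 04:

```typescript
// Illustrative pipeline state; real agents would carry richer fields.
interface PipelineState {
  input: string;
  draft?: string;
  reasoningCards: string[];
}

// Each agent has a defined input (the state), a defined output (a new state),
// and appends its reasoning card so the run is inspectable afterwards.
interface Agent {
  name: string;
  run(state: PipelineState): Promise<PipelineState>;
}

// Sequential runner: no orchestration server, just a reducer loop.
async function runPipeline(agents: Agent[], initial: PipelineState): Promise<PipelineState> {
  let state = initial;
  for (const agent of agents) {
    state = await agent.run(state);
  }
  return state;
}
```

Because the final state accumulates every agent's card, the whole run can be audited or replayed from the browser session alone.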
Client-side AI is fast enough to be useful
Groq's inference API delivers Llama 3.1 responses at 300+ tokens/second. A full five-agent pipeline — intake, research clarification, draft, compliance check, review prep — completes in under 45 seconds on a standard internet connection. The Autonomous Author's entire pipeline runs from the browser, calling Groq directly. No server round-trips. No data leaving the writer's session without their knowledge. Speed and privacy are now simultaneously achievable.
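A direct browser-to-Groq call needs nothing beyond `fetch` against Groq's OpenAI-compatible chat endpoint. A hedged sketch — the model id and function names are assumptions, so check Groq's current model list before using them:

```typescript
// Builds the request against Groq's OpenAI-compatible chat completions API.
function buildGroqRequest(apiKey: string, prompt: string) {
  return {
    url: "https://api.groq.com/openai/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`, // writer-provided key, never a server's
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "llama-3.1-70b-versatile", // assumption; any Groq-hosted model id works
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Calls Groq directly from the browser — no server round-trip.
async function draft(apiKey: string, prompt: string): Promise<string> {
  const { url, options } = buildGroqRequest(apiKey, prompt);
  const response = await fetch(url, options);
  const data = await response.json();
  return data.choices[0].message.content;
}
```

Separating request construction from the network call keeps the privacy-relevant part — what leaves the session, and with whose key — inspectable and testable.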
Two writers. Two broken workflows.
Technical writing dysfunction manifests differently depending on whether the writer is downstream or upstream of code. Both are real, both are costly, and existing tools address neither with architectural rigour.
Context starvation
The writer is last to know. The ticket arrives while the SMEs are in standups. The first 2–3 days of every doc sprint are spent chasing context that should have been packaged with the feature work.
Compliance as afterthought
Style guide checks happen informally in review, if at all. Violations accumulate. Published docs carry passive voice, Latin abbreviations, inconsistent second-person address — all detectable, all preventable.
DDD ambiguity ships as code
A spec that says "the system should respond quickly" causes a developer to make a judgment call. That judgment call becomes a bug. The DDD author has no tool that treats vagueness as a defect before the spec leaves their hands.
Four principles. The design cannot break them.
These are not best-practice guidelines. They are architectural constraints that every component of The Autonomous Author must satisfy. Any design decision that violates a principle requires a documented rebuttal explaining the trade-off and the alternative considered.
Human Gate — Non-negotiable

The Autonomous Author drafts, checks, and structures. The writer authors. Every agent in the pipeline produces material for the writer's review, not for direct publication. The human gate — review and approval — is not optional, not bypassable, not a checkbox. It is the architectural contract between the pipeline and the writer. The tool makes the writer faster, better-informed, and more consistent. It does not make decisions on their behalf.

XAI — Explainability First · Reasoning Cards

Every agent in the pipeline surfaces a reasoning card before passing control to the next stage. The card states what the agent understood, what it decided, why it decided it, and what it was uncertain about. Compliance violations are cited against named rules. Draft decisions are traceable to context the writer provided. Confidence scores accompany every output. The writer should be able to audit the pipeline's reasoning without running it again.

Google Developer Style Guide · 80-Rule Enforcer

Style guide compliance is enforced during the pipeline, before the writer sees the draft. The Compliance Agent runs against every draft before review prep begins — not as a linting step the writer can skip, but as a mandatory pipeline stage that annotates violations, cites the rule number, and suggests a fix. The writer receives a document that has already been checked, annotated, and is ready for their decisions on each flagged item. Compliance is structural, not optional.

Workflow-Neutral · Client-Side · Zero Install

The Autonomous Author does not require the writer to adopt a new system, learn a new process, or restructure their existing workflow. Intake accepts whatever the writer currently works from — a Jira ticket, a PR description, a Notion brief, free text. Output is Markdown, HTML, or clipboard-ready text. The pipeline wraps around existing motion; it does not redirect it. An individual writer at any enterprise, using any documentation system, can run the pipeline from a browser tab and return to their normal tools with a compliant first draft.

Three layers. One coherent contract.
Each layer has a single responsibility and a clean interface to the layers above and below it. XAI reasoning cards and compliance reports flow upward from Layer 2 to Layer 1. The writer's input and decisions flow downward from Layer 1 into Layer 2. Security and persistence constraints are enforced at Layer 3 — they cannot be overridden by the application layer above them.
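The reasoning cards flowing upward can be modelled as a small typed payload. The field names here are illustrative, not the project's schema — the per-agent schemas are developed on Page 04:

```typescript
// Hypothetical reasoning-card shape; each agent defines its own real schema.
interface ReasoningCard {
  agent: string;           // which stage produced the card
  understood: string;      // what the agent took from its input
  decision: string;        // what it decided
  rationale: string;       // why it decided it
  uncertainties: string[]; // what it was unsure about
  confidence: number;      // 0–1, accompanies every output
  citedRules: string[];    // named rules, for compliance findings
}
```

Because the card is plain data, Layer 1 can render it, persist it, and let the writer audit a run without re-executing any agent.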
This is a concept overview. The full technical architecture — client-side stack, agent topology, state management, Groq API integration, and GitHub Pages deployment — is developed in Page 03 (Design) and Page 06 (Infrastructure).
Presentation & Experience
Intake form, live pipeline monitor, review UI with diff view, compliance report, export panel. The only thing the writer sees. Persona selection (P1/P2) and Agile/Waterfall mode live here.
Agent Pipeline
Five sequential agents — Intake, Research, Draft, Compliance, Review Prep — plus the DDD-exclusive Ambiguity Detector. Each agent has a defined input contract, output schema, and XAI reasoning card. Full detail on Page 04.
Client-Side Infrastructure
GitHub Pages hosting, Groq API (writer-provided key, localStorage), IndexedDB for session persistence, compliance rule JSON as a versioned static asset. Zero server. No data exfiltration beyond the writer's own API calls. Full detail on Page 06.
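Key persistence at Layer 3 can be sketched against the browser's Storage shape. A minimal sketch, assuming a storage-slot name of our own invention — in the app this would be `window.localStorage`, but typing against an interface keeps the sketch testable outside a browser:

```typescript
// Storage-like shape: window.localStorage satisfies this in the browser.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const API_KEY_SLOT = "autonomous-author.groq-api-key"; // assumed slot name

// The key never leaves the writer's machine; it is only read to sign
// the writer's own Groq API calls.
function saveApiKey(store: KeyValueStore, apiKey: string): void {
  store.setItem(API_KEY_SLOT, apiKey);
}

function loadApiKey(store: KeyValueStore): string | null {
  return store.getItem(API_KEY_SLOT);
}
```

Session documents, being larger and structured, go to IndexedDB; localStorage holds only the small, writer-owned credential.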