The content-pipeline Skill
Module 2 · Section 2 of 5
My content-pipeline skill in Claude Code orchestrates the whole process. It’s not magic — it’s a structured prompt that runs stages in order and hands the output of each stage to the next.
Here’s what it does:
Stage 1: Topic Selection
The pipeline starts with a topic, not a blank page. My topic backlog lives in 04 Domains/Signal Over Noise/SoN Topic Pipeline.md in Obsidian — a running list of ideas, news hooks, and things I want to explore. The first step is picking one and checking that I haven’t already covered it recently.
That check matters. Before the pipeline existed, I occasionally wrote an issue only to discover I’d covered the same ground three months ago. The pipeline runs a vault search first:
~/.bun/bin/qmd search "[topic]" --collection areas -n 5
If there’s recent coverage, the pipeline surfaces it. You either differentiate the new angle or pick a different topic.
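The dedup gate can be sketched in a few lines. This is illustrative only: the qmd CLI's output format isn't documented here, so assume some wrapper has already parsed the search hits into `(title, published)` pairs, and the 90-day window simply mirrors the "three months ago" incident above.

```python
from datetime import date, timedelta

# Hypothetical sketch of the "already covered?" gate. Assume search hits
# from ~/.bun/bin/qmd have been parsed into (title, published) pairs
# upstream; qmd's real output format isn't shown in this chapter.

def recent_coverage(hits: list[tuple[str, date]], today: date,
                    window_days: int = 90) -> list[str]:
    """Return titles of hits published inside the look-back window."""
    cutoff = today - timedelta(days=window_days)
    return [title for title, published in hits if published >= cutoff]

hits = [
    ("Agents vs. pipelines", date(2025, 1, 10)),
    ("Why briefs beat links", date(2024, 6, 2)),
]
overlap = recent_coverage(hits, today=date(2025, 2, 1))
# A non-empty overlap means: differentiate the angle or pick another topic.
```
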
Stage 2: Research
This is where the newsletter-researcher agent takes over. It’s a dedicated agent with web search access whose only job is building a research brief.
The brief it produces is not a list of links. It’s a structured document: the core argument or finding, three to five supporting sources with the relevant excerpts, data points worth citing, and opposing views worth addressing. This gives the newsletter-writer agent something to work with rather than a raw pile of URLs.
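To make the brief's required parts concrete, here is its shape as a data structure. The real brief is a prose document, and every name below is mine rather than the newsletter-researcher agent's actual schema; the point is only that each field from the paragraph above is mandatory, not optional.

```python
from dataclasses import dataclass, field

# Illustrative schema for the research brief; field names are my own,
# not the newsletter-researcher agent's documented format.

@dataclass
class Source:
    url: str
    excerpt: str  # the relevant passage, not just a bare link

@dataclass
class ResearchBrief:
    core_argument: str                        # the finding the issue hangs on
    sources: list[Source]                     # three to five, with excerpts
    data_points: list[str] = field(default_factory=list)
    opposing_views: list[str] = field(default_factory=list)

    def is_usable(self) -> bool:
        """A draft-ready brief needs an argument and sourced excerpts."""
        return bool(self.core_argument) and 3 <= len(self.sources) <= 5
```
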
The quality of the brief determines the quality of the draft. An agent writing from a strong brief with clear sources produces drafts that are specific and grounded. An agent writing from weak inputs produces the vague, hedging prose that makes AI newsletters forgettable.
Stage 3: Outline
The research brief doesn’t automatically become a structure. I still decide what shape the issue takes: what the opening hook is, what the main argument is, which sources support which points, and where the practical advice sits.
This is a short step — fifteen to twenty minutes with the brief in front of me — but it’s not one I’ve fully automated. The structure of an issue reflects an editorial judgement about what will land with readers, and that judgement currently needs a human. You might reach a different conclusion after experimenting with your own setup.
Stage 4: Draft
The newsletter-writer agent is an Opus-model agent with detailed instructions about writing style, anti-slop patterns, voice characteristics, and structural requirements. It takes three inputs: the research brief, the outline, and a VOICE.md profile.
The agent runs on Opus rather than Sonnet because draft quality matters more than speed here. A better first draft means less editing time, which is where you actually save hours.
The draft it produces is not published directly. It’s a first draft. The agent knows this — the instructions explicitly say to write without self-censoring, produce the full-length draft, and flag anything that rests on a guess rather than a fact. That last instruction is important for catching fabrication.
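Those flagged guesses are worth collecting programmatically before anything ships. The marker syntax below (`[GUESS: ...]`) is an assumption for illustration, not the agent's documented convention; the idea is just that every flag is a fabrication risk until a real source replaces it.

```python
import re

# Assumed marker convention for illustration: the writer agent wraps
# uncertain claims in [GUESS: reason]. Not the agent's documented syntax.
GUESS_PATTERN = re.compile(r"\[GUESS:\s*(.+?)\]")

def flagged_guesses(draft: str) -> list[str]:
    """Collect every flagged guess so each can be verified or cut."""
    return GUESS_PATTERN.findall(draft)

draft = (
    "Adoption roughly doubled [GUESS: no 2024 figure found] last year, "
    "while the API price held steady."
)
# flagged_guesses(draft) surfaces the unverified claim for fact-checking.
```
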
Stages 4.25 through 4.5: The Intra-Draft Process
This is something I borrowed from Ann Handley’s writing framework and encoded into the pipeline. After the first draft exists, there are sequential passes before the review agent sees it:
- A trimming pass that cuts clichés, fat phrases, and sentences that don’t advance the argument
- An empathy pass that reads the draft as the subscriber, not the writer — asking “does this serve them or me?”
- A voice and style pass that adds personality, adjusts the tone, and checks the headers
These passes are built into the newsletter-writer agent’s instructions. The agent doesn’t just produce a first draft and stop — it refines through these stages before handing off.
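A toy version of the trimming pass shows the mechanic: swap fat phrases for their lean equivalents. The real pass lives in the newsletter-writer agent's prompt and handles far more than a lookup table; this phrase list is illustrative, not the agent's.

```python
# Illustrative fat-phrase table; the real trimming pass is a prompt
# instruction, not a lookup, and covers clichés and dead sentences too.
FAT_PHRASES = {
    "in order to": "to",
    "at this point in time": "now",
    "due to the fact that": "because",
}

def trim(text: str) -> str:
    """One pass over the fat-phrase table; case handling omitted for brevity."""
    for fat, lean in FAT_PHRASES.items():
        text = text.replace(fat, lean)
    return text
```
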
Stage 5: Review
The draft-reviewer agent handles quality checking and fixes issues directly. It doesn’t produce a report for me to implement — it edits the file, then reports what changed.
What it checks: AI slop phrases (the kind that make readers unconsciously disengage), staccato fragment clusters, weak openings, fat phrases, reading level, and voice alignment. If the draft sounds like generic AI output rather than me, the reviewer flags specific paragraphs and rewrites them.
This is the gate that keeps AI-generated content from being obviously AI-generated.
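One of those checks, the staccato-fragment-cluster scan, can be approximated mechanically: look for runs of consecutive very short sentences. The thresholds below are my guesses, not the draft-reviewer agent's actual rules.

```python
# Rough staccato detector: counts clusters of `run` consecutive
# sentences with at most `max_words` words. Thresholds are arbitrary
# illustrations, not the draft-reviewer agent's real configuration.

def staccato_clusters(text: str, max_words: int = 4, run: int = 3) -> int:
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    count, streak = 0, 0
    for s in sentences:
        if len(s.split()) <= max_words:
            streak += 1
            if streak == run:  # count each cluster once, at its third member
                count += 1
        else:
            streak = 0
    return count
```
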
Stage 6: Polish
A final human read. This is not the same as Stage 5 — the reviewer handles the mechanical quality checks. This read is for editorial judgement: does the issue actually say something? Is the opening compelling? Does it earn five minutes of a reader’s time?
This is typically fifteen to twenty minutes and cannot be delegated. It’s where I make the call to cut a section that’s technically fine but doesn’t fit, or to add a specific detail that only I would know to add.
Stage 7: Queue
Schedule the issue in Kit.com and prepare the social snippets. The Kit CLI handles the scheduling — more on that in Module 4.
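Put together, the whole pipeline reduces to one shape: ordered stages, each handed the previous stage's output. The stage names below come from this chapter; the function bodies are stand-ins, since in the real skill each stage is an agent, a prompt pass, or a human step.

```python
# Stand-in stage functions; in the real pipeline each is an agent,
# a prompt pass, or a human step, not a one-liner.
def select_topic(t): return t                    # Stage 1: backlog + dedup
def research(t): return f"brief({t})"            # Stage 2: newsletter-researcher
def outline(b): return f"outline({b})"           # Stage 3: human shapes the issue
def draft(o): return f"draft({o})"               # Stage 4 + intra-draft passes
def review(d): return f"reviewed({d})"           # Stage 5: draft-reviewer edits
def polish(d): return f"final({d})"              # Stage 6: final human read
def queue(d): return f"queued({d})"              # Stage 7: schedule in Kit.com

def run_pipeline(topic: str) -> str:
    """Run the stages in order, handing each output to the next."""
    artifact = topic
    for stage in (select_topic, research, outline, draft, review, polish, queue):
        artifact = stage(artifact)
    return artifact
```
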