Module 3: Skills That Edit and Fix
There’s a failure mode in skill design that looks useful until you’ve lived with it for a week.
The skill runs. It produces a beautiful report. Seventeen issues found, all documented in a structured table, severity rated, examples quoted. You read it. Now you have to go fix seventeen things. The skill made a report. You have to do the work.
This is the critic pattern. It’s seductive — crisp output, clear findings, looks like value. But the value isn’t in the list. The value is in the fixed file.
The alternative: editors, not critics.
The Principle
Any skill that reviews, audits, or analyzes content must apply its findings directly. Not “consider changing X to Y.” Change X to Y. Report what changed, before and after. Leave the file better than you found it.
This is a design principle baked into the system’s CLAUDE.md:
Any agent that reviews, audits, or analyzes content MUST have Edit tool in its toolset. Apply fixes directly, not just recommend them. Report what was changed (before/after), not what should change.
The Edit tool is the difference. Without it, you have a critic. With it, you have an editor.
writing-quality: A Worked Example
writing-quality is the unified writing audit skill. It checks for AI slop patterns (overused phrases, staccato fragments, passive voice clusters), structural craft problems (weak openings, exit points, no momentum), reading level violations, and grammar credibility issues (Frankenwords, fat phrases, misplaced modifiers).
It doesn’t produce a report for the author to act on. It fixes what it finds:
**What to fix automatically:**
- Tier 1 slop phrases (always fix)
- Tier 2 slop phrases (fix unless context-dependent)
- Passive voice (convert to active)
- Staccato fragment clusters (combine)
- Weak openings (strengthen)
- Abstract adjectives (make specific)
- Fat phrases and Frankenwords
- Reading level violations
The skill body draws a clear line: auto-fix structural problems; flag for author decision anything that changes meaning or involves genuine stylistic choice. Editors make calls within their domain. They don’t rewrite the author’s argument.
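That split between automatic fixes and flagged judgment calls can be sketched as a small pass over the text. This is a minimal illustration, not the skill's actual implementation: the phrase tables and the `audit` helper are invented for the example, and a real skill's tier lists are far larger.

```python
import re

# Illustrative data only -- a real skill's tier lists are much larger.
TIER_1_SLOP = {"delve into": "examine"}          # always safe to fix
FLAG_PATTERNS = [r"\bI would argue\b"]           # meaning-changing: flag, don't fix

def audit(text: str):
    """Apply safe fixes directly; collect judgment calls as flags."""
    changes, flags = [], []
    for phrase, replacement in TIER_1_SLOP.items():
        pattern = re.compile(re.escape(phrase), re.IGNORECASE)
        if pattern.search(text):
            changes.append((phrase, replacement))   # before/after record
            text = pattern.sub(replacement, text)
    for pat in FLAG_PATTERNS:
        for match in re.finditer(pat, text):
            flags.append(match.group(0))            # author decides these
    return text, changes, flags

fixed, changes, flags = audit("Let us delve into the design.")
```

The design point is the two buckets: anything in the fix table gets changed and recorded; anything in the flag list is surfaced untouched.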
voice-editor: The Companion Skill
voice-editor works alongside writing-quality in the review pipeline. It has a different scope: not writing craft generally, but alignment to a specific voice profile.
The skill loads a VOICE.md file — a structured description of the author’s patterns, preferences, and characteristic moves — and compares the draft against it. Where the draft sounds like generic AI output instead of the author, it edits.
The key design decision: voice-editor doesn’t duplicate the craft checks that writing-quality handles. One skill covers one domain. Each does its job without stepping on the other.
What “Tools” This Requires
Skills that edit need Read and Edit (or Write) in their tool declaration. Without Read, the skill can’t see the file content to work on. Without Edit or Write, it can’t apply changes.
This is why editors require the agent form factor. A pure skill — no tools declared — can only shape text Claude already has in context. To work on files on disk, you need tool access.
Here’s how writing-quality is declared:
```yaml
---
name: writing-quality
tools: [Read, Edit, Write]
---
```
That’s the minimal declaration for a skill that opens files and writes changes back.
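A quick way to see why the declaration matters is a check that a skill's frontmatter actually grants editing capability. The `parse_frontmatter` and `can_edit` helpers below are hypothetical, written for this example only, and assume the simple `key: value` frontmatter shape shown above.

```python
# Hypothetical check (not part of any official toolchain) that a skill's
# frontmatter declares what an editor needs: Read plus Edit or Write.

def parse_frontmatter(text: str) -> dict:
    """Parse simple `key: value` pairs between the `---` fences."""
    lines = text.strip().splitlines()
    assert lines[0] == "---" and "---" in lines[1:], "missing frontmatter fences"
    fields = {}
    for line in lines[1:lines.index("---", 1)]:
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

def can_edit(declaration: str) -> bool:
    """True if the skill can both read files and write changes back."""
    tools_field = parse_frontmatter(declaration).get("tools", "")
    declared = {t.strip() for t in tools_field.strip("[]").split(",")}
    return "Read" in declared and bool({"Edit", "Write"} & declared)

decl = """---
name: writing-quality
tools: [Read, Edit, Write]
---"""
```

Without `Read` the skill is blind; without `Edit` or `Write` it is back to being a critic.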
The Workflow Pattern
Skills that edit typically follow this sequence:
- Read the target file or accept content passed in
- Classify the issues (what category, what severity)
- Fix each issue using Edit, in priority order
- Report what changed — before and after for each edit
The report matters. Not because the author needs to approve the edits — they’re already applied — but because the author needs to understand what the skill did. A black box that changes your file isn’t trustworthy. An editor that shows its work is.
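The four-step sequence above can be sketched end to end. Everything here is illustrative: the `FIXES` table stands in for whatever the skill's audit rules produce, and a real editor would use the Edit tool rather than rewriting the whole file.

```python
from pathlib import Path

# Hypothetical fix table -- a real skill derives this from its audit rules.
FIXES = {"utilize": "use", "in order to": "to"}

def edit_file(path: Path) -> list[dict]:
    """Read, fix in priority order, write back, and report before/after."""
    text = original = path.read_text()        # 1. read the target file
    report = []
    for old, new in FIXES.items():            # 2-3. classify, then fix in order
        if old in text:
            report.append({"before": old, "after": new})
            text = text.replace(old, new)
    if text != original:
        path.write_text(text)                 # apply changes, don't recommend them
    return report                             # 4. show the work
```

The return value is the report: the file is already fixed, and the author gets a before/after trail instead of a to-do list.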
Building Your First Editor
Take a skill you already have. Ask: what does it find? Is there any finding it consistently makes that could be fixed automatically, without needing the author’s judgment?
Start there. Add Read and Edit to the tools declaration. Add an “apply fixes” section to the process. Test it on a real file.
The discipline is in the line between automatic fixes and author decisions. Everything that’s clearly wrong (broken format, style guide violation, factual pattern problem) gets fixed. Everything that’s genuinely ambiguous gets flagged. Never the reverse.
Next: when skills need more reach, they become agents.
Check Your Understanding
Answer all questions correctly to complete this module.
1. What is the 'editors, not critics' principle?
2. Why does the writing-quality skill report what it changed (before/after)?
3. What tools must a skill that edits files include in its declaration?