A corrective wrapper for Claude Code. It re-reads a rules file at the start of every turn so the model doesn't drift in long sessions.
Helps prevent:
- Fabricated numbers (claims that sound plausible but can't be reproduced from the repo).
- Silent plan changes (agreeing to one approach and quietly doing another when it hits a snag).
- Out-of-scope edits (touching files or directories you didn't ask it to).
- Deflection when corrected (answering "here's what should have happened" instead of "why I did what I did").
These are failure modes we ran into during development. The hook fixed them.
Two files. No dependencies.
Claude Code's `settings.json` supports a `UserPromptSubmit` hook: the stdout of whatever command it runs is injected into the model's context before every user turn. We point it at a rules file:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "cat ~/.claude/critical_rules.md" }
        ]
      }
    ]
  }
}
```

The rules get re-read every turn, so they don't drift out of context in long sessions.
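The injection step can be sketched in plain shell. This is a standalone simulation of what Claude Code does with the hook's stdout; the rules path is a temp stand-in for `~/.claude/critical_rules.md`, not the real file:

```shell
# Simulate a UserPromptSubmit hook: Claude Code runs the configured
# command and prepends its stdout to the turn's context.
# (Temp file stands in for ~/.claude/critical_rules.md.)
rules="$(mktemp)"
printf '%s\n' '# CRITICAL RULES' '- Never edit files outside the stated scope.' > "$rules"

hook_cmd="cat $rules"            # same shape as the settings.json "command" entry
injected="$(eval "$hook_cmd")"   # Claude Code captures stdout like this

printf 'Injected context:\n%s\n' "$injected"
```

Because the command runs fresh each turn, edits to the rules file take effect on the very next prompt, with no session restart.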
1. Copy `critical_rules.template.md` to `~/.claude/critical_rules.md`.
2. Open it and search for `<<`. Every match is a placeholder. Replace each one with a real failure from your own work, with concrete details: what happened and what it cost. Delete any rule that doesn't match a failure you've had.
3. Add the hook to `~/.claude/settings.json`:

   ```json
   {
     "hooks": {
       "UserPromptSubmit": [
         {
           "hooks": [
             { "type": "command", "command": "cat ~/.claude/critical_rules.md" }
           ]
         }
       ]
     }
   }
   ```

4. Start a new Claude Code session. The rules should appear as a system reminder at the top of each turn.
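The copy-and-fill steps can be sketched as a standalone script. `CLAUDE_DIR` defaults to a temp sandbox here so the snippet runs anywhere; in real use point it at `~/.claude`. The inline template is a toy stand-in for the repo's `critical_rules.template.md`:

```shell
# Sketch of steps 1-2: install the rules file, then find placeholders.
# CLAUDE_DIR is a temp sandbox by default; use ~/.claude for real.
CLAUDE_DIR="${CLAUDE_DIR:-$(mktemp -d)}"
mkdir -p "$CLAUDE_DIR"

# Toy stand-in for critical_rules.template.md:
cat > "$CLAUDE_DIR/critical_rules.md" <<'EOF'
# CRITICAL RULES
- Never repeat <<FAILURE YOU HAD>>. Cost: <<WHAT IT COST>>.
EOF

# Count lines still containing << markers -- each is a placeholder
# you must replace with a real incident before the file is usable:
placeholders="$(grep -c '<<' "$CLAUDE_DIR/critical_rules.md")"
echo "placeholder lines to fill: $placeholders"
```

Once `grep -c '<<'` reports zero, every rule is backed by a concrete incident and the file is ready for the hook.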
- Keep rules backed by real incidents. Generic rules won't change behavior.
- Keep the file under 200 lines. Cut dead rules.
- Review outputs. This reduces the named failure modes; it doesn't eliminate errors.
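The line-count budget is easy to check mechanically. A small maintenance sketch, using a temp stand-in so it runs anywhere; set `RULES` to `~/.claude/critical_rules.md` in real use:

```shell
# Flag the rules file once it passes the 200-line budget.
# RULES defaults to a throwaway temp file for this standalone sketch.
RULES="${RULES:-$(mktemp)}"
[ -s "$RULES" ] || printf '# CRITICAL RULES\n- example rule\n' > "$RULES"

lines=$(wc -l < "$RULES")
if [ "$lines" -le 200 ]; then
  echo "ok: $lines lines"
else
  echo "over budget: $lines lines -- cut dead rules"
fi
```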
Built at SYNTEX LLC. https://syntexsecurity.com
Authored by Aaron Willhite with Claude.
MIT.