We build operator-focused security tools. AI coding assistants are part of how we do that. This policy is not anti-AI -- it is pro-accountability.
Think of AI assistance like spellcheck. It catches typos, suggests corrections, and speeds up the mechanical parts of writing. But you are still responsible for your words and their consequences.
You own every line you submit. You must be able to explain what it does and how it interacts with the rest of the system without asking your AI to explain it back to you.
Everything else follows from that.
- Disclose your tools. Note what you used in your PR description -- Claude Code, Copilot, Cursor, whatever. No specific format required.
- Review AI-generated text before posting. Issues, discussions, and PR descriptions must reflect your understanding, not a language model's first draft. Read it, cut the filler, make sure it says what you mean.
- No AI-generated media. No generated images, logos, audio, or video. Text-based diagrams (ASCII art, Mermaid) and code are acceptable.
- Unreviewed output gets closed. Hallucinated APIs, boilerplate that ignores project conventions, suggestions you clearly did not run -- these get closed without review. We are not a QA service for your AI's output.
Transparent by design means knowing what the code does and why it is there. Tested under pressure means every change was understood by the person who submitted it. AI makes capable engineers faster. It does not replace the understanding that makes contributions trustworthy.
Every pull request is reviewed by a human. Submitting work you do not understand shifts that burden onto maintainers. That is not how we operate.
Use AI to learn the codebase. Read the code it generates. Run it. Break it. Then submit work that reflects your understanding. We will help you through review -- that deal only works if the code is yours.