An AI agent skill that makes you learn concepts the hard way: by explaining them out loud and getting your gaps exposed.
/feynman JWT signatures → the AI plays a strict critic, points out ONE gap in your explanation per turn, and loops until you actually understand it (or you tap out).
No summaries. No hand-holding. The AI never gives you the answer — you have to produce it yourself. That's the whole point.
An implementation of the Feynman Technique as an interactive agent skill. Named after physicist Richard Feynman's learning method: if you can't explain something simply, you don't understand it.
The skill acts as the missing piece most learners lack — a critic who stress-tests your explanation and refuses to let you get away with fluent-sounding BS.
Self-study suffers from two well-documented illusions:
- Fluency illusion: re-reading or re-explaining feels like learning, but the explanation gets smoother without getting deeper.
- Illusion of Explanatory Depth (IOED): people believe they can explain mechanisms they actually can't.
The Feynman Technique alone doesn't fix these: you're still grading your own homework. This skill adds the missing critic.
Pick your platform:
- Claude Code (drop the skill into .claude/skills/)
- Copilot CLI
- Gemini CLI
- Codex CLI
- Generic Claude API (system prompt version)
Full installation guide: INSTALL.md.
/feynman [concept you want to master]
Example:
/feynman TCP three-way handshake
The AI will:
- Ask you to explain it in your own words
- Quote you, point out ONE gap, ask a probing question
- Repeat until 4 of 5 mastery criteria are met (or you say "done")
- Save a session log you can revisit or resume
User: /feynman DNS
AI: 🎓 Feynman session started: DNS
Rules: explain simply, I point out ONE gap per turn, say "done" to wrap up.
Go ahead.
User: DNS is like a phone book. You type a domain and it gives you an IP.
AI: ✅ The phone-book analogy captures the indexing idea.
🔍 [mechanism-blackbox] spot:
> "gives you an IP"
Who does the lookup, and where? A phone book has a shelf.
💬 When you visit a brand-new site, who does your computer ask first?
User: My computer asks my router, I think?
AI: ✅ You're tracing a path, which is the right instinct.
🔍 [mechanism-blackbox] spot:
> "my router, I think"
The router forwards it somewhere. The hedging suggests this is the black box.
💬 What does the router do with that query?
[... continues until mastery ...]
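The turn structure in that transcript can be sketched as a loop. This is a minimal illustration only: the real skill is a prompt, not code, and every name below (`feynman_session`, `find_one_gap`, `mastery_score`) is made up for the sketch.

```python
# Sketch of the /feynman turn loop: user explains, critic surfaces ONE gap,
# repeat until 4 of 5 mastery criteria are met or the user says "done".
def feynman_session(concept, turns, find_one_gap, mastery_score):
    """Run the explain -> critique loop and return the session transcript."""
    transcript = [f"session: {concept}"]
    for explanation in turns:          # each turn's user explanation
        if explanation == "done":      # user taps out
            break
        transcript.append(explanation)
        if mastery_score(transcript) >= 4:   # 4-of-5 rule
            break
        gap = find_one_gap(explanation)      # exactly ONE gap per turn
        transcript.append(f"critic: {gap}")
    return transcript

# Illustrative run: the stub critic never grants mastery, so the user taps out.
log = feynman_session(
    "DNS",
    ["DNS is a phone book.", "done"],
    lambda text: "who does the lookup?",
    lambda transcript: 0,
)
```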
- 🎯 7-category gap taxonomy with priority ordering
- 🔁 Multi-turn loop — one gap at a time, strict
- 🏁 4-of-5 mastery detection — knows when to stop
- 📁 Session logs — markdown files you can resume, revisit, or share
- 🌐 Language mirroring — AI replies in whatever language you explain in
- 🌍 i18n support — English + Korean shipped, more via community
- 🔌 5+ agent platforms supported
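A saved session log might look like the minimal sketch below. The exact format and layout are assumptions; the transcript lines are taken from the DNS example above.

```markdown
# Feynman session: DNS

Status: in progress (2 of 5 criteria met)

## Turn 1
> DNS is like a phone book. You type a domain and it gives you an IP.

Gap [mechanism-blackbox]: who does the lookup, and where?
```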
| Code | Category | What it catches |
|---|---|---|
| [factual-error] | Wrong claim | "HTTP is always encrypted" |
| [jargon-dodge] | Term defined by itself | "Recursion is when a function recurses" |
| [causal-gap] | No "why" | Describing what without motivation |
| [mechanism-blackbox] | Magic steps | "it just works" |
| [boundary-blur] | No differentiation | Can't tell X from Y |
| [broken-analogy] | Metaphor breaks | Analogy fails under stress |
| [edge-case-blind] | Happy path only | No handling of weird inputs |
Full details: docs/gap-taxonomy.md.
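You can think of the taxonomy as a priority-ordered list: when an explanation exhibits several gaps, the critic reports the highest-priority one. A minimal sketch, assuming the priority follows the table order (the skill itself encodes this in its prompt, not in code):

```python
# Gap taxonomy as a priority-ordered list (order assumed from the table above).
GAP_TAXONOMY = [
    ("factual-error",      "wrong claim"),
    ("jargon-dodge",       "term defined by itself"),
    ("causal-gap",         "no 'why'"),
    ("mechanism-blackbox", "magic steps"),
    ("boundary-blur",      "no differentiation"),
    ("broken-analogy",     "metaphor breaks under stress"),
    ("edge-case-blind",    "happy path only"),
]

def highest_priority_gap(found_codes):
    """Return the first taxonomy code (by priority) present in found_codes."""
    for code, _ in GAP_TAXONOMY:
        if code in found_codes:
            return code
    return None
```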
- Term Independence — defines core term without using itself
- Causal Chain — can explain why it exists
- Mechanism Transparency — step-by-step, no black boxes
- Boundary Differentiation — distinguishes from adjacent concepts
- Stress-Test Pass — handles at least one edge case
Full criteria: docs/mastery-criteria.md.
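The stopping rule is simple to state: count how many of the five criteria the current explanation satisfies, and stop at four. A minimal sketch (criterion identifiers are slugs of the list above, invented for illustration):

```python
# The five mastery criteria; a session ends once any 4 are satisfied.
MASTERY_CRITERIA = [
    "term-independence",
    "causal-chain",
    "mechanism-transparency",
    "boundary-differentiation",
    "stress-test-pass",
]

def mastery_reached(satisfied):
    """True once at least 4 of the 5 criteria are met."""
    met = sum(1 for c in MASTERY_CRITERIA if c in satisfied)
    return met >= 4
```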
See examples/ for full session transcripts:
- Coding: JWT signature verification
- Science: Transformer attention mechanism
- Business: Product-market fit
The skill ships with language tone guides for English and Korean.
Want to add your language? See i18n/README.md and use the TEMPLATE.md.
Read docs/design-rationale.md for the research grounding (retrieval practice, desirable difficulty, IOED, metacognition) and why specific design decisions were made.
PRs welcome. See CONTRIBUTING.md — especially for new language translations and domain-specific example sessions.
MIT. See LICENSE.
Inspired by Richard Feynman's learning approach, popularized by Scott Young and the Farnam Street blog. Cognitive science grounding: Chi (self-explanation), Dunning-Kruger, Rozenblit & Keil (IOED), Sweller (cognitive load), Roediger & Karpicke (retrieval practice).