A Language Server Protocol implementation for analyzing and improving AI prompt files. Works with `.prompt.md`, `.agent.md`, and `.instructions.md` files — providing LLM-powered semantic analysis directly in VS Code.
- Contradiction Detection — Finds logical, behavioral, and format conflicts
- Semantic Ambiguity — Ambiguity analysis with rewrite suggestions
- Persona Consistency — Detects conflicting personality traits and tone drift
- Cognitive Load Assessment — Warns about overly complex prompts with too many nested conditions
- Semantic Coverage — Identifies gaps in intent handling and missing error paths
- Composition Conflict Analysis — Detects conflicts between a prompt and other prompt files it imports via markdown links
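For instance, a small hypothetical prompt file like the following contains the kind of behavioral conflict the contradiction detector is meant to surface (illustrative only, not a file from this repository):

```markdown
<!-- example.prompt.md (illustrative) -->
You are a helpful assistant.

- Always answer in a single short sentence.
- Provide exhaustive, step-by-step explanations for every question.
```

The two bullets pull in opposite directions, so analysis would be expected to flag them as a behavioral contradiction with line and column positions in the Problems panel.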
- Editor Title Bar — Analyze Prompt button appears when editing prompt files
- Command Palette — `Chat Customizations Evaluations: Analyze Prompt` command
- Problems Panel — All diagnostics appear in the standard VS Code Problems panel with precise line and column locations
| Pattern | Type |
|---|---|
| `*.prompt.md` | Prompt |
| `*.agent.md` | Agent |
| `*.instructions.md` | Instructions |
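The mapping above can be sketched as a small helper. This is a hypothetical illustration; `promptFileType` is not part of the extension's actual API:

```typescript
// Hypothetical helper mirroring the pattern table above.
type PromptKind = "Prompt" | "Agent" | "Instructions" | null;

function promptFileType(fileName: string): PromptKind {
  if (fileName.endsWith(".prompt.md")) return "Prompt";
  if (fileName.endsWith(".agent.md")) return "Agent";
  if (fileName.endsWith(".instructions.md")) return "Instructions";
  return null; // not a supported prompt file
}

console.log(promptFileType("review.prompt.md")); // "Prompt"
console.log(promptFileType("coder.agent.md"));   // "Agent"
console.log(promptFileType("notes.md"));         // null
```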
```sh
git clone https://github.com/microsoft/vscode-chat-customizations-evaluation.git
cd vscode-chat-customizations-evaluation
npm install
npm run build
```

Then press `F5` in VS Code to launch the Extension Development Host.
- Open any supported prompt file in VS Code
- Run `Chat Customizations Evaluations: Analyze Prompt` from the command palette or click the beaker icon in the editor title bar
- View results in the Problems panel (`Ctrl+Shift+M` / `Cmd+Shift+M`)
LLM analysis requires GitHub Copilot — no API keys needed. Just sign in to GitHub Copilot in VS Code.
| Command | Description |
|---|---|
| `Chat Customizations Evaluations: Analyze Prompt` | Run full LLM-powered analysis on the active file |
| Setting | Default | Description |
|---|---|---|
| `chatCustomizationsEvaluations.enable` | `true` | Enable/disable the extension |
| `chatCustomizationsEvaluations.trace.server` | `off` | Trace communication between VS Code and the language server |
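These settings go in your user or workspace `settings.json`, for example:

```json
{
  "chatCustomizationsEvaluations.enable": true,
  "chatCustomizationsEvaluations.trace.server": "off"
}
```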
```
┌─────────────────────────────────────────────────────────────┐
│                      Prompt Document                        │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                       LLM Analysis                          │
│                                                             │
│  • Contradictions & persona consistency                     │
│  • Ambiguity & cognitive load                               │
│  • Coverage gaps & missing error handling                   │
│  • Composition conflicts (cross-file)                       │
│                                                             │
│  Triggered: manually via command                            │
│  Powered by: GitHub Copilot (vscode.lm API)                 │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│              Diagnostics → Problems Panel                   │
└─────────────────────────────────────────────────────────────┘
```
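As a rough sketch of the last step, LLM findings have to be translated into LSP diagnostics (which use 0-based positions) before they can surface in the Problems panel. The `Finding` shape and `toDiagnostic` helper below are assumptions for illustration, not the project's actual types:

```typescript
// Hypothetical sketch: mapping an LLM finding (1-based positions) to an
// LSP-style diagnostic (0-based positions). Not the project's real API.
interface Finding {
  category: string; // e.g. "contradiction", "ambiguity"
  message: string;
  line: number;     // 1-based line reported by the analyzer
  column: number;   // 1-based column
  length: number;   // span of the offending text
}

// Minimal subset of the LSP Diagnostic structure.
interface Diagnostic {
  range: {
    start: { line: number; character: number };
    end: { line: number; character: number };
  };
  severity: number; // 2 = Warning in the LSP spec
  source: string;
  message: string;
}

function toDiagnostic(f: Finding): Diagnostic {
  return {
    range: {
      start: { line: f.line - 1, character: f.column - 1 },
      end: { line: f.line - 1, character: f.column - 1 + f.length },
    },
    severity: 2, // Warning
    source: "chat-customizations-evaluations",
    message: `[${f.category}] ${f.message}`,
  };
}

const d = toDiagnostic({
  category: "contradiction",
  message: "Conflicts with 'Always answer concisely' on line 3.",
  line: 12,
  column: 1,
  length: 40,
});
console.log(d.range.start.line, d.message); // 11 "[contradiction] Conflicts with ..."
```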
```
src/
├── server.ts             # Server entry point, diagnostics
├── types.ts              # Shared TypeScript types and interfaces
├── analyzers/
│   └── llm.ts            # LLM-powered analysis (all diagnostic categories)
└── __tests__/
    └── llm.test.ts       # LLM analyzer tests
client/
├── src/extension.ts      # VS Code extension activation, LLM proxy
└── package.json          # Extension manifest
```
```sh
npm run compile   # Build server only
npm run build     # Build server + client
npm test          # Run tests (vitest)
npx vitest        # Run tests in watch mode
npm run lint      # Run ESLint
```

Press `F5` in VS Code to launch the Extension Development Host for manual testing.
MIT