Agentic music instrument discovery system. Five autonomous AI agents explore a 35-parameter synthesis space, discovering novel sounds through evolutionary search, LLM-directed generation, statistical correlation analysis, and cross-session memory — running overnight while you sleep.
```
GENERATOR ──→ CRITIC ──→ EXPLORER
    ↑            │            │
    └────────────┘            │
         DISPATCH ←───────────┘
             │
      MEMORY (cross-session)
```
Traditional sound design is constrained by what you can imagine and describe. Instrument Lab removes that constraint by deploying agents that explore the parameter space autonomously, score discoveries against your aesthetic criteria, surface emergent patterns you didn't anticipate, and accumulate knowledge across sessions.
The result: a growing library of scored, reasoned variations in sonic territory you wouldn't have found manually — with a full audit trail of what the agents discovered and why.
Generator — Creates parameter variations each iteration using four strategies, mixed in hybrid mode:

- `random` — uniform sampling across all 35 parameters
- `evolutionary` — tournament selection, Gaussian mutation, uniform/group/blend crossover
- `directed` — the LLM proposes specific parameter values based on population analysis
- `edge` — deliberately probes chaos-boundary regions (feedback → 0.98, mod_index → 12, etc.)
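The evolutionary strategy can be sketched roughly as follows. This is an illustrative miniature, not the project's actual `genetics.py`; the parameter names, ranges, mutation rate, and tournament size are all assumptions.

```python
import random

# Hypothetical mini parameter space — (low, high) ranges are illustrative only.
PARAM_RANGES = {"feedback": (0.0, 0.98), "mod_index": (0.0, 12.0), "attack": (0.001, 8.0)}

def tournament_select(population, k=3):
    """population: list of (score, params). Return params of the best in a random k-sample."""
    return max(random.sample(population, k), key=lambda sp: sp[0])[1]

def gaussian_mutate(params, rate=0.3, sigma=0.1):
    """Perturb each parameter with probability `rate`, clamped to its range."""
    child = dict(params)
    for name, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            span = hi - lo
            child[name] = min(hi, max(lo, child[name] + random.gauss(0, sigma * span)))
    return child

def uniform_crossover(a, b):
    """Each parameter comes from parent a or parent b with equal probability."""
    return {name: random.choice((a[name], b[name])) for name in PARAM_RANGES}
```

Group and blend crossover follow the same shape: group crossover swaps whole parameter groups between parents, blend crossover interpolates each value.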
Critic — Evaluates each variation against your aesthetic criteria. Scores 0–10 with reasoning, parameter highlights, concerns, and a `surprise_potential` rating. Runs concurrent API calls (configurable parallelism).
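The Critic's bounded concurrency could look like the sketch below, with the API call stubbed out; function names here are mine, not the project's.

```python
import asyncio

async def score_variation(variation, semaphore):
    """Score one variation; the real Anthropic API round-trip is stubbed with a sleep."""
    async with semaphore:
        await asyncio.sleep(0)                    # stand-in for the API call
        return {"id": variation["id"], "score": 5.0}

async def score_all(variations, parallelism=4):
    """Score every variation concurrently, capped at `parallelism` in-flight calls."""
    sem = asyncio.Semaphore(parallelism)
    return await asyncio.gather(*(score_variation(v, sem) for v in variations))
```

`asyncio.gather` preserves input order, so results line up with the variations that produced them.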
Explorer — Runs every N iterations. Two analyses in parallel:
- Statistical correlation: finds which parameters most strongly predict score (pure Python, no API)
- LLM analysis: identifies patterns, unexplored regions, emergent surprises, and testable hypotheses
Updates `focused_params` to steer the Generator toward high-value regions.
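The statistical side needs nothing beyond the standard library. A minimal sketch of the correlation analysis (helper names are mine; the real implementation may weight or filter differently):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, pure Python — no API, no numpy."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_params(variations):
    """Sort parameter names by |correlation with score|, strongest predictor first."""
    scores = [v["score"] for v in variations]
    names = variations[0]["params"].keys()
    corr = {n: pearson([v["params"][n] for v in variations], scores) for n in names}
    return sorted(corr, key=lambda n: abs(corr[n]), reverse=True)
```

The top of that ranking is a natural candidate set for `focused_params`.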
Dispatch — Interprets natural language commands typed during a run. Updates aesthetic criteria, redirects strategy, focuses parameter groups. All commands logged to SQLite.
Memory — Cross-session learning. At startup: loads top variations from all prior sessions, synthesizes a "prior knowledge" paragraph to seed the current exploration. At shutdown: writes session learnings to persist for future runs.
Everything writes to SQLite (`variations.db`). Nothing is lost between runs. The `library` command shows cross-session history, hall-of-fame rankings, and pinned variations across all sessions.
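In spirit, the persistence layer reduces to something like this sketch — the real `database.py` schema (and its async `aiosqlite` wrapper) will differ; table and column names here are guesses.

```python
import json
import sqlite3

# Illustrative schema only — the actual variations.db layout may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS variations (
    id      INTEGER PRIMARY KEY,
    session TEXT,
    score   REAL,
    params  TEXT,                 -- JSON blob of all 35 parameters
    pinned  INTEGER DEFAULT 0
)"""

def save_variation(conn, session, score, params):
    """Persist one scored variation; returns its new row id."""
    cur = conn.execute(
        "INSERT INTO variations (session, score, params) VALUES (?, ?, ?)",
        (session, score, json.dumps(params)),
    )
    return cur.lastrowid

def hall_of_fame(conn, limit=10):
    """Top-scored variations across every session."""
    rows = conn.execute(
        "SELECT id, session, score FROM variations ORDER BY score DESC LIMIT ?",
        (limit,),
    )
    return rows.fetchall()
```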
When --osc is enabled, sends parameter bundles over UDP to any OSC-capable synthesizer. Includes a SuperCollider SynthDef generator (python main.py supercollider) with full FM synthesis, physical modeling, envelope, filter, LFO, and effects — ready to receive parameters in real time as the agents explore.
```sh
git clone https://github.com/yourusername/instrument-lab
cd instrument-lab
pip install -r requirements.txt
export ANTHROPIC_API_KEY=sk-ant-...
```

Python 3.10+ required.
```sh
# Quick test — verify everything works (~$0.20)
python main.py run --iterations 5 --strategy random

# Overnight run — leave running, check in the morning
python main.py run --iterations 40 --strategy hybrid --session night_01

# With SuperCollider audio output
python main.py supercollider --output receiver.scd
python main.py run --osc --osc-port 57120 --iterations 40

# Custom aesthetic criteria
python main.py run --criteria-file my_criteria.txt --iterations 40

# Browse results
python main.py library
python main.py report night_01
python main.py export 42
```

While a run is active, type directly into the terminal:
| Command | Effect |
|---|---|
| `go darker` | Redirect toward darker timbres |
| `more chaos` | Increase instability and feedback |
| `converge` | Focus on the best-scoring parameter region |
| `explore physical model edges` | Push stiffness, coupling, damping |
| `focus on long attacks` | Rewrite attack/release criteria |
| `pin 42` | Protect variation #42 from culling |
| `report` | Print current session status |
| `q` | Stop gracefully, save learnings |
| Group | Parameters |
|---|---|
| oscillator | frequency, detune, mod_index, harmonicity, waveform, phase_offset |
| envelope | attack, decay, sustain, release, attack_curve, release_curve |
| filter | filter_freq, filter_q, filter_type, filter_env_amt, filter_key_track |
| modulation | lfo_rate, lfo_depth, lfo_target, lfo_shape, lfo_phase, mod_env_amt |
| effects | feedback, delay_time, delay_mix, reverb_size, reverb_damp, distortion |
| physical_model | stiffness, damping, tension, resonator_freq, resonator_decay, coupling |
The physical_model group models string/membrane physics — stiffness, damping, tension, and a body resonator — and its non-linear interactions with the effects group (especially feedback) are consistently the most generative region for emergent discovery.
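A parameter space like the table above might be declared as plain data. Only two of the six groups are shown here, and every (low, high) range is an illustrative guess — `config.py` defines the real space.

```python
import random

# Two of the six groups, with guessed ranges — config.py holds the real definition.
PARAMETER_SPACE = {
    "effects": {
        "feedback": (0.0, 0.98), "delay_time": (0.01, 2.0), "delay_mix": (0.0, 1.0),
        "reverb_size": (0.0, 1.0), "reverb_damp": (0.0, 1.0), "distortion": (0.0, 1.0),
    },
    "physical_model": {
        "stiffness": (0.0, 1.0), "damping": (0.0, 1.0), "tension": (0.0, 1.0),
        "resonator_freq": (20.0, 5000.0), "resonator_decay": (0.01, 10.0),
        "coupling": (0.0, 1.0),
    },
}

def sample_group(group, rng=random):
    """Uniform random point within one group's ranges."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in PARAMETER_SPACE[group].items()}
```

Keeping group membership in data like this is what lets strategies such as group crossover, or Dispatch commands that focus a single group, stay generic.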
The Critic agent scores variations against natural language criteria. Quality of criteria directly determines quality of discovery.
```
# example_criteria.txt

TIMBRAL CHARACTER:
- Sounds should feel like metal cooling at night — resonant, heavy, slow
- Prefer inharmonic overtone structures over pure harmonic series
- Organic instability is rewarded; static timbres are not

SPECTRAL:
- Richness in the 200–800 Hz range valued above all
- Avoid harsh artifacts above 6 kHz unless they serve texture

TEMPORAL:
- Slow attacks (2+ seconds) preferred
- Long resonant tails that blur into silence

PHYSICAL MODEL:
- High coupling + low damping = interesting resonance behaviors
- Stiffness in the 0.1–0.3 range tends toward organic character

EDGE CASES:
- Feedback above 0.88 combined with low damping: high priority to explore
- Interactions between physical_model and distortion: flag all surprises
```
Pass with --criteria-file example_criteria.txt. The Dispatch agent can amend criteria mid-run, and changes persist in the database.
When --osc is enabled, each scored variation sends:
| Address | Values |
|---|---|
| `/instrument_lab/variation` | id, score |
| `/instrument_lab/oscillator` | frequency, detune, mod_index, harmonicity, waveform, phase_offset |
| `/instrument_lab/envelope` | attack, decay, sustain, release, attack_curve, release_curve |
| `/instrument_lab/filter` | filter_freq, filter_q, filter_type, filter_env_amt, filter_key_track |
| `/instrument_lab/modulation` | lfo_rate, lfo_depth, lfo_target, lfo_shape, lfo_phase, mod_env_amt |
| `/instrument_lab/effects` | feedback, delay_time, delay_mix, reverb_size, reverb_damp, distortion |
| `/instrument_lab/physical_model` | stiffness, damping, tension, resonator_freq, resonator_decay, coupling |
| `/instrument_lab/param/{name}` | individual float per parameter |
| `/instrument_lab/variation/insight` | type, content (Explorer discoveries) |
Compatible with SuperCollider, Max/MSP, Pure Data, VCV Rack (with OSC plugin), and any DAW with OSC support.
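The address layout above can be sketched in plain Python; this shows only two of the group addresses, and the helper name is mine, not `osc_bridge.py`'s.

```python
# Two of the six group addresses shown — the real bridge covers all of them.
OSC_GROUPS = {
    "oscillator": ["frequency", "detune", "mod_index",
                   "harmonicity", "waveform", "phase_offset"],
    "envelope": ["attack", "decay", "sustain",
                 "release", "attack_curve", "release_curve"],
}

def osc_bundle(variation):
    """Return (address, values) pairs for one scored variation."""
    msgs = [("/instrument_lab/variation", [variation["id"], variation["score"]])]
    for group, names in OSC_GROUPS.items():
        msgs.append((f"/instrument_lab/{group}",
                     [variation["params"][n] for n in names]))
    for name, value in variation["params"].items():
        msgs.append((f"/instrument_lab/param/{name}", [value]))
    return msgs
```

With the python-osc library, each pair could then go out via `SimpleUDPClient.send_message(address, values)` to port 57120.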
Uses `claude-sonnet-4-20250514` ($3/M input tokens, $15/M output tokens).
| Run | Iterations | Estimated Cost |
|---|---|---|
| Quick test | 5 | ~$0.05–0.15 |
| Short session | 10 | ~$0.20–0.40 |
| Overnight | 40 | ~$1.00–2.50 |
| Extended | 100 | ~$3.00–6.00 |
Set a spending cap in the Anthropic Console before overnight runs.
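A back-of-envelope check of those ranges, assuming (hypothetically) around 10K input and 2K output tokens per iteration:

```python
IN_PRICE, OUT_PRICE = 3.00, 15.00   # USD per million tokens, per the pricing above

def estimate_cost(iterations, in_tokens_per_iter=10_000, out_tokens_per_iter=2_000):
    """Rough run cost in USD; the per-iteration token counts are guesses."""
    total_in = iterations * in_tokens_per_iter
    total_out = iterations * out_tokens_per_iter
    return (total_in * IN_PRICE + total_out * OUT_PRICE) / 1_000_000
```

Under those assumptions a 40-iteration run comes to $2.40, at the top of the overnight range — which is why the spending cap matters before long runs.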
```
instrument-lab/
├── main.py            CLI entry point — run, library, report, export, supercollider
├── config.py          SystemConfig dataclass + 35-parameter space definition
├── database.py        SQLite persistence — variations, insights, sessions, dispatch log
├── genetics.py        Mutation, crossover, tournament selection, diversity metrics
├── agents.py          Generator, Critic, Explorer — async agents with full API integration
├── dispatch.py        Dispatch + Memory agents — coordination and cross-session learning
├── orchestrator.py    Central async event loop — coordinates all five agents
├── osc_bridge.py      OSC output + SuperCollider SynthDef template generator
├── ui.py              Rich live terminal dashboard
└── requirements.txt
```
```
anthropic>=0.86.0
aiosqlite>=0.19.0
rich>=13.0.0
python-osc>=1.8.0
```
MIT