Labyrinth is a terminal-first autonomous AI orchestrator with:
- Multi-provider routing (API + local Ollama).
- Tool execution through structured `[CMD: ...]` tags.
- Project-scoped memory + notes.
- Continuous background training from real usage.
- Self-improvement hooks (`/self-improve`, `SELF_WRITE`, `SELF_PATCH`).
This repository is intentionally modular:
- `labyrinth/core.py` — runtime loader.
- `labyrinth/core_components/core_*.py` — named runtime components (bootstrap, registry, agent selection, Ollama runtime, tool executor, training runtime, UI runtime).
- `labyrinth/entrypoint.py` — package entry shim.
- `ai-tui.py` — compatibility launcher.
- `DOCS.md` — full operational reference.
- `CONTRIBUTING.md` — contributor rules (including file-size limits).
Runtime behavior is loaded from ordered components so multiple bots/contributors can edit in parallel without monolith merge conflicts.
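A minimal sketch of what ordered component loading can look like. The function and hook names here are illustrative assumptions, not Labyrinth's actual internals; only the `labyrinth.core_components.core_*` layout comes from the layout above.

```python
import importlib
import pkgutil

def ordered_component_names(package):
    """Return a package's core_* module names in deterministic sorted order."""
    return sorted(
        info.name
        for info in pkgutil.iter_modules(package.__path__)
        if info.name.startswith("core_")
    )

def load_components(package_name="labyrinth.core_components"):
    """Import components in order so later ones can layer on earlier ones."""
    package = importlib.import_module(package_name)
    return [
        importlib.import_module(f"{package_name}.{name}")
        for name in ordered_component_names(package)
    ]
```

Because the order is derived from sorted names rather than a hand-maintained list, contributors can add a new `core_*.py` file without touching a central registry.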
Run:

```bash
python ai-tui.py
```

Optional flags:

```bash
python ai-tui.py --project research --theme midnight --voice
python ai-tui.py --gui
python ai-tui.py --offline
```

Routing behavior:
- Labyrinth classifies each prompt (`fast`, `smart`, `code`, `long`, `vision`).
- If a forced model is set (`/use ...`), that wins.
- In `offline` mode it tries Ollama models.
- In `online` mode it tries API providers in priority chains.
- In `any` mode it can use either and fall back automatically.
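The routing rules above can be sketched as a small classifier plus a chain builder. The heuristics and backend labels below are toy stand-ins, not Labyrinth's real classifier:

```python
def classify(prompt: str) -> str:
    """Bucket a prompt into one of the routing classes (toy heuristics)."""
    if "```" in prompt or "def " in prompt:
        return "code"
    if len(prompt) > 2000:
        return "long"
    if any(w in prompt.lower() for w in ("image", "photo", "screenshot")):
        return "vision"
    return "fast" if len(prompt) < 200 else "smart"

def pick_backend(prompt, forced=None, mode="any"):
    """A forced model wins; otherwise mode decides which chains to try."""
    if forced:
        return [forced]
    bucket = classify(prompt)
    local = [f"ollama:{bucket}"]
    remote = [f"api:{bucket}"]
    if mode == "offline":
        return local
    if mode == "online":
        return remote
    return remote + local  # "any": try API first, fall back to local
```

Returning an ordered list of candidates (rather than a single choice) is what makes the automatic fallback in `any` mode a simple loop over the chain.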
Provider registry now includes many OpenAI-compatible backends (DeepSeek, Fireworks, Cerebras, xAI, Perplexity, SambaNova, NVIDIA NIM, Novita, plus existing providers).
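One reason so many backends fit in one registry is that they all speak the OpenAI chat-completions wire format, so a single request shape covers them. A hedged sketch, using only the standard library; base URLs and key handling are illustrative, not the registry's actual entries:

```python
import json
import urllib.request

def build_chat_request(base_url, model, messages, api_key=""):
    """Build an OpenAI-style /chat/completions request for any compatible backend."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

def chat(base_url, model, messages, api_key=""):
    """Send the request and return the first choice's message text."""
    req = build_chat_request(base_url, model, messages, api_key)
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Swapping providers then reduces to swapping `base_url`, `model`, and the API key.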
Labyrinth now:
- Reads `OLLAMA_HOST` (defaults to `http://127.0.0.1:11434`).
- Resolves model aliases against installed tag names.
- Sends `keep_alive` to reduce cold-start churn.
- Falls back between `/api/chat` and `/api/generate`.
Useful commands:
- `/offline-pack` — show local bundle map.
- `/ollama-sync [daily|code|vision/photos|all]` — pull suggested models.
- `/use local [model]` — force local model.
Labyrinth trains from real conversations and loops.
- Every AI response appended to history triggers training (if training mode is on).
- `/loop` iterations now also feed trainer updates.
- Training outputs are stored in:
  - `~/tui-ai/conf/trained_model.json`
  - `~/tui-ai/logs/training.jsonl`
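A small reader for the JSONL training log might look like this. Only the file locations come from the list above; the per-record fields are an assumption, so the reader stays schema-agnostic:

```python
import json
from pathlib import Path

def load_training_pairs(path="~/tui-ai/logs/training.jsonl"):
    """Parse one record per line, skipping blank or partially written lines."""
    log = Path(path).expanduser()
    if not log.exists():
        return []
    pairs = []
    for line in log.read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            pairs.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # tolerate a truncated final line from an in-flight write
    return pairs
```

JSONL works well here because the trainer can append one record per interaction without rewriting the file.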
- `/train off|fg|bg|both` — training execution mode.
- `/train-status` — summary (pairs, version, hints, last trained).
- `/train-hints` — latest self-mod hints.
- `/train-seed` — inject built-in curriculum examples.
- `/train-test` — run trainer pipeline self-test.
- `/train-agents <n> <prompt>` — run threaded multi-agent rounds and feed outputs into training pairs.
- `/self-improve` — apply top training hint as a self-modification prompt.
Besides normal providers, you can force Labyrinth pipeline modes:
- `/use labyrinth-text` (or `/use text`) — threaded multi-agent text synthesis.
- `/use labyrinth-photo-gen` (or `/use photo-gen`) — image generation mode with optional automatic description.
- `/use labyrinth-desc` (or `/use desc`) — image-description mode (attach an image with `/open` first).
- `/use off` — return to normal auto-routing.
You can run `/agents-test` to quickly smoke-test the threaded multi-agent path.
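A toy version of the threaded multi-agent round, to show the shape of the pipeline. The stub agent function and the longest-answer "synthesis" are placeholders; in Labyrinth each agent would call a model backend and synthesis would be model-driven:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agents(prompt, agent_fn, n=3):
    """Run n agents concurrently and collect their answers in submit order."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(agent_fn, prompt, i) for i in range(n)]
        return [f.result() for f in futures]

def synthesize(answers):
    """Naive synthesis: keep the longest answer (stand-in for a judge model)."""
    return max(answers, key=len)
```

Threads fit here because each agent spends its time waiting on I/O-bound model calls, so the GIL is not a bottleneck.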
- GUI now supports selecting from all registered TUI themes via a theme dropdown.
- GUI status separates routing vs generation stages.
- Image display attempts Kitty first, then platform openers (Linux/macOS/Windows) where available.
Extra operational commands:
- `/docker-reinstall` — recreate docker env and reinstall baseline tools.
- `/photo-desc [path]` — describe an attached image or image file path.
- `/graph y=2x+1` — open a Tk graph window for a simple line equation.
Labyrinth can execute many tool tags embedded in model output, including:
- Files: `READ`, `WRITE`, `APPEND`, `DELETE`, `FIND`, `SEARCH`, `TREE`, `DU`, `ZIP`, `UNZIP`.
- Shell/network: `BASH`, `SH`, `FETCH`, `CURL`, `WGET`, `POST`, `HTTP_HEAD`, `PING`.
- Data/API helpers: `WIKI`, `FX`, `GITHUB_REPO`, `GITLAB_PROJECT`.
- Math/visual: `MATH`, `GRAPH`.
- Code quality: `PYTEST`, `RUFF`, `TYPECHECK`, `FORMAT`, `PROFILE`.
- Docker: `DOCKER_START`, `DOCKER_EXEC`, `DOCKER_SH`, `DOCKER_RESET`, `DOCKER_NUKE`, etc.
- Self-mod: `SELF_READ`, `SELF_WRITE`, `SELF_PATCH`.
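A minimal extractor for bracketed tool tags in model output. This only shows the parsing idea; Labyrinth's real executor covers the much larger tag surface above, and the allow-list here is an illustrative subset:

```python
import re

# Matches tags of the form [NAME: argument] in model output.
TAG_RE = re.compile(r"\[(?P<name>[A-Z_]+):\s*(?P<arg>[^\]]*)\]")

def extract_tags(text, allowed=("READ", "WRITE", "BASH", "FETCH")):
    """Return (tag, argument) pairs for recognized tool tags, in order."""
    return [
        (m.group("name"), m.group("arg").strip())
        for m in TAG_RE.finditer(text)
        if m.group("name") in allowed
    ]
```

Filtering against an explicit allow-list before dispatch is the safety-relevant step: unknown or disallowed tags are simply ignored rather than executed.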
- Run tasks normally.
- Periodically run `/train-status` and `/train-hints`.
- Run `/train-seed` once per new project domain.
- Validate trainer health with `/train-test`.
- Use `/self-improve` carefully and restart when prompted.
- Benchmark with `/bench` / `/rebenchmark`.
This creates a practical feedback loop where each AI interaction contributes to future behavior.
See `DOCS.md` for a complete command/tool/runtime reference (venv, docker model, routing, AI tag surface, and training lifecycle).