
Labyrinth

Labyrinth is a terminal-first autonomous AI orchestrator with:

  • Multi-provider routing (API + local Ollama).
  • Tool execution through structured [CMD: ...] tags.
  • Project-scoped memory + notes.
  • Continuous background training from real usage.
  • Self-improvement hooks (/self-improve, SELF_WRITE, SELF_PATCH).

Architecture

This repository is intentionally modular:

  • labyrinth/core.py — runtime loader.
  • labyrinth/core_components/core_*.py — named runtime components (bootstrap, registry, agent selection, Ollama runtime, tool executor, training runtime, UI runtime).
  • labyrinth/entrypoint.py — package entry shim.
  • ai-tui.py — compatibility launcher.
  • DOCS.md — full operational reference.
  • CONTRIBUTING.md — contributor rules (including file-size limits).

Runtime behavior is loaded from ordered components, so multiple bots and contributors can edit in parallel without the merge conflicts of a monolithic file.
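The ordered-component loading described above can be sketched as follows. This is a minimal illustration, not the actual loader: the component names are taken from the file naming pattern in this README, and the skip-on-missing behavior is an assumption.

```python
import importlib

# Hypothetical load order; real component names follow the
# labyrinth/core_components/core_*.py pattern described above.
COMPONENT_ORDER = [
    "core_bootstrap",
    "core_registry",
    "core_agents",
    "core_ollama",
    "core_tools",
    "core_training",
    "core_ui",
]

def load_components(package="labyrinth.core_components"):
    """Import components in a fixed order so later ones can rely on
    earlier ones (e.g. the registry before agent selection)."""
    loaded = []
    for name in COMPONENT_ORDER:
        try:
            loaded.append(importlib.import_module(f"{package}.{name}"))
        except ImportError:
            continue  # optional component not installed; skip it
    return loaded
```

Because each component is a separate module imported by name, two contributors can each own one `core_*.py` file and never touch the same lines.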

Quick start

python ai-tui.py

Optional flags:

python ai-tui.py --project research --theme midnight --voice
python ai-tui.py --gui
python ai-tui.py --offline

How routing works

  1. Labyrinth classifies each prompt (fast, smart, code, long, vision).
  2. If a forced model is set (/use ...), that wins.
  3. In offline mode it tries Ollama models.
  4. In online mode it tries API providers in priority chains.
  5. In either mode it can use either backend and fall back automatically.
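The steps above can be sketched as a small routing function. The classifier heuristics and the provider chains here are hypothetical placeholders; only the precedence (forced model first, then classified chain, filtered by offline mode) mirrors the README.

```python
def classify(prompt: str) -> str:
    """Toy classifier; the real one also detects smart/vision prompts."""
    if any(tok in prompt for tok in ("def ", "class ", "```")):
        return "code"
    if len(prompt) > 2000:
        return "long"
    return "fast"

# Hypothetical priority chains; "ollama" stands in for any local model.
CHAINS = {
    "fast": ["groq", "openai", "ollama"],
    "code": ["deepseek", "openai", "ollama"],
    "long": ["openai", "ollama"],
}

def route(prompt, forced=None, offline=False):
    if forced:  # /use <model> always wins
        return [forced]
    chain = CHAINS[classify(prompt)]
    if offline:  # offline mode: keep only local backends
        return [p for p in chain if p == "ollama"]
    return chain  # online: try providers in priority order, fall back down the chain
```

Each entry in the returned list is tried in order until one succeeds, which is what makes the automatic fallback in step 5 possible.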

The provider registry now includes many OpenAI-compatible backends (DeepSeek, Fireworks, Cerebras, xAI, Perplexity, SambaNova, NVIDIA NIM, Novita, plus existing providers).

Ollama reliability improvements

Labyrinth now:

  • Reads OLLAMA_HOST (defaults to http://127.0.0.1:11434).
  • Resolves model aliases against installed tag names.
  • Sends keep_alive to reduce cold-start churn.
  • Falls back between /api/chat and /api/generate.

Useful commands:

  • /offline-pack — show local bundle map.
  • /ollama-sync [daily|code|vision/photos|all] — pull suggested models.
  • /use local [model] — force local model.

Training system

Labyrinth trains from real conversations and loops.

  • Every AI response appended to history triggers a training update (when training mode is on).
  • /loop iterations now also feed trainer updates.
  • Training outputs are stored in:
    • ~/tui-ai/conf/trained_model.json
    • ~/tui-ai/logs/training.jsonl
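The JSONL log above lends itself to simple append-only writes. A minimal sketch, assuming one prompt/response pair per line (the real record schema is not documented here):

```python
import json
import pathlib
import time

LOG = pathlib.Path("~/tui-ai/logs/training.jsonl").expanduser()

def log_pair(prompt, response, path=LOG):
    """Append one training pair as a JSON line; the trainer can later
    replay the log without loading the whole file into memory."""
    path.parent.mkdir(parents=True, exist_ok=True)
    record = {"ts": time.time(), "prompt": prompt, "response": response}
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Append-only JSONL is a common choice here because concurrent writers (foreground and background training) can each add lines without rewriting the file.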

Training commands

  • /train off|fg|bg|both — training execution mode.
  • /train-status — summary (pairs, version, hints, last trained).
  • /train-hints — latest self-mod hints.
  • /train-seed — inject built-in curriculum examples.
  • /train-test — run trainer pipeline self-test.
  • /train-agents <n> <prompt> — run threaded multi-agent rounds and feed outputs into training pairs.
  • /self-improve — apply top training hint as a self-modification prompt.

Labyrinth force-modes (/use)

Besides normal providers, you can force Labyrinth pipeline modes:

  • /use labyrinth-text (or /use text) — threaded multi-agent text synthesis.
  • /use labyrinth-photo-gen (or /use photo-gen) — image generation mode with optional automatic description.
  • /use labyrinth-desc (or /use desc) — image-description mode (attach image with /open first).
  • /use off — return to normal auto-routing.

You can run /agents-test to quickly smoke-test the threaded multi-agent path.

GUI improvements

  • The GUI now supports selecting any registered TUI theme from a dropdown.
  • The GUI status display separates the routing stage from the generation stage.
  • Image display tries Kitty first, then falls back to platform openers (Linux/macOS/Windows) where available.

Extra operational commands:

  • /docker-reinstall — recreate docker env and reinstall baseline tools.
  • /photo-desc [path] — describe an attached image or image file path.
  • /graph y=2x+1 — open a Tk graph window for a simple linear equation.

Tool surface (AI command tags)

Labyrinth can execute many tool tags embedded in model output, including:

  • Files: READ, WRITE, APPEND, DELETE, FIND, SEARCH, TREE, DU, ZIP, UNZIP.
  • Shell/network: BASH, SH, FETCH, CURL, WGET, POST, HTTP_HEAD, PING.
  • Data/API helpers: WIKI, FX, GITHUB_REPO, GITLAB_PROJECT.
  • Math/visual: MATH, GRAPH.
  • Code quality: PYTEST, RUFF, TYPECHECK, FORMAT, PROFILE.
  • Docker: DOCKER_START, DOCKER_EXEC, DOCKER_SH, DOCKER_RESET, DOCKER_NUKE, etc.
  • Self-mod: SELF_READ, SELF_WRITE, SELF_PATCH.
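Extracting these tags from model output can be sketched with a regular expression. This is an assumption about the grammar: the README shows only the `[CMD: ...]` shape, so the exact argument syntax the real executor accepts may differ.

```python
import re

# Matches the [CMD: NAME args] shape shown in this README; the real
# tag grammar may be richer (quoting, multi-line bodies, etc.).
TAG_RE = re.compile(r"\[CMD:\s*([A-Z_]+)(?:\s+(.*?))?\]", re.DOTALL)

def extract_commands(text):
    """Return (tool_name, argument_string) pairs in order of appearance."""
    return [(m.group(1), (m.group(2) or "").strip())
            for m in TAG_RE.finditer(text)]
```

Structured tags like this let the executor dispatch on the tool name (READ, BASH, DOCKER_EXEC, ...) while treating everything after it as an opaque argument string for that tool to parse.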

Recommended usage loop for improving the agent

  1. Run tasks normally.
  2. Periodically run /train-status and /train-hints.
  3. Run /train-seed once per new project domain.
  4. Validate trainer health with /train-test.
  5. Use /self-improve carefully and restart when prompted.
  6. Benchmark with /bench and /rebenchmark.

This creates a practical feedback loop where each AI interaction contributes to future behavior.

Full Reference

See DOCS.md for a complete command/tool/runtime reference (venv, docker model, routing, AI tag surface, and training lifecycle).

About

A terminal-first AI orchestrator, built in Python.
