
🏛️ Κοῦραι Χρύσεαι

Kourai Khryseai · The Golden Maidens

Six specialized AI agents that collaborate with you on development—you guide each step, they show their work and iterate in real time.



Collaborate, don't automate. See your agents think. Guide them in real-time.

$ make cli

❯ add user authentication

🔥 Hephaestus: I'm thinking through the approach...
📐 Metis: Specification drafted. Should we use JWT or sessions?

❯ JWT with refresh tokens

✅ Metis: Got it. Full spec: 8 steps, edge cases noted
⚙️ Techne: Writing files... (streaming changes live)
🧪 Dokimasia: Running tests... 12/12 passing
✨ Kallos: Code review complete, no style issues
📜 Mneme: Ready for commits

What is this?

Kourai Khryseai is an interactive multi-agent development system where six specialized AI agents work with you, not for you. Instead of running autonomously in the background, agents stream their work in real-time, show their reasoning, and ask for guidance when decisions matter.

You describe your goal. The agents break it down, show you options, and act on your feedback. You see everything—from planning through testing through review—and can redirect at any step.

Access it two ways:

  • CLI — Real-time agent output in your terminal
  • GUI — Interactive dialogue with personality-matched voices and visual agent profiles

The Agents

| Agent | Role | Strength |
|---|---|---|
| 🔥 Hephaestus | Orchestrator | Routes requests to the right specialists, manages feedback loops |
| 📐 Metis | Planner | Breaks goals into detailed specs, identifies edge cases |
| ⚙️ Techne | Coder | Reads existing patterns, writes clean changes |
| 🧪 Dokimasia | Tester | Writes comprehensive test suites, validates coverage |
| ✨ Kallos | Stylist | Enforces code quality, cleans comments and docstrings |
| 📜 Mneme | Scribe | Generates organized commit messages from diffs |

Each is an independent HTTP server communicating via the open A2A protocol. They can be deployed separately, tested independently, or swapped for custom implementations.
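Since each agent speaks A2A over plain HTTP, any client can construct a request without the SDK. A minimal sketch of building an A2A-style JSON-RPC envelope for a single user message—field names approximate the spec's `message/send` shape; check the a2a-sdk types for the authoritative schema:

```python
import uuid

def build_a2a_request(text: str) -> dict:
    """Build a simplified A2A-style JSON-RPC envelope for one user message.

    Field names approximate the A2A spec's message/send shape; the real
    a2a-sdk types are authoritative.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }

# POSTing this JSON body to an agent's endpoint (e.g. Metis on :10001)
# would be the transport step; omitted here to keep the sketch offline.
req = build_a2a_request("implement CSV export with tests")
```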


How It Works

1. You Start a Conversation

CLI (interactive REPL):

$ make cli

❯ implement CSV export with tests

Or GUI (visual interface):

$ make gui

2. Hephaestus Orchestrates

The orchestrator routes your request through a pipeline. Most requests flow: Metis → Techne → Dokimasia → Kallos → Mneme. Quick fixes skip planning. Pure styling requests skip coding. Hephaestus routes intelligently.
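The routing idea can be sketched as a keyword dispatcher—this is an illustrative stand-in, not the actual Hephaestus implementation, which uses an LLM to classify intent:

```python
# Illustrative routing sketch (not the real Hephaestus logic): map the
# request's intent to an ordered agent pipeline.
FULL = ["metis", "techne", "dokimasia", "kallos", "mneme"]

def choose_pipeline(request: str) -> list[str]:
    text = request.lower().strip()
    if text.startswith("@"):                 # 1-on-1: "@techne, explain ..."
        return [text[1:].split(",")[0].strip()]
    if text.startswith("plan"):              # planning only
        return ["metis"]
    if text.startswith("add tests"):         # jump straight to testing
        return ["dokimasia", "kallos", "mneme"]
    if "clean" in text:                      # pure styling skips coding
        return ["kallos", "mneme"]
    if "fix" in text:                        # quick fixes skip planning
        return FULL[1:]
    return FULL                              # full feature pipeline
```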

3. Agents Stream Their Work

Each agent shows you:

  • What they're thinking — Real-time reasoning and planning
  • What they're building — Code diffs, test runs, lint results
  • What they need — Questions when decisions matter
📐 Metis: Analyzing requirements...
   → Should CSV use streaming for large files? (Option A: yes, Option B: no)

❯ Option A, streaming

📐 Metis: Confirmed. Here's the spec:
   - Parser with iterator interface
   - Chunked I/O for >100MB files
   - Tests cover edge cases...

✅ Spec complete. Routing to Techne

4. Human-on-the-Loop

When agents face meaningful choices, they ask. You provide direction. This prevents wasted tokens on speculation and keeps you in control of trade-offs:

  • Architecture decisions (sync vs async, database strategy)
  • Scope boundaries (what counts as "done")
  • Validation rules (what passes, what fails)

5. Feedback Loops

If Kallos finds issues Techne can fix, they iterate up to 3 rounds automatically. Otherwise, they report what remains. Nothing silent.
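The bounded review loop can be sketched with stub agents standing in for the real LLM calls (`review` and `fix` are hypothetical helpers, not project APIs):

```python
# Sketch of the Kallos <-> Techne feedback loop: iterate up to 3 rounds,
# then report whatever remains. Stubs stand in for real agent calls.
MAX_ROUNDS = 3

def review(code: str) -> list[str]:
    """Stub Kallos: flag an issue until the marker is removed."""
    return ["leftover TODO"] if "TODO" in code else []

def fix(code: str, issues: list[str]) -> str:
    """Stub Techne: resolve the reported issues."""
    return code.replace("TODO", "")

def style_loop(code: str) -> tuple[str, list[str]]:
    for _ in range(MAX_ROUNDS):
        issues = review(code)
        if not issues:
            return code, []              # clean exit, nothing to report
        code = fix(code, issues)
    return code, review(code)            # report what remains, nothing silent
```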


Quick Start

1. Install

git clone https://github.com/ajbarea/kourai-khryseai.git
cd kourai-khryseai

make setup        # Install dependencies (equivalent: uv sync --all-packages)
cp .env.example .env
# Edit .env — add your ANTHROPIC_API_KEY

Using Ollama instead (free, local)?

# Install Ollama, pull models, then:
KOURAI_PROVIDER=local make cli

2. Start the Agents

make up           # Builds Docker images, starts all 6 agents + Jaeger + Prometheus
make status       # Check health

3. Your First Conversation

CLI:

make cli

❯ implement CSV export with tests

GUI (richer experience with voices and visuals):

make gui

See Getting Started for detailed setup and troubleshooting.


Architecture

                    YOU (CLI or GUI)
                           │
                      A2A · SSE
                           ▼
                  🔥 HEPHAESTUS (Orchestrator)
                         :10000
                    ┌─────┼─────┐
         A2A        │     │     │      A2A
      ┌──────────┬──┤     │     ├──┬──────────┐
      │          │  │     │     │  │          │
   📐 METIS  ⚙️ TECHNE  🧪 DOKIMASIA  ✨ KALLOS  📜 MNEME
   :10001     :10002     :10003      :10004    :10005
      │          │  │     │     │  │          │
      └──────────┴──┤     │     ├──┴──────────┘
                  │     │     │
                 MCP Servers
              (filesystem, git, shell)
                     │
              OpenTelemetry → Jaeger ◄──► Prometheus
                   :16686 (UI)         :9090 (UI)

Key points:

  • Each agent is an independent HTTP server with its own model assignment
  • A2A protocol enables peer-to-peer communication without a central broker
  • Real-time streaming via SSE allows agents to show work as it happens
  • MCP servers handle filesystem, git, and shell access
  • Jaeger + Prometheus trace every request and monitor performance

Multi-Mode Access

CLI (Terminal)

Fast, scriptable, works over SSH. See real-time agent output with emoji progress.

❯ add authentication to /api/users

🔥 Hephaestus: Routing to [techne, dokimasia, kallos, mneme]...
⚙️  Techne: Analyzing existing auth patterns...
   ↳ Found JWT middleware in src/middleware/auth.py
   ↳ Writing changes to 2 files...
   ↳ [100%] Complete
🧪 Dokimasia: Running tests...
   ↳ [5/5 passing]
...

GUI (Desktop)

Visual interface with agent portraits, dialogue bubbles, and neural text-to-speech. Each agent has a personality-matched voice. Dialogue history is saved per session.

  • 🎨 Full-color agent portraits (JRPG aesthetic)
  • 💬 Real-time dialogue with streaming responses
  • 🔊 Low-latency neural voice synthesis (Kokoro-82M local SLM + Edge-TTS fallback)
  • ⚡ 170ms "human-like" latency via real-time audio chunk streaming
  • ⚙️ Settings for accessibility and voice customization
  • 📜 Scrollable chat history

Configuration

LLM Models

Choose model tiers per environment. Default uses Haiku (fast, cheap). Upgrade to Sonnet or Opus as needed:

# .env
KOURAI_PROVIDER=claude        # or 'local' for Ollama
KOURAI_MODEL_TIER=standard    # cheap | standard | smart

| Tier | Hephaestus | Metis | Techne | Dokimasia | Kallos | Mneme |
|---|---|---|---|---|---|---|
| cheap | Haiku | Haiku | Haiku | Haiku | Haiku | Haiku |
| standard | Sonnet | Opus | Sonnet | Sonnet | Haiku | Haiku |
| smart | Opus | Opus | Opus | Sonnet | Sonnet | Sonnet |
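A hypothetical sketch of how a tier table like this can resolve to per-agent models from the environment (`MODELS` and `model_for` are illustrative names, not the project's actual config module):

```python
# Illustrative tier -> per-agent model resolution from environment variables.
import os

AGENTS = ["hephaestus", "metis", "techne", "dokimasia", "kallos", "mneme"]

MODELS = {
    "cheap": dict.fromkeys(AGENTS, "haiku"),
    "standard": {"hephaestus": "sonnet", "metis": "opus", "techne": "sonnet",
                 "dokimasia": "sonnet", "kallos": "haiku", "mneme": "haiku"},
    "smart": {"hephaestus": "opus", "metis": "opus", "techne": "opus",
              "dokimasia": "sonnet", "kallos": "sonnet", "mneme": "sonnet"},
}

def model_for(agent: str) -> str:
    """Look up the model for an agent under the configured tier."""
    tier = os.environ.get("KOURAI_MODEL_TIER", "standard")
    return MODELS[tier][agent]
```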

TTS Backends

Kourai Khryseai prioritizes local execution for privacy and speed.

  • Kokoro-82M (Default): High-quality, Apache 2.0 local TTS. Runs on CPU with ~350MB RAM.
  • Edge-TTS (Fallback): Microsoft Azure Neural voices (requires internet).
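The local-first fallback chain can be sketched as a simple try-in-order loop (the backend functions here are stubs with hypothetical names, standing in for the real Kokoro and Edge-TTS calls):

```python
# Sketch of the local-first TTS fallback: try Kokoro locally, fall back
# to Edge-TTS. Stub backends simulate a local failure.
def kokoro_tts(text: str) -> bytes:
    raise RuntimeError("model not downloaded")   # simulate local failure

def edge_tts(text: str) -> bytes:
    return b"audio:" + text.encode()             # stub cloud synthesis

def synthesize(text: str) -> bytes:
    for backend in (kokoro_tts, edge_tts):       # local first, cloud fallback
        try:
            return backend(text)
        except Exception:
            continue
    raise RuntimeError("all TTS backends failed")
```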

See Configuration for full environment variable reference.


Development

make test       # Run unit and integration tests (80%+ coverage)
make lint       # Run ruff, ty, formatters
make docs       # Serve docs locally at http://localhost:8000
make help       # Show all available commands

Stack:

  • Framework — a2a-sdk + Starlette
  • Language — Python 3.12+ with modern type hints
  • LLM — LiteLLM (pluggable: Claude, Gemini, Ollama, etc.)
  • TTS — Kokoro-82M (local) / Edge-TTS (cloud) with real-time streaming
  • MCP — MCP servers (filesystem, git, shell, context7)
  • Browser Context — Accessibility-Tree snapshots for token-efficient E2E reasoning
  • Linting — Ruff + ty (Python)
  • Packaging — uv workspaces
  • Observability — OpenTelemetry → Jaeger + Prometheus
  • Containers — Docker + Docker Compose
  • Docs — Zensical

Pipelines

Hephaestus auto-selects the right pipeline based on your request:

| You say | Pipeline |
|---|---|
| "implement feature X" | Metis → Techne → Dokimasia → Kallos → Mneme |
| "fix bug in X" | Techne → Dokimasia → Kallos → Mneme |
| "add tests for X" | Dokimasia → Kallos → Mneme |
| "clean up X" | Kallos → Mneme |
| "commit prep" | Mneme |
| "plan feature X" | Metis |
| "@techne, explain this function" | Techne (1-on-1) |

Observability

Every request creates a distributed trace across all agents. Open Jaeger at localhost:16686 or Prometheus at localhost:9090 to see:

  • Full request flow as a single trace
  • Per-agent LLM call latency
  • Error locations and context
  • RED metrics (Rate, Error, Duration) via Jaeger SPM
  • Real-time performance visualization
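RED metrics reduce to a small computation over observed requests—the same signals Jaeger SPM derives from spans. An illustrative, self-contained version (not the project's instrumentation):

```python
# Illustrative RED metrics (Rate, Errors, Duration) computed from a batch
# of request records observed in a time window.
def red_metrics(requests: list[tuple[float, bool]], window_seconds: float) -> dict:
    """requests: list of (duration_ms, ok) pairs seen in the window."""
    n = len(requests)
    rate = n / window_seconds                                  # req/s
    error_ratio = sum(1 for _, ok in requests if not ok) / n if n else 0.0
    durations = sorted(d for d, _ in requests)
    p95 = durations[int(0.95 * (len(durations) - 1))] if durations else 0.0
    return {"rate": rate, "error_ratio": error_ratio, "p95_ms": p95}
```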

Documentation

Full docs are available at Kourai Khryseai, built with Zensical.


Coverage

Test coverage is tracked on Codecov (interactive sunburst graph).


License

MIT


Built by AJ Barea · Forged in the workshop of Hephaestus

