feat(chatbot): add conversational chatbot with Honcho memory#499

Open
MichaelFehdrau0205 wants to merge 4 commits into plastic-labs:main from MichaelFehdrau0205:main

Conversation


@MichaelFehdrau0205 MichaelFehdrau0205 commented Apr 6, 2026

What I Built

A conversational chatbot that uses Honcho for persistent memory.

Features

  • Chat with an AI assistant using Ollama (llama3.2) running locally
  • Honcho stores conversation history between sessions
  • Assistant remembers what users said in previous conversations

How to Run

  1. Make sure the Honcho server is running: uv run fastapi dev src/main.py
  2. Make sure Ollama is running with the llama3.2 model
  3. Run the chatbot: uv run python chatbot.py
  4. Enter your name and start chatting!
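The defaults can be overridden via environment variables. The review below confirms CHAT_HISTORY_MAX_MESSAGES, HONCHO_CONTEXT_MAX_CHARS, and CHAT_TURN_MAX_CHARS with the defaults shown here; the OLLAMA_* variable names are illustrative assumptions, not confirmed by this PR view:

```shell
# Optional overrides for chatbot.py. The three CHAT_*/HONCHO_* names and their
# defaults come from the review below; the OLLAMA_* names are assumed.
export OLLAMA_URL="http://localhost:11434"    # assumed name; Ollama's default port
export OLLAMA_MODEL="llama3.2"                # assumed name
export CHAT_HISTORY_MAX_MESSAGES=20           # recent turns sent to the model
export HONCHO_CONTEXT_MAX_CHARS=6000          # cap on Honcho-derived context
export CHAT_TURN_MAX_CHARS=3000               # cap on a single turn's length
```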

Summary by CodeRabbit

  • New Features

    • Added a local chatbot with memory persistence capabilities using Honcho and Ollama integration.
  • Documentation

    • Added comprehensive local development setup guide with step-by-step chatbot example instructions.


coderabbitai bot commented Apr 6, 2026

Walkthrough

Adds documentation for a local chatbot example that uses Honcho memory management and an Ollama LLM, along with a new chatbot.py module. The module implements a chatbot CLI and a programmatic interface, integrating Honcho session persistence with local Ollama inference and error handling.

Changes

Documentation: README.md
Added a new section documenting a local chatbot example, with step-by-step instructions for running Postgres, migrations, the FastAPI server, the deriver worker, Ollama, and the chatbot CLI. Includes environment variable configuration details.

Chatbot Implementation: chatbot.py
New module implementing a chatbot that integrates Honcho memory and an Ollama LLM. Provides environment-based configuration (Ollama URL/model, prompt sizing, inference options), utilities for context formatting and truncation, Honcho session management, Ollama API integration with comprehensive error handling, and an interactive CLI. Exports chat() and normalize_user_id() functions.
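The exported normalize_user_id() is not shown in this view of the diff. As a hedged illustration, such a helper might map free-form names to stable peer IDs like this sketch (the specific normalization rules are assumptions, not the PR's actual code):

```python
import re


def normalize_user_id(name: str) -> str:
    """Turn a free-form name into a stable peer ID (illustrative sketch).

    The real chatbot.py may normalize differently; this just shows the idea:
    lowercase, trim, and collapse non-alphanumeric runs into single hyphens.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", name.strip().lower())
    return slug.strip("-") or "anonymous"
```

For example, `normalize_user_id("  Michael F. ")` yields `"michael-f"`, so the same person maps to the same Honcho peer across sessions regardless of stray whitespace or punctuation.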

Sequence Diagram

sequenceDiagram
    participant User as User/CLI
    participant Chatbot as chatbot.py
    participant Honcho as Honcho API
    participant Ollama as Ollama API

    User->>Chatbot: chat(user_id, message)
    Chatbot->>Honcho: Get/create session for user
    Honcho-->>Chatbot: Session
    Chatbot->>Honcho: Load context (messages, peers)
    Honcho-->>Chatbot: Context messages
    Chatbot->>Chatbot: Truncate context + format system prompt
    Chatbot->>Ollama: POST /api/chat with system prompt + recent turns
    Ollama-->>Chatbot: Assistant response
    Chatbot->>Honcho: Persist user + assistant messages
    Honcho-->>Chatbot: Confirmation
    Chatbot-->>User: Return assistant reply
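The "Truncate context + format system prompt" step in the diagram can be sketched as a simple character-budget truncation. This is an illustrative assumption: the real chatbot.py may instead truncate at message boundaries when building the prompt.

```python
def truncate_context(text: str, max_chars: int) -> str:
    """Keep at most max_chars, preferring the most recent (trailing) text.

    Illustrative only: a real implementation might drop whole messages
    rather than cutting mid-sentence. The leading ellipsis marks the cut.
    """
    if len(text) <= max_chars:
        return text
    return "…" + text[-(max_chars - 1):]
```

Truncating from the front keeps the newest turns, which matter most for the next reply, while the character cap bounds the prompt sent to Ollama.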

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • VVoruganti
  • Rajat-Ahuja1997

Poem

🐰 A chatbot born with memory true,
Honcho keeps the conversations new,
Ollama whispers wisdom local-bound,
While the CLI makes magic sound! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 42.86%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): The pull request title 'feat(chatbot): add conversational chatbot with Honcho memory' directly and clearly summarizes the main change: adding a conversational chatbot with Honcho memory integration, which matches the core objectives and file additions.



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
chatbot.py (2)

37-38: Missing error handling for invalid environment variable values.

Direct int() calls will raise ValueError if environment variables contain non-numeric strings. Consider using _env_positive_int for consistency or add try/except blocks.

♻️ Proposed fix using helper function
-CHAT_HISTORY_MAX_MESSAGES = int(os.getenv("CHAT_HISTORY_MAX_MESSAGES", "20"))
-HONCHO_CONTEXT_MAX_CHARS = int(os.getenv("HONCHO_CONTEXT_MAX_CHARS", "6000"))
+CHAT_HISTORY_MAX_MESSAGES = _env_positive_int("CHAT_HISTORY_MAX_MESSAGES", 20)
+HONCHO_CONTEXT_MAX_CHARS = _env_positive_int("HONCHO_CONTEXT_MAX_CHARS", 6000)

And move _env_positive_int definition above these lines, or use a simpler inline approach:

-CHAT_TURN_MAX_CHARS = int(os.getenv("CHAT_TURN_MAX_CHARS", "3000"))
+CHAT_TURN_MAX_CHARS = _env_positive_int("CHAT_TURN_MAX_CHARS", 3000)

Also applies to: 51-51

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@chatbot.py` around lines 37 - 38, The environment integer parsing for
CHAT_HISTORY_MAX_MESSAGES and HONCHO_CONTEXT_MAX_CHARS currently uses direct
int() calls which can raise ValueError for non-numeric env values; update these
to use the helper function _env_positive_int (or implement a small wrapper that
validates/parses and falls back to the default) and ensure _env_positive_int is
defined above where CHAT_HISTORY_MAX_MESSAGES and HONCHO_CONTEXT_MAX_CHARS are
set; adjust both occurrences (lines setting CHAT_HISTORY_MAX_MESSAGES and
HONCHO_CONTEXT_MAX_CHARS and the similar occurrence noted at 51) to call
_env_positive_int("CHAT_HISTORY_MAX_MESSAGES", 20) and
_env_positive_int("HONCHO_CONTEXT_MAX_CHARS", 6000) (or equivalent validated
parsing) so invalid env values are handled gracefully.

164-170: Consider adding a return type hint.

This finding is a false positive: as the proposed diff itself shows, _format_peer_context already declares a -> str return annotation, so the function is correct as-is and no change is needed.

♻️ Proposed fix (a no-op; shown for reference)
 def _format_peer_context(ctx: PeerContextResponse) -> str:
     parts: list[str] = []
     if ctx.representation:
         parts.append(ctx.representation)
     if ctx.peer_card:
         parts.append("\n".join(ctx.peer_card))
     return "\n\n".join(parts) if parts else "(No memory yet.)"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@chatbot.py` around lines 164 - 170, Add an explicit return type hint to the
_format_peer_context function so it matches the codebase style: ensure the
signature reads with a return annotation (-> str) for the function that takes
PeerContextResponse and returns the formatted string; update the function
declaration for _format_peer_context to include this return type if it's
missing.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 53e0cbaa-ce3e-43fa-8956-e378a7d078e7

📥 Commits

Reviewing files that changed from the base of the PR and between e487358 and 9a5fd00.

📒 Files selected for processing (2)
  • README.md
  • chatbot.py

@VVoruganti VVoruganti (Collaborator) commented:

This is fun, but not relevant to being merged into honcho itself. Would publish this as a standalone repo.

Can you say more about the intention of where this should fit into the repo?
