feat(chatbot): add conversational chatbot with Honcho memory #499

MichaelFehdrau0205 wants to merge 4 commits into plastic-labs:main
Conversation
Walkthrough

Added documentation for a local chatbot example using Honcho memory management and an Ollama LLM, along with a new `chatbot.py` script.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant Chatbot as chatbot.py
    participant Honcho as Honcho API
    participant Ollama as Ollama API
    User->>Chatbot: chat(user_id, message)
    Chatbot->>Honcho: Get/create session for user
    Honcho-->>Chatbot: Session
    Chatbot->>Honcho: Load context (messages, peers)
    Honcho-->>Chatbot: Context messages
    Chatbot->>Chatbot: Truncate context + format system prompt
    Chatbot->>Ollama: POST /api/chat with system prompt + recent turns
    Ollama-->>Chatbot: Assistant response
    Chatbot->>Honcho: Persist user + assistant messages
    Honcho-->>Chatbot: Confirmation
    Chatbot-->>User: Return assistant reply
```
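The Ollama leg of this flow can be sketched with stdlib HTTP alone. The payload shape below follows Ollama's public `/api/chat` contract; the Honcho session/context steps are deliberately omitted, so this is an illustration of the request shape, not the PR's actual code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint


def build_chat_payload(system_prompt: str, turns: list[dict], model: str = "llama3") -> dict:
    """Assemble an /api/chat request body: system prompt first, then recent turns."""
    return {
        "model": model,
        "stream": False,  # request a single JSON response instead of a token stream
        "messages": [{"role": "system", "content": system_prompt}, *turns],
    }


def send_chat(payload: dict) -> str:
    """POST the payload to Ollama and extract the assistant's reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["message"]["content"]
```

In the diagrammed flow, the "Truncate context + format system prompt" step would produce `system_prompt`, and the persisted turns would feed `turns`.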
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
🧹 Nitpick comments (2)
chatbot.py (2)
37-38: Missing error handling for invalid environment variable values.

Direct `int()` calls will raise `ValueError` if environment variables contain non-numeric strings. Consider using `_env_positive_int` for consistency or add try/except blocks.

♻️ Proposed fix using helper function

```diff
-CHAT_HISTORY_MAX_MESSAGES = int(os.getenv("CHAT_HISTORY_MAX_MESSAGES", "20"))
-HONCHO_CONTEXT_MAX_CHARS = int(os.getenv("HONCHO_CONTEXT_MAX_CHARS", "6000"))
+CHAT_HISTORY_MAX_MESSAGES = _env_positive_int("CHAT_HISTORY_MAX_MESSAGES", 20)
+HONCHO_CONTEXT_MAX_CHARS = _env_positive_int("HONCHO_CONTEXT_MAX_CHARS", 6000)
```

And move the `_env_positive_int` definition above these lines, or use a simpler inline approach:

```diff
-CHAT_TURN_MAX_CHARS = int(os.getenv("CHAT_TURN_MAX_CHARS", "3000"))
+CHAT_TURN_MAX_CHARS = _env_positive_int("CHAT_TURN_MAX_CHARS", 3000)
```

Also applies to: 51-51
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@chatbot.py` around lines 37 - 38, The environment integer parsing for CHAT_HISTORY_MAX_MESSAGES and HONCHO_CONTEXT_MAX_CHARS currently uses direct int() calls which can raise ValueError for non-numeric env values; update these to use the helper function _env_positive_int (or implement a small wrapper that validates/parses and falls back to the default) and ensure _env_positive_int is defined above where CHAT_HISTORY_MAX_MESSAGES and HONCHO_CONTEXT_MAX_CHARS are set; adjust both occurrences (lines setting CHAT_HISTORY_MAX_MESSAGES and HONCHO_CONTEXT_MAX_CHARS and the similar occurrence noted at 51) to call _env_positive_int("CHAT_HISTORY_MAX_MESSAGES", 20) and _env_positive_int("HONCHO_CONTEXT_MAX_CHARS", 6000) (or equivalent validated parsing) so invalid env values are handled gracefully.
164-170: Consider adding type hint for return value.

The function has good logic but lacks a return type hint for consistency with the rest of the codebase.

♻️ Proposed fix

```diff
-def _format_peer_context(ctx: PeerContextResponse) -> str:
+def _format_peer_context(ctx: PeerContextResponse) -> str:
     parts: list[str] = []
     if ctx.representation:
         parts.append(ctx.representation)
     if ctx.peer_card:
         parts.append("\n".join(ctx.peer_card))
-    return "\n\n".join(parts) if parts else "(No memory yet.)"
+    return "\n\n".join(parts) if parts else "(No memory yet.)"
```

(Already has return type - this is correct as-is)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@chatbot.py` around lines 164 - 170, Add an explicit return type hint to the _format_peer_context function so it matches the codebase style: ensure the signature reads with a return annotation (-> str) for the function that takes PeerContextResponse and returns the formatted string; update the function declaration for _format_peer_context to include this return type if it's missing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@chatbot.py`:
- Around line 37-38: The environment integer parsing for
CHAT_HISTORY_MAX_MESSAGES and HONCHO_CONTEXT_MAX_CHARS currently uses direct
int() calls which can raise ValueError for non-numeric env values; update these
to use the helper function _env_positive_int (or implement a small wrapper that
validates/parses and falls back to the default) and ensure _env_positive_int is
defined above where CHAT_HISTORY_MAX_MESSAGES and HONCHO_CONTEXT_MAX_CHARS are
set; adjust both occurrences (lines setting CHAT_HISTORY_MAX_MESSAGES and
HONCHO_CONTEXT_MAX_CHARS and the similar occurrence noted at 51) to call
_env_positive_int("CHAT_HISTORY_MAX_MESSAGES", 20) and
_env_positive_int("HONCHO_CONTEXT_MAX_CHARS", 6000) (or equivalent validated
parsing) so invalid env values are handled gracefully.
- Around line 164-170: Add an explicit return type hint to the
_format_peer_context function so it matches the codebase style: ensure the
signature reads with a return annotation (-> str) for the function that takes
PeerContextResponse and returns the formatted string; update the function
declaration for _format_peer_context to include this return type if it's
missing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 53e0cbaa-ce3e-43fa-8956-e378a7d078e7
📒 Files selected for processing (2): README.md, chatbot.py
This is fun, but not relevant to being merged into honcho itself. Would publish this as a standalone repo. Can you say more about the intention of where this should fit into the repo?
What I Built
A conversational chatbot that uses Honcho for persistent memory.
Features
How to Run
Summary by CodeRabbit
New Features
Documentation