Parent tracking issue: #22702
Context
Currently, in the stable `useGeminiStream` flow, the UI itself parses slash commands (like `/skill`) and `@`-mentions, schedules the resulting tool calls, and manually injects synthetic turns into the LLM history.
To achieve a true "dumb terminal" TUI, the UI should simply send the user's raw text string to the `AgentProtocol`. The underlying agent implementation (e.g., `LegacyAgentSession` or its internal prompt router) should be responsible for intercepting command syntax, executing the necessary tools, managing its internal history, and emitting the appropriate `tool_request` / `tool_response` events back to the UI.
Note on Autocomplete: For now, the UI will continue to handle tab-completion and autocomplete suggestions for these commands as it does today. In the future, this capability should also be moved to the protocol level.
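As a rough sketch of the intended division of responsibility, agent-side interception of a backend command might look like the following. Everything here except the `activate_skill` tool name and the `/skill` syntax is a hypothetical stand-in, not the real routing code:

```typescript
// Hypothetical sketch of agent-side command interception. Only the
// `/skill` syntax and the `activate_skill` tool name come from this
// issue; the helper and its return shape are illustrative assumptions.
type Intercepted =
  | { kind: "tool"; toolName: string; args: Record<string, string> }
  | { kind: "prompt"; text: string };

function interceptCommand(raw: string): Intercepted {
  // Only `/skill <name>` is handled here; a real router would consult
  // a registry of backend commands.
  const match = raw.match(/^\/skill\s+(\S+)\s*$/);
  if (match) {
    return { kind: "tool", toolName: "activate_skill", args: { name: match[1] } };
  }
  // Everything else is forwarded to the LLM as a plain prompt.
  return { kind: "prompt", text: raw };
}
```

The point of the discriminated return type is that the session's run loop can branch once, before any LLM call is made, rather than the UI pre-parsing the input.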
Tasks
- Move parsing of slash commands and `@`-mentions out of the UI hooks and into the core agent logic (e.g., intercepting the initial text in `LegacyAgentProtocol._runLoop`). UI-only commands (e.g., `/help` or `/clear`) should remain handled by the UI.
- In `LegacyAgentSession`: When a command string is intercepted, the session should execute the corresponding tool directly via the `Scheduler`.
- Emit `tool_request` and `tool_response` events for these intercepted commands. Ensure these events are flagged (e.g., `isClientInitiated: true` in the meta payload) so the UI can render them appropriately.
- Ensure `LegacyAgentSession` automatically injects the synthetic "model" and "user" turns into the underlying LLM history so subsequent requests have the correct context.
- In `useAgentStream.ts`: Remove the explicit execution logic for backend commands, simply passing the raw text to `agent.send({ message: ... })`.
- When migrating command processors from `packages/cli` to `packages/core`, ensure the existing behavior and interfaces used by the stable `useGeminiStream.ts` are strictly preserved without regressions.
- Add a `TODO` comment in the codebase tracking the future migration of command autocomplete/discovery to the `AgentProtocol`.
Relevant Files
- `packages/core/src/agent/legacy-agent-session.ts`: Implement interception, execution, and history injection logic.
- `packages/cli/src/ui/hooks/useAgentStream.ts`: Simplify input processing to pass raw text.
- (Various existing command processors in `packages/cli/src/ui/hooks/` may need to be migrated to `packages/core`).
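On the UI side, the simplification in `useAgentStream.ts` reduces to forwarding raw input. The `Agent` interface below is an illustrative stand-in for the real `AgentProtocol` surface, which this sketch does not attempt to reproduce:

```typescript
// Minimal sketch of the "dumb terminal" input path: the hook no longer
// parses `/skill` or `@`-mentions, it just forwards the raw string.
// The `Agent` interface is an illustrative stand-in, not the real
// AgentProtocol type from packages/core.
interface AgentEvent {
  type: "tool_request" | "tool_response" | "text";
  payload?: unknown;
}

interface Agent {
  send(input: { message: string }): void;
  onEvent(handler: (event: AgentEvent) => void): void;
}

function submitUserInput(agent: Agent, rawText: string): void {
  // No command parsing, no tool scheduling, no history injection:
  // the agent decides whether this is a command or a plain prompt.
  agent.send({ message: rawText });
}
```

With this shape, the UI's only remaining responsibilities are rendering the events the agent emits and (for now) autocomplete.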
Acceptance Criteria
- Sending a string like `/skill [name]` via `agent.send()` results in the agent emitting tool lifecycle events for the `activate_skill` tool.
- The UI properly renders these synthetic events.
- The LLM context correctly reflects the outcome of the command in subsequent turns.
- Autocomplete continues to function in the TUI as it does today.
- The stable `useGeminiStream` flow continues to function without changes to its user-facing behavior.
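For the first two criteria, the session would emit a request/response pair for the intercepted command. The event shapes below are assumptions; only the event names, the `isClientInitiated` flag, and the `activate_skill` tool come from this issue:

```typescript
// Sketch of the synthetic lifecycle events for an intercepted command.
// Only the event names, tool name, and `isClientInitiated` flag are
// taken from this issue; the concrete shapes are illustrative.
interface ToolLifecycleEvent {
  type: "tool_request" | "tool_response";
  toolName: string;
  meta: { isClientInitiated: boolean };
  result?: unknown;
}

function syntheticToolEvents(toolName: string, result: unknown): ToolLifecycleEvent[] {
  // The flag lets the UI distinguish these from model-initiated calls.
  const meta = { isClientInitiated: true };
  return [
    { type: "tool_request", toolName, meta },
    { type: "tool_response", toolName, meta, result },
  ];
}
```

A request always precedes its response, so the UI can render the pair with the same lifecycle treatment it already gives model-initiated tool calls.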