Conversation
- Replace simple tool presence check with detailed tool source tracking
- Add new handleToolCalls function to manage mixed tool scenarios
- Implement proper tool merging logic with server priority
- Add comprehensive tests for tool merging and handling
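The server-priority merge named in the third bullet is straightforward to picture. Below is a minimal, self-contained sketch — not the PR's actual code; the Tool struct, Source field, and mergeTools name are all illustrative — of combining client tools under server priority:

package main

import "fmt"

// Tool is an illustrative stand-in for the bridge's tool descriptor.
type Tool struct {
	Name   string
	Source string // "server" or "client", used here only for demonstration
}

// mergeTools combines server-configured and client-requested tools.
// On a name conflict, the server-side definition wins.
func mergeTools(serverTools, clientTools []Tool) []Tool {
	seen := make(map[string]bool, len(serverTools))
	merged := make([]Tool, 0, len(serverTools)+len(clientTools))
	for _, t := range serverTools {
		t.Source = "server"
		seen[t.Name] = true
		merged = append(merged, t)
	}
	for _, t := range clientTools {
		if seen[t.Name] {
			continue // a server tool with the same name takes priority
		}
		t.Source = "client"
		merged = append(merged, t)
	}
	return merged
}

func main() {
	server := []Tool{{Name: "get_weather"}}
	client := []Tool{{Name: "get_weather"}, {Name: "render_chart"}}
	for _, t := range mergeTools(server, client) {
		fmt.Printf("%s (%s)\n", t.Name, t.Source)
	}
	// Output:
	// get_weather (server)
	// render_chart (client)
}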
Add support for streaming client tool calls as soon as they are received instead of buffering them. This improves responsiveness when handling client tools in the chat stream.
- Add new fields to track streamed tool calls and allowed indexes
- Implement filtering for client tool calls
- Flush non-finish chunks before streaming tool calls
- Add test case for immediate streaming of client tools
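As a rough sketch of the change, assuming stream chunks carry indexed tool-call deltas — the names below (toolCallDelta, toolCallStreamer, onDelta) are hypothetical, not the PR's fields:

package bridge

// toolCallDelta is a hypothetical slice of an incremental tool call.
type toolCallDelta struct {
	Index int    // position of the tool call within the response
	Name  string // function name, present on the first delta for an index
	Args  string // incremental JSON arguments
}

// toolCallStreamer tracks which tool-call indexes belong to client tools and
// whether pending non-finish chunks have been flushed yet.
type toolCallStreamer struct {
	allowedIdx map[int]bool
	flushed    bool
}

// onDelta forwards a client tool-call delta as soon as it arrives instead of
// buffering it until the stream finishes.
func (s *toolCallStreamer) onDelta(d toolCallDelta, flush func(), send func(toolCallDelta)) {
	if !s.allowedIdx[d.Index] {
		return // server tool call: held back for internal execution
	}
	if !s.flushed {
		flush() // drain buffered non-finish chunks first so ordering is preserved
		s.flushed = true
	}
	send(d)
}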
Implement recording of AI chat requests and responses for both streaming and non-streaming scenarios.
- Add support for recording transaction ID, provider name, and call index
- Include tests for recording functionality and environment variable handling
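A plausible shape for such a recorder, gated by an environment variable — the struct fields, output file name, and the YOMO_RECORD_AI variable are all assumptions, since the commit only names the transaction ID, provider name, and call index:

package bridge

import (
	"encoding/json"
	"os"
	"time"
)

// aiRecord captures one request/response pair; field names are guesses.
type aiRecord struct {
	TransID   string          `json:"trans_id"`
	Provider  string          `json:"provider"`
	CallIndex int             `json:"call_index"`
	Timestamp time.Time       `json:"timestamp"`
	Request   json.RawMessage `json:"request"`
	Response  json.RawMessage `json:"response"`
}

// recordCall appends one JSON line per call when recording is enabled.
func recordCall(transID, provider string, callIndex int, req, resp []byte) error {
	if os.Getenv("YOMO_RECORD_AI") == "" { // assumed env var name
		return nil
	}
	f, err := os.OpenFile("ai_records.jsonl", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	return json.NewEncoder(f).Encode(aiRecord{
		TransID:   transID,
		Provider:  provider,
		CallIndex: callIndex,
		Timestamp: time.Now(),
		Request:   req,
		Response:  resp,
	})
}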
The condition for writing stream events was too restrictive, preventing usage information from being sent when there were no choices but token usage was available. This change ensures usage data is properly transmitted in such cases.
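In go-openai terms, the loosened condition might look like this sketch (the function name is an assumption; the point is the || on usage):

package bridge

import openai "github.com/sashabaranov/go-openai"

// shouldWriteChunk emits a stream event when the chunk has choices, or when
// it is the trailing usage-only chunk (empty Choices but non-nil Usage).
func shouldWriteChunk(chunk openai.ChatCompletionStreamResponse) bool {
	return len(chunk.Choices) > 0 || chunk.Usage != nil
}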
Summary of Changes (Gemini Code Assist)

This pull request significantly refactors the tool handling within the LLM bridge to support a more sophisticated tool management strategy. It enables the system to merge tools originating from both the server configuration and client requests, ensuring server-defined tools take precedence. A key improvement is the ability to intelligently process these tools: executing server-side tools internally while correctly filtering and streaming client-intended tool calls back to the client. Additionally, a new response recording feature has been integrated to aid in debugging and observability of AI interactions.
Codecov Report

❌ Patch coverage is […]. Additional details and impacted files:

@@ Coverage Diff @@
## master #1203 +/- ##
==========================================
+ Coverage 48.09% 48.55% +0.46%
==========================================
Files 93 94 +1
Lines 5510 5756 +246
==========================================
+ Hits 2650 2795 +145
- Misses 2664 2737 +73
- Partials 196 224 +28

☔ View full report in Codecov by Sentry.
Code Review
This pull request introduces a significant feature to merge server-side and client-side tools for LLM function calling. It correctly prioritizes server-side tools in case of conflicts. The logic to handle different tool call scenarios (server-only, client-only, mixed) is well-implemented, especially for streaming responses. Additionally, a new recording mechanism has been added to trace requests and responses, which is very useful for debugging.
I've identified a couple of areas for improvement. One is a minor code duplication that can be refactored for better readability. The other is a more critical issue where client-side tool calls seem to be handled incorrectly for /invoke requests, potentially leading to lost information in the response. The new functionality is well-tested with new test files covering various scenarios.
case *invokeResp:
	return r.writeResponse(w, chatCtx)
}
The handling of *invokeResp in this function appears to be incorrect when client-side tool calls are present. The case *invokeResp calls r.writeResponse(w, chatCtx), which generates an ai.InvokeResponse. This response type does not include tool calls, so they will be dropped from the final response sent to the client. This will likely result in the client receiving an empty or incomplete response.
If the /invoke endpoint is intended to support client-side tool calls, the response mechanism needs to be adjusted to include them. One option is to return a full openai.ChatCompletionResponse in this case, similar to the *chatResp case.
If client-side tools are not supported for /invoke, then an error should probably be returned here to indicate an invalid request configuration.
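The first option could look roughly like this sketch built on go-openai types — the function and its placement are illustrative, not the PR's code:

package bridge

import openai "github.com/sashabaranov/go-openai"

// buildInvokeResponse returns the full chat-completion form when client tool
// calls are present, so they are not dropped from the /invoke response.
func buildInvokeResponse(content string, clientToolCalls []openai.ToolCall) openai.ChatCompletionResponse {
	msg := openai.ChatCompletionMessage{
		Role:    openai.ChatMessageRoleAssistant,
		Content: content,
	}
	if len(clientToolCalls) > 0 {
		msg.ToolCalls = clientToolCalls // preserved instead of being discarded
	}
	return openai.ChatCompletionResponse{
		Choices: []openai.ChatCompletionChoice{{Message: msg}},
	}
}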
if hasServerTool && hasClientTool {
	// Mixed case: discard server tools, only pass client tools
	if reqStream {
		w.SetStreamHeader()
		w.Flush()
	}
	err := writeClientToolCallsResponse(w, chatCtx, resp, clientToolCalls)
	respSpan.End()
	if err != nil {
		return false, err
	}
	return false, nil
} else if hasClientTool {
	// Only client tools: pass through to client
	if reqStream {
		w.SetStreamHeader()
		w.Flush()
	}
	err := writeClientToolCallsResponse(w, chatCtx, resp, clientToolCalls)
	respSpan.End()
	if err != nil {
		return false, err
	}
	return false, nil
} else {
	// Only server tools: execute them
	if err := doToolCall(ctx, chatCtx, toolCalls, w, caller, tracer, reqStream, transID, agentContext); err != nil {
		return false, err
	}
	return true, nil
}
The logic in the if hasServerTool && hasClientTool and else if hasClientTool blocks is nearly identical. This duplication can be removed by combining them into a single if hasClientTool block, which would improve code readability and maintainability.
if hasClientTool {
	// Mixed or client-only case: pass client tools to client
	if reqStream {
		w.SetStreamHeader()
		w.Flush()
	}
	err := writeClientToolCallsResponse(w, chatCtx, resp, clientToolCalls)
	respSpan.End()
	if err != nil {
		return false, err
	}
	return false, nil
}
// Only server tools: execute them
if err := doToolCall(ctx, chatCtx, toolCalls, w, caller, tracer, reqStream, transID, agentContext); err != nil {
	return false, err
}
return true, nil

…ew in non-stream mode

Fix a bug where the gemini-3.1-flash-lite-preview model in non-stream mode requires non-user messages to have the "model" role instead of their original roles.
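A minimal sketch of what that remapping could look like — the model check and the literal "model" role come from the commit message above; everything else is illustrative:

package bridge

import openai "github.com/sashabaranov/go-openai"

// remapRolesForGemini rewrites non-user roles to "model" for the affected
// model in non-stream mode, per the fix described above.
func remapRolesForGemini(model string, msgs []openai.ChatCompletionMessage) []openai.ChatCompletionMessage {
	if model != "gemini-3.1-flash-lite-preview" {
		return msgs
	}
	out := make([]openai.ChatCompletionMessage, len(msgs))
	for i, m := range msgs {
		if m.Role != openai.ChatMessageRoleUser {
			m.Role = "model" // Gemini expects non-user turns under the "model" role
		}
		out[i] = m
	}
	return out
}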