Feat/fix vertex chunk parsing instability#3606

Open
hongshi1 wants to merge 2 commits into alibaba:main from hongshi1:feat/fix-vertex-chunk-parsing-instability
Conversation

@hongshi1

Summary

This PR hardens Vertex streaming response parsing in ai-proxy.

Previously, the Vertex streaming handler parsed each incoming chunk directly, splitting it on newlines and unmarshalling line by line. That was fragile when an upstream SSE data: event was split across multiple network chunks, and it could drop the final payload because the handler emitted [DONE] immediately on the last chunk.

Changes

  • add buffered SSE data-line extraction helpers for streaming parsing
  • make Vertex streaming response handling support split/incomplete chunks safely
  • keep the final payload conversion on the last chunk, then append data: [DONE]
  • add provider-level regression tests for:
    • split chunk reconstruction
    • final chunk payload preservation before [DONE]

Verification

go test ./provider -run 'TestVertexStreamingResponseBody|TestVertexStreamingResponseBodyBuffersSplitChunks|TestVertexStreamingResponseBodyKeepsFinalPayloadBeforeDone'
go test -run TestVertex ./...

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


gexuancheng does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

