Conversation
The snapshot build was published: https://github.com/vercel/ai/actions/runs/21530986703/job/62046593364
@gr2m I did some work on improving this from the workflow side, might be relevant to you? vercel/workflow#928 Still not full compatibility, but we got stuck on the same issues when trying to port a v6 app to use workflow.
@gr2m I think this is the right approach, avoiding a lot of compatibility layers. However, it seems that the code is copied more or less 1:1 from the workflow code, which has simplified lots of things. For example, OpenAI's

I would offer to help, but I assume you already have your own ideas about how things should work (and I know little about the AI SDK internals). Anyway, if I can do anything, I'm happy to help.
@KaiKloepfer thanks will have a look! @rovo89 I'm focused on #12381 right now, please feel free to send PRs for exploration of different approaches. |
My current attempt is to simply use ToolLoopAgent in a step function. 🙈

```ts
export async function chat(
  writable: WritableStream<UIMessageChunk>,
  messages: UIMessage[],
) {
  'use step';
  const agent = new ToolLoopAgent({...});
  const stream = await createAgentUIStream({
    agent,
    uiMessages: messages,
  });
  await stream.pipeTo(writable);
}
```

Streaming works fine and I can use it exactly like I'm used to. DurableAgent has quite a few limitations:
But of course, it has benefits. As far as I understood:
Some ideas for how similar features could be achieved in (a subclass of) ToolLoopAgent:
The globalThis.AI_SDK_DEFAULT_PROVIDER accesses are already properly typed via declare global in src/global.ts. Restore @ts-expect-error for the experimental videoModel access (preferred over @ts-ignore). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
These files were added during development and should not be merged. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
experimental_output is deprecated in the AI SDK. Use the non-experimental output parameter and property name instead. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The internal entry point re-exports resolveLanguageModel but didn't have the globalThis type augmentation in scope, causing DTS build failures for AI_SDK_DEFAULT_PROVIDER. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```ts
 * Merges two optional callbacks into one that calls both sequentially.
 * The first callback (from constructor/settings) runs before the second (from method).
 */
export function mergeCallbacks<
```
Should this live in packages/provider-utils, or in packages/ai/src/util where it is now?
https://github.com/vercel/ai/blob/4e8fa0e3bb478b72b243a972e53c5834517b3159/packages/ai/src/util/merge-listeners.ts
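For context, a minimal sketch of what such a merge helper could look like. This is a hypothetical illustration of the behavior described in the doc comment above, not the actual implementation in `merge-listeners.ts`:

```typescript
type Listener<EVENT> = (event: EVENT) => void | PromiseLike<void>;

// Merges two optional listeners into one that calls both sequentially.
// The first listener (e.g. from constructor settings) runs before the
// second (e.g. from a method call). Sketch only; names are assumptions.
function mergeListeners<EVENT>(
  first: Listener<EVENT> | undefined,
  second: Listener<EVENT> | undefined,
): Listener<EVENT> | undefined {
  if (first == null) return second;
  if (second == null) return first;
  return async (event: EVENT) => {
    await first(event);
    await second(event);
  };
}
```

Returning `undefined` when both inputs are absent keeps the merged listener optional, so call sites can skip emission entirely when nothing is registered.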
```ts
// private, making it invisible to external type checks. The static
// WORKFLOW_SERIALIZE method has runtime access to the field regardless.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export function serializeModel(inst: any): {
```
Should this be typed as `LanguageModelV4`, since the docs say it is for a language model? (Also, should the param be called `model`?)
`.config` is a private implementation detail. We have access to it at runtime, but TypeScript has no way to express that. Hence the `any`.
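To illustrate the point: TypeScript's `private` modifier is erased at runtime, so a field is still a plain property, and code that takes `any` can read it even though typed external access is a compile error. A toy sketch (the class and field are hypothetical, not the real provider models):

```typescript
class ExampleModel {
  // `private` only exists at compile time; at runtime this is an
  // ordinary property, but external typed code cannot reference it.
  private config = { baseURL: 'https://api.example.com' };

  // A static method taking `any` reads the field at runtime regardless.
  static WORKFLOW_SERIALIZE(inst: any): { config: { baseURL: string } } {
    return { config: inst.config };
  }
}
```

This is exactly why the standalone `serializeModel(inst: any)` helper above cannot be given a stricter type: the field it needs is invisible to external type checks.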
````ts
 * ```
 */
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export function deserializeModelConfig<T>(config: T): T {
````
Alternative: a `deserializeModel` that accepts the model class prototype and the config, and then returns the model (seems better if possible, because then `serializeModel` / `deserializeModel` are clear opposites).
Done in c11b83e — added deserializeModel(ModelClass, options) as the symmetric opposite of serializeModel, and updated all provider model classes to use it.
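The symmetry described here can be sketched as a round-trip: serialize a model down to plain data, then rebuild it from its class and that data. The shapes below are illustrative assumptions, not the actual `@ai-sdk/provider-utils` signatures:

```typescript
// Plain, step-boundary-safe representation of a model.
interface SerializedModel<CONFIG> {
  modelId: string;
  config: CONFIG;
}

function serializeModel<CONFIG>(inst: {
  modelId: string;
  config: CONFIG;
}): SerializedModel<CONFIG> {
  return { modelId: inst.modelId, config: inst.config };
}

// Symmetric opposite: takes the model class and the serialized payload,
// returns a fresh instance.
function deserializeModel<CONFIG, MODEL>(
  ModelClass: new (modelId: string, config: CONFIG) => MODEL,
  options: SerializedModel<CONFIG>,
): MODEL {
  return new ModelClass(options.modelId, options.config);
}

// Toy model class for the round-trip (hypothetical).
class ToyModel {
  constructor(
    readonly modelId: string,
    readonly config: { baseURL: string },
  ) {}
}
```

Accepting the constructor rather than mutating a prototype keeps the helper usable with any model class that follows the `(modelId, config)` constructor convention.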
```ts
  return result;
}

function isSerializable(value: unknown): boolean {
```
Yes, some models are not serializable because of async headers methods. I mentioned this in the PR description:
Remaining work from Phase 3: async headers providers. Four providers have async `getHeaders` which can't be resolved synchronously at serialization time. These need per-provider handling or a model factory function workaround:

- Gateway — async OIDC token resolution (`AI_GATEWAY_API_KEY` env var fallback)
- Amazon Bedrock (anthropic subprovider) — async SigV4 credential loading (`AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` env vars)
- KlingAI — async JWT generation from `KLINGAI_ACCESS_KEY`/`KLINGAI_SECRET_KEY` env vars
- Google Vertex — async `Resolvable` headers (`GOOGLE_VERTEX_API_KEY` env var for express mode)
The workflow team is looking into support of async serialize methods which will unblock support for these providers.
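One shape the "model factory function workaround" could take (a hypothetical sketch; the real API may differ): serialize only plain data such as the model ID, and defer model construction, including async credential resolution, to code running inside the step:

```typescript
// Only the model id crosses the step boundary; the factory runs inside
// the step and can await async auth there. All names are illustrative.
type ModelFactory<MODEL> = (modelId: string) => Promise<MODEL>;

interface FakeModel {
  modelId: string;
  headers: Record<string, string>;
}

// Stands in for a provider whose getHeaders() is async, e.g. token
// fetching or SigV4 credential loading.
async function createModelWithAsyncAuth(modelId: string): Promise<FakeModel> {
  const token = await Promise.resolve('example-token'); // async credential load
  return { modelId, headers: { Authorization: `Bearer ${token}` } };
}

async function runStep(
  modelId: string,
  factory: ModelFactory<FakeModel>,
): Promise<string> {
  const model = await factory(modelId);
  return model.headers.Authorization;
}
```

The trade-off is that the factory itself must be available on the step side; async serialize support, as mentioned above, would remove that requirement.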
Resolved conflict in tool-loop-agent.ts: adopted main's rename of mergeCallbacks to mergeListeners throughout. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ges/ai/src/model/resolve-model.ts`
Introduces `deserializeModel(ModelClass, options)` which accepts a model
class constructor and the serialized `{ modelId, config }` payload,
creating a clean `serializeModel` / `deserializeModel` symmetry.
Updated all 58 provider model classes to use the new helper in their
`WORKFLOW_DESERIALIZE` implementations. `deserializeModelConfig` is kept
exported for the one non-standard case (GoogleGenerativeAIImageModel
with a 3-arg constructor) and backward compatibility.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… support (#14340)

## Background

The `WorkflowChatTransport` class was implemented in `@ai-sdk/workflow` but not exported, making it unavailable to consumers. This transport is needed for `useChat` to enable automatic stream reconnection in workflow-based chat apps — handling network failures, page refreshes, and function timeouts by reconnecting to the workflow's stream endpoint.

Reference: [DurableChatTransport](https://useworkflow.dev/docs/api-reference/workflow-ai/workflow-chat-transport)

## Summary

- Export `WorkflowChatTransport` class and related types (`WorkflowChatTransportOptions`, `SendMessagesOptions`, `ReconnectToStreamOptions`) from `@ai-sdk/workflow`
- Add `initialStartIndex` option for resuming streams from the tail (negative values like `-50` fetch only the last 50 chunks, useful for page refresh recovery without replaying the full conversation)
- Implement `x-workflow-stream-tail-index` header resolution to compute absolute chunk positions from negative start indices, with graceful fallback to replay-from-start when the header is missing
- Fix positive `startIndex` reconnection: set `chunkIndex` to match the explicit start position so retries after disconnection resume correctly
- Add `startIndex` per-call override on `ReconnectToStreamOptions`
- Extract `getErrorMessage` utility for proper error formatting in reconnection failures (avoids `[object Object]` in error messages)
- Update `examples/next-workflow` main page to use `WorkflowChatTransport` with `useChat`
- Add `examples/next-workflow/test` page with mock API routes that simulate stream interruption and verify reconnection recovery end-to-end

## Documentation

- **API reference**: New `WorkflowChatTransport` reference page at `docs/reference/ai-sdk-workflow/workflow-chat-transport` with constructor parameters, methods, reconnection flow, server requirements, and examples
- **Workflow agent guide**: New "Resumable Streaming" section with client and server endpoint examples
- **Transport guide**: New "Workflow Transport" section linking to the reference
- **Workflow reference index**: Added `WorkflowChatTransport` card

## Manual Verification

1. Started `examples/next-workflow` dev server (`pnpm next dev`)
2. **Happy path** (`/`): Sent "What is the weather in San Francisco?" — WorkflowAgent called `getWeather` tool, responded with "86°F and windy". Sent "What is 42 * 17?" — called `calculate` tool, responded "714". Both messages used `WorkflowChatTransport`.
3. **Stream interruption + reconnection** (`/test`): The test page uses mock API routes where the POST endpoint sends only 2 of 6 SSE chunks (no `finish` event), simulating a function timeout. The transport detected the missing `finish` chunk, automatically reconnected via GET to `/api/test-chat/{runId}/stream?startIndex=2`, received the remaining chunks, and displayed the complete message. The transport log panel confirmed the full lifecycle:
   - `POST response received` (`onChatSendMessage` callback fired)
   - `Status: streaming` (partial stream consumed)
   - Auto-reconnect via GET (transparent to the user)
   - `Chat ended: total chunks=6` (`onChatEnd` callback fired)
   - `Status: ready`

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

Follow-up to #12165

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
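The negative-start-index resolution described above could be sketched roughly like this. It assumes the `x-workflow-stream-tail-index` header reports the index of the last chunk; the actual semantics live in the transport implementation:

```typescript
// Resolves a possibly-negative startIndex against the stream tail index
// reported by the server. With tailIndex = 99 (100 chunks so far) and
// startIndex = -50, the client fetches from chunk 50. Falls back to
// replaying from the start when the header is missing or malformed.
// Hypothetical sketch of the behavior described above.
function resolveStartIndex(
  startIndex: number,
  tailIndexHeader: string | null,
): number {
  if (startIndex >= 0) return startIndex;
  if (tailIndexHeader == null) return 0; // graceful fallback: replay from start
  const tailIndex = Number(tailIndexHeader);
  if (!Number.isFinite(tailIndex)) return 0;
  return Math.max(0, tailIndex + 1 + startIndex);
}
```

The `Math.max(0, …)` clamp covers the case where fewer chunks exist than the requested tail window, in which case the whole stream is replayed.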

Create a new `@ai-sdk/workflow` package that exports `WorkflowAgent`, which will be the successor of `DurableAgent`.

ToolLoopAgent parity plan
The underlying `streamText` in core already supports all 6 callback types. The gap is that WorkflowAgent doesn't accept or pass them through. However, WorkflowAgent doesn't call `streamText` directly — it uses `streamTextIterator` → `doStreamStep` → `streamModelCall`. So the callbacks need to be threaded through that chain.
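The threading pattern can be sketched generically: optional callbacks ride along in an options object, and the innermost loop emits them at the point where the events actually occur. All names here are illustrative, not the actual AI SDK internals:

```typescript
// Callbacks flow from the public API down to the loop that owns the events.
interface LoopCallbacks {
  onStepStart?: (event: { stepNumber: number }) => void;
  onStepFinish?: (event: { stepNumber: number }) => void;
}

async function doStep(stepNumber: number): Promise<void> {
  // the model call for this step would happen here
}

async function runIterator(
  steps: number,
  callbacks: LoopCallbacks,
): Promise<void> {
  for (let stepNumber = 0; stepNumber < steps; stepNumber++) {
    callbacks.onStepStart?.({ stepNumber }); // emit where the event occurs
    await doStep(stepNumber);
    callbacks.onStepFinish?.({ stepNumber });
  }
}
```

The point of passing the whole callbacks object through each layer, rather than emitting from the outer API, is that only the inner loop knows when a step actually starts and finishes.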
Phase 1: Wire missing callbacks through WorkflowAgent API ✅ (#14036)
- Add the 4 missing callback types to `WorkflowAgentOptions` and `WorkflowAgentStreamOptions` interfaces
- Add `mergeCallbacks` utility (extracted from ToolLoopAgent pattern)
- Pass callbacks through `streamTextIterator` to `doStreamStep` (which uses `streamModelCall`)
- Emit callbacks at the right points in the iterator loop

Unblocked 14 of 16 GAP tests. 2 remain as `it.fails()` to track event shape parity (see below).

Remaining work from Phase 1: Align callback event shapes with ToolLoopAgent. WorkflowAgent's callback events are simpler than ToolLoopAgent's. ToolLoopAgent events (defined in `core-events.ts`) include `callId`, `provider`, `modelId`, `stepNumber`, `messages`, `abortSignal`, `functionId`, `metadata`, `experimental_context`, a typed `toolCall` with `TypedToolCall<TOOLS>`, and `durationMs` on tool call finish. WorkflowAgent events currently only provide a subset (e.g., `onToolCallStart` only has `toolCall` with a plain `ToolCall` type). Once the event shapes converge, the callback types could be unified as shared `AgentOnStartCallback`, `AgentOnStepStartCallback`, etc. instead of separate `WorkflowAgent*` and `ToolLoopAgent*` types.

Phase 2: Add prepareCall support ✅ (#14037)

- Add `prepareCall` to `WorkflowAgentOptions`
- Call it in `stream()` before the iterator, similar to ToolLoopAgent's `prepareCall()`

Unblocked 1 GAP test.
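A sketch of how a `prepareCall` hook is typically invoked once before the loop, merging its partial result over the configured defaults (the types and names here are assumptions for illustration):

```typescript
interface CallSettings {
  model: string;
  instructions?: string;
  temperature?: number;
}

// May return a subset of settings to override, sync or async.
type PrepareCall = (
  settings: CallSettings,
) => Partial<CallSettings> | Promise<Partial<CallSettings>>;

async function resolveCallSettings(
  defaults: CallSettings,
  prepareCall?: PrepareCall,
): Promise<CallSettings> {
  // prepareCall runs exactly once, before the agent loop starts.
  const overrides = prepareCall ? await prepareCall(defaults) : {};
  return { ...defaults, ...overrides };
}
```

Because the hook runs before the iterator, its overrides apply uniformly to every step of the loop rather than per step.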
Remaining work from Phase 2: ToolLoopAgent's `prepareCall` also supports `stopWhen`, `activeTools`, and `experimental_download` in its input/output types — these are not yet in WorkflowAgent's `PrepareCallOptions`/`PrepareCallResult`. Additionally, ToolLoopAgent supports typed `CALL_OPTIONS` that flow through `prepareCall` as `options` — WorkflowAgent doesn't have this concept.

Phase 3: Add workflow serialization support to all provider models ✅ (#13779)
Adds `WORKFLOW_SERIALIZE`/`WORKFLOW_DESERIALIZE` to all 59 provider model classes (language, image, embedding, speech, transcription, video). Adds `serializeModel()` and `deserializeModelConfig()` helpers to `@ai-sdk/provider-utils`:

- `serializeModel` resolves `config.headers()` at serialization time so auth credentials survive the step boundary as plain key-value objects
- `deserializeModelConfig` wraps plain-object headers back into a function on deserialization

Makes `headers` optional in all provider config types so deserialized models work without pre-configured auth. Includes documentation for third-party provider authors.

Remaining work from Phase 3: async headers providers. Four providers have async `getHeaders` which can't be resolved synchronously at serialization time. These need per-provider handling or a model factory function workaround:

- Gateway — async OIDC token resolution (`AI_GATEWAY_API_KEY` env var fallback)
- Amazon Bedrock (anthropic subprovider) — async SigV4 credential loading (`AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` env vars)
- KlingAI — async JWT generation from `KLINGAI_ACCESS_KEY`/`KLINGAI_SECRET_KEY` env vars
- Google Vertex — async `Resolvable` headers (`GOOGLE_VERTEX_API_KEY` env var for express mode)

Phase 4: Add needsApproval support ✅ (#14084)
- Before executing a tool, check `tool.needsApproval` (boolean or async function)
- If approval is needed, pause the loop and return pending tool calls (like client-side tools)
- Handle approval resumption: collect tool-approval-response parts, execute approved tools, create denial results
- Write tool results and step boundaries to the UI stream so tool parts transition to output-available state and `convertToModelMessages` produces correct message structure for multi-turn conversations

This unblocked 2 GAP tests.
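The approval check in the first bullet can be sketched like this, assuming `needsApproval` is either a boolean or a (possibly async) predicate over the tool call (the interfaces here are illustrative, not the SDK's actual types):

```typescript
interface ToolCallInfo {
  toolName: string;
  input: unknown;
}

interface ApprovableTool {
  needsApproval?:
    | boolean
    | ((call: ToolCallInfo) => boolean | Promise<boolean>);
}

// Returns true when the loop must pause and surface a pending tool call
// instead of executing the tool directly.
async function requiresApproval(
  tool: ApprovableTool,
  call: ToolCallInfo,
): Promise<boolean> {
  const { needsApproval } = tool;
  if (typeof needsApproval === 'function') return await needsApproval(call);
  return needsApproval === true;
}
```

Allowing a predicate means approval can depend on the actual arguments (e.g. only destructive inputs need a human in the loop), while the boolean form covers the always/never cases.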
Phase 5: Telemetry integration listeners
This unblocks 3 GAP tests.
Phase 6: Clean up duplication
- Extract shared `mergeCallbacks` utility ✅ (done in feat(workflow): add onStart, onStepStart, onToolCallStart, onToolCallFinish callbacks #14036 — moved to `ai/internal`, used by both ToolLoopAgent and WorkflowAgent)

Future work
Done
- Extracts UIMessageChunk conversion from `doStreamStep` into a standalone utility, making the model streaming layer independent of UI concerns. `doStreamStep` returns raw `LanguageModelV4StreamPart[]` chunks; UIMessageChunk conversion is a separate, optional step. `writable` becomes optional in `WorkflowAgentStreamOptions` — when omitted, the agent streams ModelMessages only. Follows `streamText`'s `toUIMessageStream()` pattern.
- Use `experimental_streamModelCall` in doStreamStep (refactor: use experimental_streamModelCall in doStreamStep #13820) — replaces doStreamStep internals with `experimental_streamModelCall`. Eliminates ~300 lines of duplicated stream transformation, gains tool call parsing/repair, retry logic, and `Experimental_ModelCallStreamPart` stream types.
- Use `mergeAbortSignals` from `ai/internal` (refactor: use shared mergeAbortSignals from ai/internal in WorkflowAgent #13616) — exports the existing `mergeAbortSignals` utility from `ai/internal` and replaces the manual ~25-line abort signal + timeout merging code in WorkflowAgent with the shared utility. Uses `AbortSignal.timeout()` instead of manual `setTimeout` + `AbortController`, matching how `generateText`/`streamText` handle the same concern.
- Adds `experimental_onStart`, `experimental_onStepStart`, `experimental_onToolCallStart`, `experimental_onToolCallFinish` callbacks to WorkflowAgent. Extracts `mergeCallbacks` into `ai/internal` as a shared utility used by both ToolLoopAgent and WorkflowAgent. Also fixes `sideEffects: false` breaking workflow step discovery and replaces `resolveLanguageModel` from `ai/internal` with `gateway` from `ai` to fix Next.js webpack resolution in step bundles.
- Adds `prepareCall` callback to WorkflowAgentOptions, called once before the agent loop to transform model, instructions, generation settings, etc. `tools` is excluded from the return type since tools are bound at construction time for type safety.

No longer pursued
- Stream-to-StepResult aggregator (Extract chunksToStepResult as shared utility in ai/internal #13459) — `chunksToStep` was removed when `experimental_streamModelCall` was adopted in refactor: use experimental_streamModelCall in doStreamStep #13820.
- Public message conversion (step → messages for next turn) (Export toResponseMessages from ai/internal #13474) — `toolsToModelTools` was removed when `experimental_streamModelCall` was adopted in refactor: use experimental_streamModelCall in doStreamStep #13820.
- Tool descriptor extraction (Extract getToolDescriptors as shared utility in ai/internal #13477) — `toolsToModelTools()` was removed with the `streamModelCall` refactor.
- Extract V3→UIMessageChunk stream transform as shared utility (Extract V3→UIMessageChunk transform as shared utility in ai/internal #13433) — superseded by the `streamModelCall` refactor which eliminated the V3 stream layer.
- String-based model IDs in core functions — already supported: the `LanguageModel` type accepts strings, and all core functions resolve them via the global provider.
- Declarative stop conditions — not necessary: `stopWhen` runs in the agent's control loop outside step boundaries, so function-based conditions work without serialization. `maxSteps` covers the simple case as a serializable number.

Notes
- `repairToolCall` is not serializable across step boundaries. `ToolCallRepairFunction` is a function and can't cross the `'use step'` serialization boundary. Left out of WorkflowAgent for now — `experimental_streamModelCall` handles repair internally when called outside a step boundary.
- `abortSignal` serialization. `AbortSignal` objects can't be serialized across step boundaries. The Workflow team is working on adding serialization support for abort signals.
- `zod`. Serialization is lossy.

Related issues