
Create an AI SDK Glossary (ubiquitous glossary) #14234

@gr2m

Description


We want to specify the terms we use throughout the documentation and code base.

Some terms we want to start with:

More suggestions, generated with the ubiquitous-language skill by Matt Pocock:

AI SDK Glossary

Domain terminology for the Vercel AI SDK. Use these terms consistently in code, documentation, and conversation.

| Term | Definition | Aliases to avoid |
| --- | --- | --- |
| AI Function | A user-facing function that orchestrates a model call (e.g., generateText, streamText) | Helper, wrapper, utility |
| Provider Model | ... | Api Client |
| Model | ... | ... |
| Model ID | A provider-specific string identifying a particular model variant (e.g., "gpt-4o", "claude-3-opus") | Model name, model identifier |
| Model Specification | An interface that defines the contract a provider must implement for a given model type (e.g., LanguageModelV4) | Model interface, model API, model contract |
| Provider | A package that implements one or more model specifications for a specific AI service (e.g., @ai-sdk/openai) | Adapter, connector, integration, client |
| Provider Options | An opaque key-value object passed through to a specific provider without SDK interpretation | Provider metadata, provider config, extra options |
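The provider / model-specification relationship can be sketched in simplified TypeScript. The interface and class names below are illustrative stand-ins (not the SDK's actual LanguageModelV4 or provider exports):

```typescript
// Illustrative sketch of a Model Specification and a Provider implementing it.
// Names and shapes are simplified stand-ins, not the SDK's real types.

interface LanguageModelSpec {
  readonly modelId: string; // provider-specific Model ID, e.g. "gpt-4o"
  doGenerate(prompt: string): { text: string };
}

// A provider package supplies concrete implementations of the specification.
class ExampleChatLanguageModel implements LanguageModelSpec {
  constructor(readonly modelId: string) {}
  doGenerate(prompt: string) {
    return { text: `[${this.modelId}] ${prompt}` };
  }
}

// Providers typically expose a factory keyed by Model ID.
const exampleProvider = (modelId: string): LanguageModelSpec =>
  new ExampleChatLanguageModel(modelId);

const model = exampleProvider("example-model-1");
const generation = model.doGenerate("hello");
```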

Prompts & Messages

| Term | Definition | Aliases to avoid |
| --- | --- | --- |
| Prompt | The complete input to an AI function, combining an optional system message with either a prompt string or a messages array | Input, request, query |
| Message | A discrete unit of communication in a prompt, with a role and content | Turn, entry, chat message |
| System Message | A message that defines the model's behavior, personality, or role | System prompt, instructions |
| User Message | A message from the human user, which can contain text, images, or files | Human message, input message |
| Assistant Message | A message previously generated by the model, containing text, reasoning, tool calls, or tool results | AI message, bot message, model message |
| Tool Message | A message containing the result of a tool execution, sent back to the model | Function result message |
| Content Part | A typed segment within a message (text, image, file, tool call, tool result, reasoning) | Block, chunk, element |
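The message roles and content parts above can be sketched as simplified TypeScript discriminated unions. These are illustrative shapes only, not the SDK's exact exported types:

```typescript
// Simplified sketches of the glossary's message concepts.
// Field names are illustrative, not the AI SDK's exact types.

type TextPart = { type: "text"; text: string };
type ImagePart = { type: "image"; image: string }; // URL or base64 data
type ToolCallPart = {
  type: "tool-call";
  toolCallId: string;
  toolName: string;
  input: unknown;
};
type ToolResultPart = {
  type: "tool-result";
  toolCallId: string;
  toolName: string;
  output: unknown;
};

type SystemMessage = { role: "system"; content: string };
type UserMessage = {
  role: "user";
  content: string | Array<TextPart | ImagePart>;
};
type AssistantMessage = {
  role: "assistant";
  content: string | Array<TextPart | ToolCallPart>;
};
type ToolMessage = { role: "tool"; content: ToolResultPart[] };

type Message = SystemMessage | UserMessage | AssistantMessage | ToolMessage;

// A prompt: optional system message plus a messages array.
const messages: Message[] = [
  { role: "system", content: "You are a concise assistant." },
  { role: "user", content: [{ type: "text", text: "What is 2 + 2?" }] },
  { role: "assistant", content: "4" },
];
```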

Tools

| Term | Definition | Aliases to avoid |
| --- | --- | --- |
| Tool | A function the model can invoke, defined with a description and input schema | Function, action, plugin, capability |
| Tool Call | An instance of the model requesting to invoke a tool, with a name, arguments, and unique ID | Function call, tool invocation, tool use |
| Tool Call ID | A unique identifier linking a tool call to its result | Call ID, invocation ID |
| Tool Result | The output returned after executing a tool call, sent back to the model | Function result, tool output, tool response |
| Tool Choice | Configuration controlling how the model selects tools: auto, none, required, or a specific tool name | Function calling mode |
| Dynamic Tool | A tool defined at runtime rather than declared upfront (e.g., MCP tools) | Runtime tool |
| Provider-Executed Tool | A tool that the provider executes on its server rather than returning the call to the client | Server-side tool, remote tool |
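How a Tool Call ID links a call to its result can be sketched as follows; the tool name, shapes, and the getWeather implementation are hypothetical, not SDK code:

```typescript
// Illustrative sketch: matching a Tool Result to its Tool Call by toolCallId.
// Shapes and the getWeather tool are hypothetical, not the SDK's types.

interface ToolCall {
  toolCallId: string;
  toolName: string;
  input: { city: string };
}

interface ToolResult {
  toolCallId: string;
  toolName: string;
  output: string;
}

// A hypothetical tool implementation the model can invoke.
const tools: Record<string, (input: { city: string }) => string> = {
  getWeather: ({ city }) => `Sunny in ${city}`,
};

function executeToolCall(call: ToolCall): ToolResult {
  const output = tools[call.toolName](call.input);
  // The result carries the same toolCallId so the model can pair them.
  return { toolCallId: call.toolCallId, toolName: call.toolName, output };
}

const call: ToolCall = {
  toolCallId: "call_1",
  toolName: "getWeather",
  input: { city: "Berlin" },
};
const result = executeToolCall(call);
```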

Structured Output

| Term | Definition | Aliases to avoid |
| --- | --- | --- |
| Schema | A formal specification of expected data structure that constrains model output (can be JSON Schema, Zod, or Standard Schema) | Type, shape, format, template |
| Output | Configuration for how to parse the model's response: text (default), object, array, or enum | Response format, return type |
| Strict Mode | A tool or schema setting that forces the model to strictly adhere to the schema, potentially limiting supported schema features | Exact mode, validated mode |
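The idea that a Schema constrains and validates model output can be sketched with a hand-rolled check; in practice a JSON Schema, Zod, or Standard Schema validator plays this role:

```typescript
// Illustrative sketch: a schema validates the model's object output.
// This hand-rolled check stands in for JSON Schema / Zod validation.

interface RecipeSchema {
  name: string;
  minutes: number;
}

function parseRecipe(raw: string): RecipeSchema {
  const value = JSON.parse(raw) as Partial<RecipeSchema>;
  if (typeof value.name !== "string" || typeof value.minutes !== "number") {
    throw new Error("model output does not match schema");
  }
  return { name: value.name, minutes: value.minutes };
}

// Pretend this string is the model's generated object output.
const recipe = parseRecipe('{"name":"Pancakes","minutes":20}');
```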

Streaming

| Term | Definition | Aliases to avoid |
| --- | --- | --- |
| Stream | An incremental sequence of typed chunks emitted as the model generates output | Feed, subscription, channel |
| Stream Part | A single typed event in a stream, discriminated by type (e.g., text-delta, tool-call, finish) | Chunk, event, frame, packet |
| Delta | An incremental text or tool-input fragment within a stream | Diff, partial, fragment |
| Finish | The terminal stream part signaling generation is complete, carrying usage and finish reason | Done, end, complete |
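A consumer of such a stream accumulates Deltas into the full text and reads usage from the Finish part. The part names below follow the glossary; the exact fields are simplified assumptions:

```typescript
// Illustrative sketch: consuming a stream of typed parts (text-delta, finish).
// Field names are simplified, not the SDK's exact stream part types.

type StreamPart =
  | { type: "text-delta"; delta: string }
  | { type: "tool-call"; toolCallId: string; toolName: string }
  | {
      type: "finish";
      finishReason: string;
      usage: { inputTokens: number; outputTokens: number };
    };

function collect(parts: StreamPart[]) {
  let text = "";
  let finishReason = "";
  let usage = { inputTokens: 0, outputTokens: 0 };
  for (const part of parts) {
    switch (part.type) {
      case "text-delta":
        text += part.delta; // deltas accumulate into the full text
        break;
      case "finish":
        finishReason = part.finishReason; // terminal part carries usage
        usage = part.usage;
        break;
    }
  }
  return { text, finishReason, usage };
}

const collected = collect([
  { type: "text-delta", delta: "Hello, " },
  { type: "text-delta", delta: "world!" },
  {
    type: "finish",
    finishReason: "stop",
    usage: { inputTokens: 12, outputTokens: 4 },
  },
]);
```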

Generation Flow

| Term | Definition | Aliases to avoid |
| --- | --- | --- |
| Step | A single model invocation within an agentic loop; a multi-step generation contains one step per round trip | Turn, iteration, round, cycle |
| Finish Reason | The reason a model stopped generating: stop, length, content-filter, tool-calls, error, or other | Stop reason, end reason, completion reason |
| Context | User-defined state that flows through generation steps (e.g., user info, tool execution state) | State, metadata, payload |
| Call Warning | A non-fatal issue during a model call (e.g., unsupported setting); the call still succeeds | Deprecation notice, soft error |
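The Step concept can be sketched as a loop that continues while the model's Finish Reason is tool-calls. The fake model and shapes below are illustrative, not SDK internals:

```typescript
// Illustrative sketch of an agentic loop: one Step per model round trip,
// continuing while the model requests tool calls. Shapes are simplified.

interface Step {
  finishReason: "stop" | "tool-calls" | "length";
  toolCalls: string[];
}

// A fake model that asks for a tool on the first call, then stops.
function fakeModelCall(stepIndex: number): Step {
  return stepIndex === 0
    ? { finishReason: "tool-calls", toolCalls: ["getWeather"] }
    : { finishReason: "stop", toolCalls: [] };
}

function runLoop(maxSteps: number): Step[] {
  const steps: Step[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = fakeModelCall(i);
    steps.push(step);
    if (step.finishReason !== "tool-calls") break; // terminal step
  }
  return steps;
}

const loopSteps = runLoop(5);
```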

Tokens & Usage

| Term | Definition | Aliases to avoid |
| --- | --- | --- |
| Token | The atomic unit of text that models consume (input) or produce (output) | Word, character, unit |
| Usage | Token accounting for a model call: input tokens, output tokens, and their breakdowns | Cost, consumption, metrics |
| Input Tokens | Tokens in the prompt/context sent to the model | Prompt tokens |
| Output Tokens | Tokens in the model's generated response | Completion tokens |
| Reasoning Tokens | Output tokens consumed by the model's extended reasoning process (e.g., o1/o3 models) | Thinking tokens |
| Cached Tokens | Input tokens served from a provider's cache, typically at reduced cost | Cache hits |
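Per-step Usage aggregating into a total (see Relationships below: usage is reported per Step and aggregated across all steps) can be sketched as a simple sum. Field names mirror the glossary terms; they are not the SDK's exact usage type:

```typescript
// Illustrative sketch: per-step Usage summed into a total.
// Field names mirror the glossary, not the SDK's exact usage type.

interface Usage {
  inputTokens: number;
  outputTokens: number;
  reasoningTokens: number;
  cachedInputTokens: number;
}

function addUsage(a: Usage, b: Usage): Usage {
  return {
    inputTokens: a.inputTokens + b.inputTokens,
    outputTokens: a.outputTokens + b.outputTokens,
    reasoningTokens: a.reasoningTokens + b.reasoningTokens,
    cachedInputTokens: a.cachedInputTokens + b.cachedInputTokens,
  };
}

// One Usage record per Step of a multi-step generation.
const steps: Usage[] = [
  { inputTokens: 100, outputTokens: 20, reasoningTokens: 0, cachedInputTokens: 0 },
  { inputTokens: 140, outputTokens: 35, reasoningTokens: 10, cachedInputTokens: 100 },
];

const totalUsage = steps.reduce(addUsage, {
  inputTokens: 0,
  outputTokens: 0,
  reasoningTokens: 0,
  cachedInputTokens: 0,
});
```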

Middleware

| Term | Definition | Aliases to avoid |
| --- | --- | --- |
| Middleware | A composable layer that intercepts and transforms model calls via transformParams, wrapGenerate, and wrapStream | Plugin, interceptor, hook, decorator |
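The middleware shape, with transformParams rewriting the call parameters and wrapGenerate intercepting the underlying generate call, can be sketched with simplified stand-in types (not the SDK's LanguageModelMiddleware):

```typescript
// Illustrative middleware sketch: transformParams rewrites parameters,
// wrapGenerate intercepts the underlying call. Types are simplified
// stand-ins, not the SDK's LanguageModelMiddleware.

interface Params {
  prompt: string;
  temperature?: number;
}
interface Result {
  text: string;
}
type DoGenerate = (params: Params) => Result;

interface Middleware {
  transformParams?: (params: Params) => Params;
  wrapGenerate?: (doGenerate: DoGenerate, params: Params) => Result;
}

function applyMiddleware(doGenerate: DoGenerate, mw: Middleware): DoGenerate {
  return (params) => {
    const transformed = mw.transformParams ? mw.transformParams(params) : params;
    return mw.wrapGenerate
      ? mw.wrapGenerate(doGenerate, transformed)
      : doGenerate(transformed);
  };
}

// A fake model, wrapped by middleware that clamps temperature
// and uppercases the generated text.
const fakeModel: DoGenerate = (p) => ({ text: `echo: ${p.prompt}` });
const wrapped = applyMiddleware(fakeModel, {
  transformParams: (p) => ({ ...p, temperature: Math.min(p.temperature ?? 1, 0.5) }),
  wrapGenerate: (doGenerate, p) => {
    const result = doGenerate(p);
    return { text: result.text.toUpperCase() };
  },
});

const out = wrapped({ prompt: "hi", temperature: 2 });
```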

Reasoning

| Term | Definition | Aliases to avoid |
| --- | --- | --- |
| Reasoning | Explicit thinking text from models that shows the model's thought process before responding | Thinking, chain-of-thought, inner monologue |
| Reasoning Effort | A parameter controlling a model's reasoning depth: provider-default, none, minimal, low, medium, high, xhigh | Think level, reasoning budget |

Relationships

- An AI Function uses exactly one Model Specification type
- A Provider implements one or more Model Specifications
- A Prompt contains an optional System Message and either a prompt string or an array of Messages
- A Message contains one or more Content Parts
- A Tool Call produces exactly one Tool Result
- A Step contains zero or more Tool Calls and ends with a Finish Reason
- Middleware wraps a Language Model to intercept doGenerate and doStream calls
- Usage is reported per Step and aggregated across all steps

Ambiguities

- "model" is used to mean both the abstract specification interface (LanguageModelV4) and a concrete provider implementation (OpenAIChatLanguageModel). When precision matters, use Model Specification for the interface and Provider (or the specific class name) for the implementation.
- "prompt" can refer to both the complete input object (system + messages/text) and the simple prompt string parameter. Use Prompt for the complete input and "prompt string" or "text prompt" when referring to the single-string shorthand.
- "output" can mean the model's raw text response or the output configuration parameter (text/object/array/enum). Use Output for the configuration and "generated text" or "response" for what the model produces.
- "context" can mean the user-defined state passed through steps or the model's context window (token limit). Use Context for the user-defined state object and "context window" for the token limit.
- "completion" appears in token accounting ("completion tokens") but should be avoided — use Output Tokens instead. The SDK does not use "completion" as a generation concept.


Labels: documentation (Improvements or additions to documentation), maintenance (CI, internal documentation, automations, etc.)
