Documentation:
ai-sdk.dev/docs/agents...
Learn how to add memory to your agents.
Connect to any Open Responses compatible API.
Learn how to build subagents with the AI SDK.
ai-sdk.dev/docs/agents...
Thank you to everyone who contributed to AI SDK 6.
Read the full AI SDK 6 announcement:
vercel.com/blog/ai-sdk-6
Upgrading from AI SDK 5? Run codemods to migrate automatically with minimal code changes.
`generateImage` now supports image editing by accepting reference images alongside your text prompt.
Native support for reranking with the new `rerank` function.
Reorder search results based on relevance to pass only the most relevant documents to the model.
DevTools gives you full visibility into your LLM calls and agents.
Inspect each step of any call including input, output, model configuration, token usage, timing, and raw provider requests.
Combining tool calling with structured output no longer requires chaining generateText and generateObject together.
This makes it simple to build agents that return structured output at the end of the tool loop.
Full MCP support is now stable in AI SDK 6: HTTP transport, OAuth authentication, resources, elicitation, and experimental support for prompts.
Tools now natively support human approval before execution with `needsApproval`.
Pass a function to dynamically require approval.
The new Agent abstraction (ToolLoopAgent) lets you define your agent once and use it everywhere.
Your agent definition becomes the single source of truth for end-to-end type safety, from tools to UI components.
AI SDK 6
Introducing agents, tool execution approval, full MCP support, tool calling with structured output, DevTools, reranking, standard JSON schema support, provider tools, image editing, and so much more.
vercel.com/blog/ai-sdk-6
With programmatic tool calling, Claude can call your tools from a code execution environment, keeping intermediate results out of context.
This can significantly reduce token usage and cost.
Access your context from within the onFinish callback.
// @ai-sdk/anthropic@3.0.0-beta.77 - Context management support
import { anthropic, AnthropicProviderOptions } from "@ai-sdk/anthropic";
import { generateText } from "ai";

const result = await generateText({
  model: anthropic("claude-sonnet-4-5"),
  messages,
  providerOptions: {
    anthropic: {
      contextManagement: {
        edits: [
          {
            type: "clear_tool_uses_20250919",
            trigger: { type: "input_tokens", value: 10000 },
            keep: { type: "tool_uses", value: 5 },
            clearAtLeast: { type: "input_tokens", value: 1000 },
            clearToolInputs: true,
            excludeTools: ["important_tool"],
          },
        ],
      },
    } satisfies AnthropicProviderOptions,
  },
});
Automatically clear conversation history when approaching token limits while preserving recent context with Anthropic.
Use OpenAI's apply patch tool to let GPT-5.2 create, update, and delete files using structured diffs.
Use Anthropic's tool search to give your agent hundreds of tools without filling its context window.
Learn how to build type-safe applications on top of your agents with @nicoalbanese10 from the AI SDK core team.
www.youtube.com/watch?v=ZRs...
20,000 stars on GitHub.
Thank you to everyone building with the AI SDK.