
Anthropic Provider

The Anthropic provider contains language model support for the Anthropic Messages API.

Setup

The Anthropic provider is available in the @ai-sdk/anthropic module. You can install it with

pnpm add @ai-sdk/anthropic

Provider Instance

You can import the default provider instance anthropic from @ai-sdk/anthropic:


import { anthropic } from '@ai-sdk/anthropic';

If you need a customized setup, you can import createAnthropic from @ai-sdk/anthropic and create a provider instance with your settings:


import { createAnthropic } from '@ai-sdk/anthropic';

const anthropic = createAnthropic({

  // custom settings

});

You can customize the Anthropic provider instance with optional settings such as baseURL, apiKey, and headers.
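For example, a sketch of a customized instance using the commonly available settings; the URL and header values below are placeholders:

```typescript
import { createAnthropic } from '@ai-sdk/anthropic';

const anthropic = createAnthropic({
  // URL prefix for API calls, e.g. when routing through a proxy
  // (placeholder value):
  baseURL: 'https://api.anthropic.com/v1',
  // API key; defaults to the ANTHROPIC_API_KEY environment variable:
  apiKey: process.env.ANTHROPIC_API_KEY,
  // custom headers to include in requests (placeholder value):
  headers: { 'anthropic-beta': 'example-beta-feature' },
});
```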

Language Models

You can create models that call the Anthropic Messages API using the provider instance. The first argument is the model id, e.g. claude-3-haiku-20240307. Some models have multi-modal capabilities.


const model = anthropic('claude-3-haiku-20240307');

You can also use model id aliases, e.g. claude-sonnet-4-5, which resolve to the latest snapshot of a model.

You can use Anthropic language models to generate text with the generateText function:


import { anthropic } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const { text } = await generateText({

  model: anthropic('claude-3-haiku-20240307'),

  prompt: 'Write a vegetarian lasagna recipe for 4 people.',

});

Anthropic language models can also be used in the streamText function and support structured data generation with Output (see AI SDK Core).

The optional provider options available for Anthropic models (such as thinking, effort, and contextManagement) are described in the sections below.

Structured Outputs and Tool Input Streaming

Tool call streaming is enabled by default. You can opt out by setting the toolStreaming provider option to false.


import { anthropic } from '@ai-sdk/anthropic';

import { streamText, tool } from 'ai';

import { z } from 'zod';

const result = streamText({

  model: anthropic('claude-sonnet-4-20250514'),

  tools: {

    writeFile: tool({

      description: 'Write content to a file',

      inputSchema: z.object({

        path: z.string(),

        content: z.string(),

      }),

      execute: async ({ path, content }) => {

        // Implementation

        return { success: true };

      },

    }),

  },

  prompt: 'Write a short story to story.txt',

});
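To opt out, a minimal sketch that sets the toolStreaming provider option to false, with the same model and prompt as above:

```typescript
import { anthropic } from '@ai-sdk/anthropic';
import { streamText } from 'ai';

const result = streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  tools: {
    // ... the writeFile tool from the example above
  },
  prompt: 'Write a short story to story.txt',
  providerOptions: {
    anthropic: {
      // disable tool call streaming:
      toolStreaming: false,
    },
  },
});
```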

Effort

Anthropic introduced an effort option with claude-opus-4-5 that affects thinking, text responses, and function calls. Effort defaults to high; you can set it to medium or low to save tokens and reduce time-to-last-token (TTLT) latency.


import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const { text, usage } = await generateText({

  model: anthropic('claude-opus-4-5'),

  prompt: 'How many people will live in the world in 2040?',

  providerOptions: {

    anthropic: {

      effort: 'low',

    } satisfies AnthropicLanguageModelOptions,

  },

});

console.log(text); // resulting text

console.log(usage); // token usage

Fast Mode

Anthropic supports a speed option for claude-opus-4-6 that enables faster inference, with approximately 2.5x higher output token throughput.


import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const { text } = await generateText({

  model: anthropic('claude-opus-4-6'),

  prompt: 'Write a short poem about the sea.',

  providerOptions: {

    anthropic: {

      speed: 'fast',

    } satisfies AnthropicLanguageModelOptions,

  },

});

The speed option accepts 'fast' or 'standard' (default behavior).

Reasoning

Anthropic models support extended thinking, where Claude shows its reasoning process before providing a final answer.

Adaptive Thinking

For newer models (claude-sonnet-4-6, claude-opus-4-6, and later), use adaptive thinking. Claude automatically determines how much reasoning to use based on the complexity of the prompt.


import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const { text, reasoningText, reasoning } = await generateText({

  model: anthropic('claude-opus-4-6'),

  prompt: 'How many people will live in the world in 2040?',

  providerOptions: {

    anthropic: {

      thinking: { type: 'adaptive' },

    } satisfies AnthropicLanguageModelOptions,

  },

});

console.log(reasoningText); // reasoning text

console.log(reasoning); // reasoning details including redacted reasoning

console.log(text); // text response

You can combine adaptive thinking with the effort option to control how much reasoning Claude uses:


const { text } = await generateText({

  model: anthropic('claude-opus-4-6'),

  prompt: 'Invent a new holiday and describe its traditions.',

  providerOptions: {

    anthropic: {

      thinking: { type: 'adaptive' },

      effort: 'max', // 'low' | 'medium' | 'high' | 'max'

    } satisfies AnthropicLanguageModelOptions,

  },

});

Budget-Based Thinking

For earlier models (claude-opus-4-20250514, claude-sonnet-4-20250514, claude-sonnet-4-5-20250929), use type: 'enabled' with an explicit token budget:


import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const { text, reasoningText, reasoning } = await generateText({

  model: anthropic('claude-sonnet-4-5-20250929'),

  prompt: 'How many people will live in the world in 2040?',

  providerOptions: {

    anthropic: {

      thinking: { type: 'enabled', budgetTokens: 12000 },

    } satisfies AnthropicLanguageModelOptions,

  },

});

console.log(reasoningText); // reasoning text

console.log(reasoning); // reasoning details including redacted reasoning

console.log(text); // text response

See AI SDK UI: Chatbot for more details on how to integrate reasoning into your chatbot.

Context Management

Anthropic's Context Management feature allows you to automatically manage conversation context by clearing tool uses or thinking content when certain conditions are met. This helps optimize token usage and manage long conversations more efficiently.

You can configure context management using the contextManagement provider option:


import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const result = await generateText({

  model: anthropic('claude-sonnet-4-5-20250929'),

  prompt: 'Continue our conversation...',

  providerOptions: {

    anthropic: {

      contextManagement: {

        edits: [

          {

            type: 'clear_tool_uses_20250919',

            trigger: { type: 'input_tokens', value: 10000 },

            keep: { type: 'tool_uses', value: 5 },

            clearAtLeast: { type: 'input_tokens', value: 1000 },

            clearToolInputs: true,

            excludeTools: ['important_tool'],

          },

        ],

      },

    } satisfies AnthropicLanguageModelOptions,

  },

});

// Check what was cleared

console.log(result.providerMetadata?.anthropic?.contextManagement);

Context Editing

Context editing strategies selectively remove specific content types from earlier in the conversation to reduce token usage without losing the overall conversation flow.

Clear Tool Uses

The clear_tool_uses_20250919 edit type removes old tool call/result pairs from the conversation history:
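A sketch of the configuration, reusing the fields from the contextManagement example above:

```typescript
import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-sonnet-4-5-20250929'),
  prompt: 'Continue our conversation...',
  providerOptions: {
    anthropic: {
      contextManagement: {
        edits: [
          {
            type: 'clear_tool_uses_20250919',
            // clear once the input exceeds 10k tokens:
            trigger: { type: 'input_tokens', value: 10000 },
            // keep the 5 most recent tool uses:
            keep: { type: 'tool_uses', value: 5 },
          },
        ],
      },
    } satisfies AnthropicLanguageModelOptions,
  },
});
```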

Clear Thinking

The clear_thinking_20251015 edit type removes thinking/reasoning blocks from earlier turns, keeping only the most recent ones:


const result = await generateText({

  model: anthropic('claude-opus-4-20250514'),

  prompt: 'Continue reasoning...',

  providerOptions: {

    anthropic: {

      thinking: { type: 'enabled', budgetTokens: 12000 },

      contextManagement: {

        edits: [

          {

            type: 'clear_thinking_20251015',

            keep: { type: 'thinking_turns', value: 2 },

          },

        ],

      },

    } satisfies AnthropicLanguageModelOptions,

  },

});

Compaction

The compact_20260112 edit type automatically summarizes earlier conversation context when token limits are reached. This is useful for long-running conversations where you want to preserve the essence of earlier exchanges while staying within token limits.


import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';

import { streamText } from 'ai';

const result = streamText({

  model: anthropic('claude-opus-4-6'),

  messages: conversationHistory,

  providerOptions: {

    anthropic: {

      contextManagement: {

        edits: [

          {

            type: 'compact_20260112',

            trigger: {

              type: 'input_tokens',

              value: 50000, // trigger compaction when input exceeds 50k tokens

            },

            instructions:

              'Summarize the conversation concisely, preserving key decisions and context.',

            pauseAfterCompaction: false,

          },

        ],

      },

    } satisfies AnthropicLanguageModelOptions,

  },

});

Configuration: the trigger, instructions, and pauseAfterCompaction fields shown above control when compaction runs, how the summary is written, and whether the response pauses after compaction.

When compaction occurs, the model generates a summary of the earlier context. This summary appears as a text block with special provider metadata.

Detecting Compaction in Streams

When using streamText, you can detect compaction summaries by checking the providerMetadata on text-start events:


for await (const part of result.fullStream) {

  switch (part.type) {

    case 'text-start': {

      const isCompaction =

        part.providerMetadata?.anthropic?.type === 'compaction';

      if (isCompaction) {

        console.log('[COMPACTION SUMMARY START]');

      }

      break;

    }

    case 'text-delta': {

      process.stdout.write(part.text);

      break;

    }

  }

}

Compaction in UI Applications

When using useChat or other UI hooks, compaction summaries appear as regular text parts with providerMetadata. You can style them differently in your UI:


{

  message.parts.map((part, index) => {

    if (part.type === 'text') {

      const isCompaction =

        (part.providerMetadata?.anthropic as { type?: string } | undefined)

          ?.type === 'compaction';

      if (isCompaction) {

        return (

          <div

            key={index}

            className="bg-yellow-100 border-l-4 border-yellow-500 p-2"

          >

            <span className="font-bold">[Compaction Summary]</span>

            <div>{part.text}</div>

          </div>

        );

      }

      return <div key={index}>{part.text}</div>;

    }

  })

}

Applied Edits Metadata

After generation, you can check which edits were applied in the provider metadata:


const metadata = result.providerMetadata?.anthropic?.contextManagement;

if (metadata?.appliedEdits) {

  metadata.appliedEdits.forEach(edit => {

    if (edit.type === 'clear_tool_uses_20250919') {

      console.log(`Cleared ${edit.clearedToolUses} tool uses`);

      console.log(`Freed ${edit.clearedInputTokens} tokens`);

    } else if (edit.type === 'clear_thinking_20251015') {

      console.log(`Cleared ${edit.clearedThinkingTurns} thinking turns`);

      console.log(`Freed ${edit.clearedInputTokens} tokens`);

    } else if (edit.type === 'compact_20260112') {

      console.log('Compaction was applied');

    }

  });

}

For more details, see Anthropic's Context Management documentation.

Cache Control

On messages and message parts, you can use the providerOptions property to set cache control breakpoints: set the anthropic property of the providerOptions object to { cacheControl: { type: 'ephemeral' } }.

The cache creation input tokens are then returned in the providerMetadata object for generateText, again under the anthropic property. When you use streamText, the response contains a promise that resolves to the metadata. Alternatively, you can receive it in the onFinish callback.


import { anthropic } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const errorMessage = '... long error message ...';

const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  messages: [

    {

      role: 'user',

      content: [

        { type: 'text', text: 'You are a JavaScript expert.' },

        {

          type: 'text',

          text: `Error message: ${errorMessage}`,

          providerOptions: {

            anthropic: { cacheControl: { type: 'ephemeral' } },

          },

        },

        { type: 'text', text: 'Explain the error message.' },

      ],

    },

  ],

});

console.log(result.text);

console.log(result.providerMetadata?.anthropic);

// e.g. { cacheCreationInputTokens: 2118 }
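With streamText, a sketch of the two ways described above to read the cache metadata, via the onFinish callback or the resolved providerMetadata promise:

```typescript
import { anthropic } from '@ai-sdk/anthropic';
import { streamText } from 'ai';

const result = streamText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Explain the error message.',
  onFinish({ providerMetadata }) {
    // option 1: receive the metadata in the onFinish callback
    console.log(providerMetadata?.anthropic);
  },
});

// option 2: consume the stream, then await the metadata promise
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
console.log((await result.providerMetadata)?.anthropic);
```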

You can also use cache control on system messages by providing multiple system messages at the head of your messages array:


const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  messages: [

    {

      role: 'system',

      content: 'Cached system message part',

      providerOptions: {

        anthropic: { cacheControl: { type: 'ephemeral' } },

      },

    },

    {

      role: 'system',

      content: 'Uncached system message part',

    },

    {

      role: 'user',

      content: 'User prompt',

    },

  ],

});

Cache control for tools:


const result = await generateText({

  model: anthropic('claude-haiku-4-5'),

  tools: {

    cityAttractions: tool({

      inputSchema: z.object({ city: z.string() }),

      providerOptions: {

        anthropic: {

          cacheControl: { type: 'ephemeral' },

        },

      },

    }),

  },

  messages: [

    {

      role: 'user',

      content: 'User prompt',

    },

  ],

});

Longer cache TTL

In addition to the default 5-minute cache duration, Anthropic also supports a longer 1-hour cache TTL.

Here's an example:


const result = await generateText({

  model: anthropic('claude-haiku-4-5'),

  messages: [

    {

      role: 'user',

      content: [

        {

          type: 'text',

          text: 'Long cached message',

          providerOptions: {

            anthropic: {

              cacheControl: { type: 'ephemeral', ttl: '1h' },

            },

          },

        },

      ],

    },

  ],

});

Limitations

The minimum cacheable prompt length depends on the model (on the order of 1024–2048 tokens; see Anthropic's documentation for exact values).

Shorter prompts cannot be cached, even if marked with cacheControl. Any request to cache fewer than this number of tokens will be processed without caching.

For more on prompt caching with Anthropic, see Anthropic's Cache Control documentation.

Because the UIMessage type (used by AI SDK UI hooks like useChat) does not support the providerOptions property, you can use convertToModelMessages first before passing the messages to functions like generateText or streamText. For more details on providerOptions usage, see here.

Bash Tool

The Bash Tool allows running bash commands. Here's how to create and use it:


const bashTool = anthropic.tools.bash_20250124({

  execute: async ({ command, restart }) => {

    // Implement your bash command execution logic here

    // Return the result of the command execution

  },

});

Parameters: command (the bash command to run) and restart (set to restart the shell session).

Two versions are available: bash_20250124 (recommended) and bash_20241022. Only certain Claude versions are supported.
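A sketch of wiring the tool into a generateText call; the node:child_process execution shown here is illustrative only and should be sandboxed in practice:

```typescript
import { execSync } from 'node:child_process';
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const bashTool = anthropic.tools.bash_20250124({
  execute: async ({ command, restart }) => {
    if (restart) {
      // reset any session state your command runner keeps
      return 'shell restarted';
    }
    // WARNING: run model-generated commands in a sandbox
    return execSync(command, { encoding: 'utf8' });
  },
});

const result = await generateText({
  model: anthropic('claude-sonnet-4-20250514'),
  tools: { bash: bashTool },
  prompt: 'List the files in the current directory.',
});
```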

Memory Tool

The Memory Tool allows Claude to use a local memory, e.g. in the filesystem. Here's how to create it:


const memory = anthropic.tools.memory_20250818({

  execute: async action => {

    // Implement your memory command execution logic here

    // Return the result of the command execution

  },

});

Only certain Claude versions are supported.

Text Editor Tool

The Text Editor Tool provides functionality for viewing and editing text files.


const tools = {

  str_replace_based_edit_tool: anthropic.tools.textEditor_20250728({

    maxCharacters: 10000, // optional

    async execute({ command, path, old_str, new_str, insert_text }) {

      // ...

    },

  }),

} satisfies ToolSet;

Different models support different versions of the tool:

Note: textEditor_20250429 is deprecated. Use textEditor_20250728 instead.

Parameters: command, path, old_str, new_str, and insert_text (depending on the command).

Computer Tool

The Computer Tool enables control of keyboard and mouse actions on a computer:


const computerTool = anthropic.tools.computer_20251124({

  displayWidthPx: 1920,

  displayHeightPx: 1080,

  displayNumber: 0, // Optional, for X11 environments

  enableZoom: true, // Optional, enables the zoom action

  execute: async ({ action, coordinate, text, region }) => {

    // Implement your computer control logic here

    // Return the result of the action

    // Example code:

    switch (action) {

      case 'screenshot': {

        // multipart result:

        return {

          type: 'image',

          data: fs

            .readFileSync('./data/screenshot-editor.png')

            .toString('base64'),

        };

      }

      case 'zoom': {

        // region is [x1, y1, x2, y2] defining the area to zoom into

        return {

          type: 'image',

          data: fs.readFileSync('./data/zoomed-region.png').toString('base64'),

        };

      }

      default: {

        console.log('Action:', action);

        console.log('Coordinate:', coordinate);

        console.log('Text:', text);

        return `executed ${action}`;

      }

    }

  },

  // map to tool result content for LLM consumption:

  toModelOutput({ output }) {

    return typeof output === 'string'

      ? [{ type: 'text', text: output }]

      : [{ type: 'image', data: output.data, mediaType: 'image/png' }];

  },

});

Use computer_20251124 for Claude Opus 4.5 which supports the zoom action. Use computer_20250124 for Claude Sonnet 4.5, Haiku 4.5, Opus 4.1, Sonnet 4, Opus 4, and Sonnet 3.7.

Parameters: action, coordinate, text, and region (for the zoom action).

Web Search Tool

Anthropic provides a provider-defined web search tool that gives Claude direct access to real-time web content, allowing it to answer questions with up-to-date information beyond its knowledge cutoff.

You can enable web search using the provider-defined web search tool:


import { anthropic } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const webSearchTool = anthropic.tools.webSearch_20250305({

  maxUses: 5,

});

const result = await generateText({

  model: anthropic('claude-opus-4-20250514'),

  prompt: 'What are the latest developments in AI?',

  tools: {

    web_search: webSearchTool,

  },

});

Configuration Options

The web search tool supports several configuration options:


const webSearchTool = anthropic.tools.webSearch_20250305({

  maxUses: 3,

  allowedDomains: ['techcrunch.com', 'wired.com'],

  blockedDomains: ['example-spam-site.com'],

  userLocation: {

    type: 'approximate',

    country: 'US',

    region: 'California',

    city: 'San Francisco',

    timezone: 'America/Los_Angeles',

  },

});

const result = await generateText({

  model: anthropic('claude-opus-4-20250514'),

  prompt: 'Find local news about technology',

  tools: {

    web_search: webSearchTool,

  },

});

Web Fetch Tool

Anthropic provides a provider-defined web fetch tool that allows Claude to retrieve content from specific URLs. This is useful when you want Claude to analyze or reference content from a particular webpage or document.

You can enable web fetch using the provider-defined web fetch tool:


import { anthropic } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const result = await generateText({

  model: anthropic('claude-sonnet-4-0'),

  prompt:

    'What is this page about? https://en.wikipedia.org/wiki/Maglemosian_culture',

  tools: {

    web_fetch: anthropic.tools.webFetch_20250910({ maxUses: 1 }),

  },

});

Tool Search

Anthropic provides provider-defined tool search tools that enable Claude to work with hundreds or thousands of tools by dynamically discovering and loading them on demand. Instead of loading all tool definitions into the context window upfront, Claude searches your tool catalog and loads only the tools it needs.

There are two variants: toolSearchBm25_20251119, which uses BM25 keyword search, and toolSearchRegex_20251119, which matches tools with regular expressions.

Basic Usage


import { anthropic } from '@ai-sdk/anthropic';

import { generateText, tool } from 'ai';

import { z } from 'zod';

const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  prompt: 'What is the weather in San Francisco?',

  tools: {

    toolSearch: anthropic.tools.toolSearchBm25_20251119(),

    get_weather: tool({

      description: 'Get the current weather at a specific location',

      inputSchema: z.object({

        location: z.string().describe('The city and state'),

      }),

      execute: async ({ location }) => ({

        location,

        temperature: 72,

        condition: 'Sunny',

      }),

      // Defer tool here - Claude discovers these via the tool search tool

      providerOptions: {

        anthropic: { deferLoading: true },

      },

    }),

  },

});

For more precise tool matching, you can use the regex variant:


const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  prompt: 'Get the weather data',

  tools: {

    toolSearch: anthropic.tools.toolSearchRegex_20251119(),

    // ... deferred tools

  },

});

Claude will construct regex patterns like weather|temperature|forecast to find matching tools.
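For intuition, the effect of such a pattern can be sketched in plain TypeScript; the actual matching runs on Anthropic's side, so this is illustrative only:

```typescript
// How a pattern like weather|temperature|forecast selects tool names.
const toolNames = ['get_weather', 'get_forecast', 'send_email'];
const pattern = /weather|temperature|forecast/;
const matches = toolNames.filter(name => pattern.test(name));
// matches: ['get_weather', 'get_forecast']
```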

You can implement your own tool search logic (e.g., using embeddings or semantic search) by returning tool-reference content blocks via toModelOutput:


import { anthropic } from '@ai-sdk/anthropic';

import { generateText, tool } from 'ai';

import { z } from 'zod';

const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  prompt: 'What is the weather in San Francisco?',

  tools: {

    // Custom search tool

    searchTools: tool({

      description: 'Search for tools by keyword',

      inputSchema: z.object({ query: z.string() }),

      execute: async ({ query }) => {

        // Your custom search logic (embeddings, fuzzy match, etc.)

        const allTools = ['get_weather', 'get_forecast', 'get_temperature'];

        return allTools.filter(name => name.includes(query.toLowerCase()));

      },

      toModelOutput: ({ output }) => ({

        type: 'content',

        value: (output as string[]).map(toolName => ({

          type: 'custom' as const,

          providerOptions: {

            anthropic: {

              type: 'tool-reference',

              toolName,

            },

          },

        })),

      }),

    }),

    // Deferred tools

    get_weather: tool({

      description: 'Get the current weather',

      inputSchema: z.object({ location: z.string() }),

      execute: async ({ location }) => ({ location, temperature: 72 }),

      providerOptions: {

        anthropic: { deferLoading: true },

      },

    }),

  },

});

This sends tool_reference blocks to Anthropic, which loads the corresponding deferred tool schemas into Claude's context.

MCP Connectors

Anthropic supports connecting to remote MCP (Model Context Protocol) servers as part of a generation.

You can enable this feature with the mcpServers provider option:


import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  prompt: `Call the echo tool with "hello world". what does it respond with back?`,

  providerOptions: {

    anthropic: {

      mcpServers: [

        {

          type: 'url',

          name: 'echo',

          url: 'https://echo.mcp.inevitable.fyi/mcp',

          // optional: authorization token

          authorizationToken: mcpAuthToken,

          // optional: tool configuration

          toolConfiguration: {

            enabled: true,

            allowedTools: ['echo'],

          },

        },

      ],

    } satisfies AnthropicLanguageModelOptions,

  },

});

The tool calls and results are dynamic, i.e. the input and output schemas are not known in advance.

Configuration Options

The web fetch tool supports several configuration options:
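As a sketch, the options below mirror Anthropic's web fetch parameters; treat the exact option names (allowedDomains, citations, maxContentTokens) as assumptions to verify against the current API reference:

```typescript
import { anthropic } from '@ai-sdk/anthropic';

const webFetchTool = anthropic.tools.webFetch_20250910({
  maxUses: 3,
  // assumed option names mirroring Anthropic's web fetch parameters:
  allowedDomains: ['en.wikipedia.org'],
  citations: { enabled: true },
  maxContentTokens: 50000,
});
```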

Error Handling

Web search errors are handled differently depending on whether you're using streaming or non-streaming:

**Non-streaming (generateText):** Web search errors throw exceptions that you can catch:


try {

  const result = await generateText({

    model: anthropic('claude-opus-4-20250514'),

    prompt: 'Search for something',

    tools: {

      web_search: webSearchTool,

    },

  });

} catch (error) {

  if (error instanceof Error && error.message.includes('Web search failed')) {

    console.log('Search error:', error.message);

    // Handle search error appropriately

  }

}

**Streaming (streamText):** Web search errors are delivered as error parts in the stream:


const result = streamText({

  model: anthropic('claude-opus-4-20250514'),

  prompt: 'Search for something',

  tools: {

    web_search: webSearchTool,

  },

});

for await (const part of result.fullStream) {

  if (part.type === 'error') {

    console.log('Search error:', part.error);

    // Handle search error appropriately

  }

}

Code Execution

Anthropic provides a provider-defined code execution tool that gives Claude direct access to a real Python environment, allowing it to execute code to inform its responses.

You can enable code execution using the provider-defined code execution tool:


import { anthropic } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const codeExecutionTool = anthropic.tools.codeExecution_20260120();

const result = await generateText({

  model: anthropic('claude-opus-4-20250514'),

  prompt:

    'Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]',

  tools: {

    code_execution: codeExecutionTool,

  },

});

Three versions are available: codeExecution_20260120 (recommended, does not require a beta header, supports Claude Opus 4.6, Sonnet 4.6, Sonnet 4.5, and Opus 4.5), codeExecution_20250825 (supports Python and Bash with enhanced file operations), and codeExecution_20250522 (supports Bash only).

Error Handling

Code execution errors are handled differently depending on whether you're using streaming or non-streaming:

**Non-streaming (generateText):** Code execution errors are delivered as tool result parts in the response:


const result = await generateText({

  model: anthropic('claude-opus-4-20250514'),

  prompt: 'Execute some Python script',

  tools: {

    code_execution: codeExecutionTool,

  },

});

const toolErrors = result.content?.filter(

  content => content.type === 'tool-error',

);

toolErrors?.forEach(error => {

  console.error('Tool execution error:', {

    toolName: error.toolName,

    toolCallId: error.toolCallId,

    error: error.error,

  });

});

**Streaming (streamText):** Code execution errors are delivered as error parts in the stream:


const result = streamText({

  model: anthropic('claude-opus-4-20250514'),

  prompt: 'Execute some Python script',

  tools: {

    code_execution: codeExecutionTool,

  },

});

for await (const part of result.fullStream) {

  if (part.type === 'error') {

    console.log('Code execution error:', part.error);

    // Handle code execution error appropriately

  }

}

Programmatic Tool Calling

Programmatic Tool Calling allows Claude to write code that calls your tools programmatically within a code execution container, rather than requiring round trips through the model for each tool invocation. This reduces latency for multi-tool workflows and decreases token consumption.

To enable programmatic tool calling, use the allowedCallers provider option on tools that you want to be callable from within code execution:


import {

  anthropic,

  forwardAnthropicContainerIdFromLastStep,

} from '@ai-sdk/anthropic';

import { generateText, tool, stepCountIs } from 'ai';

import { z } from 'zod';

const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  stopWhen: stepCountIs(10),

  prompt:

    'Get the weather for Tokyo, Sydney, and London, then calculate the average temperature.',

  tools: {

    code_execution: anthropic.tools.codeExecution_20260120(),

    getWeather: tool({

      description: 'Get current weather data for a city.',

      inputSchema: z.object({

        city: z.string().describe('Name of the city'),

      }),

      execute: async ({ city }) => {

        // Your weather API implementation

        return { temp: 22, condition: 'Sunny' };

      },

      // Enable this tool to be called from within code execution

      providerOptions: {

        anthropic: {

          allowedCallers: ['code_execution_20260120'],

        },

      },

    }),

  },

  // Propagate container ID between steps for code execution continuity

  prepareStep: forwardAnthropicContainerIdFromLastStep,

});

In this flow:

  1. Claude writes Python code that calls your getWeather tool multiple times in parallel
  2. The SDK automatically executes your tool and returns results to the code execution container
  3. Claude processes the results in code and generates the final response

Programmatic tool calling requires claude-sonnet-4-6, claude-sonnet-4-5, claude-opus-4-6, or claude-opus-4-5 models and uses the code_execution_20260120 or code_execution_20250825 tool.

Container Persistence

When using programmatic tool calling across multiple steps, you need to preserve the container ID between steps using prepareStep. You can use the forwardAnthropicContainerIdFromLastStep helper function to do this automatically. The container ID is available in providerMetadata.anthropic.container.id after each step completes.

Agent Skills

Anthropic Agent Skills enable Claude to perform specialized tasks like document processing (PPTX, DOCX, PDF, XLSX) and data analysis. Skills run in a sandboxed container and require the code execution tool to be enabled.

Using Built-in Skills

Anthropic provides several built-in skills, including document skills such as pptx, docx, xlsx, and pdf.

To use skills, you need to:

  1. Enable the code execution tool
  2. Specify the container with skills in providerOptions

import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';

import { generateText } from 'ai';

const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  tools: {

    code_execution: anthropic.tools.codeExecution_20260120(),

  },

  prompt: 'Create a presentation about renewable energy with 5 slides',

  providerOptions: {

    anthropic: {

      container: {

        skills: [

          {

            type: 'anthropic',

            skillId: 'pptx',

            version: 'latest', // optional

          },

        ],

      },

    } satisfies AnthropicLanguageModelOptions,

  },

});

Custom Skills

You can also use custom skills by specifying type: 'custom':


const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  tools: {

    code_execution: anthropic.tools.codeExecution_20260120(),

  },

  prompt: 'Use my custom skill to process this data',

  providerOptions: {

    anthropic: {

      container: {

        skills: [

          {

            type: 'custom',

            skillId: 'my-custom-skill-id',

            version: '1.0', // optional

          },

        ],

      },

    } satisfies AnthropicLanguageModelOptions,

  },

});

Skills use progressive context loading and execute within a sandboxed container with code execution capabilities.

PDF support

Anthropic Claude models support reading PDF files. You can pass PDF files as part of the message content using the file type:

Option 1: URL-based PDF document


const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  messages: [

    {

      role: 'user',

      content: [

        {

          type: 'text',

          text: 'What is an embedding model according to this document?',

        },

        {

          type: 'file',

          data: new URL(

            'https://github.com/vercel/ai/blob/main/examples/ai-functions/data/ai.pdf?raw=true',

          ),

          mediaType: 'application/pdf',

        },

      ],

    },

  ],

});

Option 2: Base64-encoded PDF document


const result = await generateText({

  model: anthropic('claude-sonnet-4-5'),

  messages: [

    {

      role: 'user',

      content: [

        {

          type: 'text',

          text: 'What is an embedding model according to this document?',

        },

        {

          type: 'file',

          data: fs.readFileSync('./data/ai.pdf'),

          mediaType: 'application/pdf',

        },

      ],

    },

  ],

});

The model will have access to the contents of the PDF file and respond to questions about it. The PDF file should be passed using the data field, and the mediaType should be set to 'application/pdf'.

Model Capabilities

Capabilities vary by model across image input, object generation, tool usage, computer use, web search, tool search, and compaction. Popular models include:

claude-opus-4-6
claude-sonnet-4-6
claude-opus-4-5
claude-haiku-4-5
claude-sonnet-4-5
claude-opus-4-1
claude-opus-4-0
claude-sonnet-4-0

The list above covers popular models; see the Anthropic docs for a full list of available models. You can also pass any available provider model id as a string if needed.