Use tools

Tools are a way to encapsulate a function and its input schema in a way that can be passed to a chat model that supports tool calling. This allows the model to request the execution of this function with specific inputs. This guide shows how you can create tools and use them in your graphs.

Define simple tools

To create tools, you can use the @tool decorator or vanilla Python functions.

Using the @tool decorator:

```python
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b
```

Alternatively, you can use vanilla Python functions. This requires LangGraph's prebuilt [ToolNode](../../reference/agents/#langgraph.prebuilt.tool%5Fnode.ToolNode " ToolNode") or agent, which automatically converts the functions to LangChain tools:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b
```

Customize tools

For more control over tool behavior, use the @tool decorator:

API Reference: tool

```python
from langchain_core.tools import tool

@tool("multiply_tool", parse_docstring=True)
def multiply(a: int, b: int) -> int:
    """Multiply two numbers.

    Args:
        a: First operand
        b: Second operand
    """
    return a * b
```

You can also define a custom input schema using Pydantic:

```python
from langchain_core.tools import tool
from pydantic import BaseModel, Field

class MultiplyInputSchema(BaseModel):
    """Multiply two numbers"""
    a: int = Field(description="First operand")
    b: int = Field(description="Second operand")

@tool("multiply_tool", args_schema=MultiplyInputSchema)
def multiply(a: int, b: int) -> int:
    return a * b
```

For additional customization, refer to the custom tools guide.

Hide arguments from the model

Some tools require runtime-only arguments (e.g., user ID or session context) that should not be controllable by the model.

You can put these arguments in the state or config of the agent, and access this information inside the tool:

API Reference: tool | RunnableConfig | InjectedState

```python
from typing import Annotated

from langchain_core.tools import tool
from langchain_core.runnables import RunnableConfig
from langgraph.prebuilt import InjectedState
from langgraph.graph import MessagesState

@tool
def my_tool(
    # This will be populated by an LLM
    tool_arg: str,
    # access information that's dynamically updated inside the agent
    state: Annotated[MessagesState, InjectedState],
    # access static data that is passed at agent invocation
    config: RunnableConfig,
) -> str:
    """My tool."""
    do_something_with_state(state["messages"])
    do_something_with_config(config)
    ...
```
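Only tool_arg is exposed to the model; the state and config parameters are stripped from the model-facing schema. A quick way to verify this (assuming a recent langchain-core, which exposes a tool_call_schema property):

```python
# Prints a JSON schema containing only `tool_arg`; the injected
# `state` and `config` parameters are hidden from the LLM.
print(my_tool.tool_call_schema.model_json_schema())
```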

Access config

You can provide static information to the graph at runtime, like a user_id or API credentials. This information can be accessed inside the tools through a special parameter annotation, RunnableConfig:

API Reference: RunnableConfig | tool

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool

@tool
def get_user_info(
    config: RunnableConfig,
) -> str:
    """Look up user info."""
    user_id = config["configurable"].get("user_id")
    return "User is John Smith" if user_id == "user_123" else "Unknown user"
```

Access config in tools

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

def get_user_info(
    config: RunnableConfig,
) -> str:
    """Look up user info."""
    user_id = config["configurable"].get("user_id")
    return "User is John Smith" if user_id == "user_123" else "Unknown user"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_info],
)

agent.invoke(
    {"messages": [{"role": "user", "content": "look up user information"}]},
    config={"configurable": {"user_id": "user_123"}}
)
```

Short-term memory

LangGraph allows agents to access and update their short-term memory (state) inside the tools.

Read state

To access the graph state inside the tools, you can use a special parameter annotation — [InjectedState](../../reference/agents/#langgraph.prebuilt.tool%5Fnode.InjectedState " InjectedState"):

API Reference: tool | InjectedState

```python
from typing import Annotated

from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState
from langgraph.prebuilt.chat_agent_executor import AgentState

class CustomState(AgentState):
    user_id: str

@tool
def get_user_info(
    state: Annotated[CustomState, InjectedState]
) -> str:
    """Look up user info."""
    user_id = state["user_id"]
    return "User is John Smith" if user_id == "user_123" else "Unknown user"
```

Access state in tools

```python
from typing import Annotated

from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState, create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState

class CustomState(AgentState):
    user_id: str

@tool
def get_user_info(
    state: Annotated[CustomState, InjectedState]
) -> str:
    """Look up user info."""
    user_id = state["user_id"]
    return "User is John Smith" if user_id == "user_123" else "Unknown user"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_info],
    state_schema=CustomState,
)

agent.invoke({
    "messages": "look up user information",
    "user_id": "user_123"
})
```

Update state

You can return state updates directly from the tools. This is useful for persisting intermediate results or making information accessible to subsequent tools or prompts.

API Reference: Command | tool | InjectedToolCallId

```python
from typing import Annotated

from langchain_core.messages import ToolMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.graph import MessagesState
from langgraph.types import Command

class CustomState(MessagesState):
    user_name: str

@tool
def update_user_info(
    tool_call_id: Annotated[str, InjectedToolCallId],
    config: RunnableConfig
) -> Command:
    """Look up and update user info."""
    user_id = config["configurable"].get("user_id")
    name = "John Smith" if user_id == "user_123" else "Unknown user"
    return Command(update={
        "user_name": name,
        # update the message history
        "messages": [
            ToolMessage(
                "Successfully looked up user information",
                tool_call_id=tool_call_id
            )
        ]
    })
```

Update state from tools

This is an example of using the prebuilt agent with a tool that can update graph state.

```python
from typing import Annotated

from langchain_core.tools import tool, InjectedToolCallId
from langchain_core.runnables import RunnableConfig
from langchain_core.messages import ToolMessage
from langgraph.prebuilt import InjectedState, create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.types import Command

class CustomState(AgentState):
    user_name: str

@tool
def update_user_info(
    tool_call_id: Annotated[str, InjectedToolCallId],
    config: RunnableConfig
) -> Command:
    """Look up and update user info."""
    user_id = config["configurable"].get("user_id")
    name = "John Smith" if user_id == "user_123" else "Unknown user"
    return Command(update={
        "user_name": name,
        # update the message history
        "messages": [
            ToolMessage(
                "Successfully looked up user information",
                tool_call_id=tool_call_id
            )
        ]
    })

def greet(
    state: Annotated[CustomState, InjectedState]
) -> str:
    """Use this to greet the user once you found their info."""
    user_name = state["user_name"]
    return f"Hello {user_name}!"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[update_user_info, greet],
    state_schema=CustomState
)

agent.invoke(
    {"messages": [{"role": "user", "content": "greet the user"}]},
    config={"configurable": {"user_id": "user_123"}}
)
```

Important

If you want to use tools that return Command and update graph state, you can either use prebuilt [create_react_agent](../../reference/agents/#langgraph.prebuilt.chat%5Fagent%5Fexecutor.create%5Freact%5Fagent " create_react_agent") / [ToolNode](../../reference/agents/#langgraph.prebuilt.tool%5Fnode.ToolNode " ToolNode") components, or implement your own tool-executing node that collects Command objects returned by the tools and returns a list of them, e.g.:

```python
def call_tools(state):
    ...
    commands = [tools_by_name[tool_call["name"]].invoke(tool_call) for tool_call in tool_calls]
    return commands
```
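For illustration, here is a minimal sketch of such a node (not the prebuilt implementation; `my_tools` is a hypothetical list of tools that all return `Command`):

```python
from langgraph.types import Command

# Hypothetical registry mapping tool names to tools that return Command
tools_by_name = {t.name: t for t in my_tools}

def call_tools(state) -> list[Command]:
    # The model's requested tool calls live on the last AI message
    tool_calls = state["messages"][-1].tool_calls
    # Invoking a tool with the tool call dict produces its Command
    commands = [
        tools_by_name[tool_call["name"]].invoke(tool_call)
        for tool_call in tool_calls
    ]
    # Returning the list of Command objects applies each state update
    return commands
```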

Long-term memory

Use long-term memory to store user-specific or application-specific data across conversations. This is useful for applications like chatbots, where you want to remember user preferences or other information.

To use long-term memory, you need to:

  1. Configure a store to persist data across invocations.
  2. Use the [get_store](../../reference/config/#langgraph.config.get%5Fstore " get_store") function to access the store from within tools or prompts.

Read

API Reference: RunnableConfig | tool | StateGraph | get_store

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.graph import StateGraph
from langgraph.config import get_store

@tool
def get_user_info(config: RunnableConfig) -> str:
    """Look up user info."""
    # Same as that provided to `builder.compile(store=store)`
    # or `create_react_agent`
    store = get_store()
    user_id = config["configurable"].get("user_id")
    user_info = store.get(("users",), user_id)
    return str(user_info.value) if user_info else "Unknown user"

builder = StateGraph(...)
...
graph = builder.compile(store=store)
```

Access long-term memory

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.config import get_store
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()  # (1)!

store.put(  # (2)!
    ("users",),  # (3)!
    "user_123",  # (4)!
    {
        "name": "John Smith",
        "language": "English",
    }  # (5)!
)

@tool
def get_user_info(config: RunnableConfig) -> str:
    """Look up user info."""
    # Same as that provided to `create_react_agent`
    store = get_store()  # (6)!
    user_id = config["configurable"].get("user_id")
    user_info = store.get(("users",), user_id)  # (7)!
    return str(user_info.value) if user_info else "Unknown user"

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[get_user_info],
    store=store  # (8)!
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "look up user information"}]},
    config={"configurable": {"user_id": "user_123"}}
)
```

  1. The InMemoryStore keeps data in memory. In a production setting, you would typically use a database or other persistent storage. Please review the store documentation for more options. If you're deploying with LangGraph Platform, the platform provides a production-ready store for you.
  2. For this example, we write some sample data to the store using the put method. Please see the [BaseStore.put](../../reference/store/#langgraph.store.base.BaseStore.put " put") API reference for more details.
  3. The first argument is the namespace. This is used to group related data together. In this case, we are using the users namespace to group user data.
  4. A key within the namespace. This example uses a user ID for the key.
  5. The data that we want to store for the given user.
  6. The get_store function is used to access the store. You can call it from anywhere in your code, including tools and prompts. This function returns the store that was passed to the agent when it was created.
  7. The get method is used to retrieve data from the store. The first argument is the namespace, and the second argument is the key. This will return a StoreValue object, which contains the value and metadata about the value.
  8. The store is passed to the agent. This enables the agent to access the store when running tools. You can also use the get_store function to access the store from anywhere in your code.

Update

API Reference: RunnableConfig | tool | StateGraph | get_store

```python
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.graph import StateGraph
from langgraph.config import get_store

@tool
def save_user_info(user_info: str, config: RunnableConfig) -> str:
    """Save user info."""
    # Same as that provided to `builder.compile(store=store)`
    # or `create_react_agent`
    store = get_store()
    user_id = config["configurable"].get("user_id")
    store.put(("users",), user_id, user_info)
    return "Successfully saved user info."

builder = StateGraph(...)
...
graph = builder.compile(store=store)
```

Update long-term memory

```python
from typing_extensions import TypedDict

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langgraph.config import get_store
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()  # (1)!

class UserInfo(TypedDict):  # (2)!
    name: str

@tool
def save_user_info(user_info: UserInfo, config: RunnableConfig) -> str:  # (3)!
    """Save user info."""
    # Same as that provided to `create_react_agent`
    store = get_store()  # (4)!
    user_id = config["configurable"].get("user_id")
    store.put(("users",), user_id, user_info)  # (5)!
    return "Successfully saved user info."

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet-latest",
    tools=[save_user_info],
    store=store
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "My name is John Smith"}]},
    config={"configurable": {"user_id": "user_123"}}  # (6)!
)

# You can access the store directly to get the value
store.get(("users",), "user_123").value
```

  1. The InMemoryStore keeps data in memory. In a production setting, you would typically use a database or other persistent storage. Please review the store documentation for more options. If you're deploying with LangGraph Platform, the platform provides a production-ready store for you.
  2. The UserInfo class is a TypedDict that defines the structure of the user information. The LLM will use this to format the response according to the schema.
  3. The save_user_info function is a tool that allows an agent to update user information. This could be useful for a chat application where the user wants to update their profile information.
  4. The get_store function is used to access the store. You can call it from anywhere in your code, including tools and prompts. This function returns the store that was passed to the agent when it was created.
  5. The put method is used to store data in the store. The first argument is the namespace, and the second argument is the key. This will store the user information in the store.
  6. The user_id is passed in the config. This is used to identify the user whose information is being updated.

To attach tool schemas to a chat model, use model.bind_tools():

API Reference: tool | init_chat_model

```python
from langchain_core.tools import tool
from langchain.chat_models import init_chat_model

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([multiply])

model_with_tools.invoke("what's 42 x 7?")
```

```
AIMessage(
    content=[{'text': "I'll help you calculate that by using the multiply function.", 'type': 'text'}, {'id': 'toolu_01GhULkqytMTFDsNv6FsXy3Y', 'input': {'a': 42, 'b': 7}, 'name': 'multiply', 'type': 'tool_use'}],
    tool_calls=[{'name': 'multiply', 'args': {'a': 42, 'b': 7}, 'id': 'toolu_01GhULkqytMTFDsNv6FsXy3Y', 'type': 'tool_call'}]
)
```

LangChain tools conform to the Runnable interface, which means you can execute them using the .invoke() / .ainvoke() methods:

API Reference: tool

```python
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

multiply.invoke({"a": 42, "b": 7})
```
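The async variant works the same way. A minimal sketch reusing the multiply tool defined above:

```python
import asyncio

async def main() -> None:
    # .ainvoke() executes the tool asynchronously
    result = await multiply.ainvoke({"a": 42, "b": 7})
    print(result)  # 294

asyncio.run(main())
```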

If you want the tool to return a ToolMessage, invoke it with the tool call:

```python
tool_call = {
    "type": "tool_call",
    "id": "1",
    "args": {"a": 42, "b": 7}
}
multiply.invoke(tool_call)
```

```
ToolMessage(content='294', name='multiply', tool_call_id='1')
```

Use with a chat model

```python
from langchain_core.tools import tool
from langchain.chat_models import init_chat_model

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([multiply])

response_message = model_with_tools.invoke("what's 42 x 7?")
tool_call = response_message.tool_calls[0]

multiply.invoke(tool_call)
```

```
ToolMessage(content='294', name='multiply', tool_call_id='toolu_0176DV4YKSD8FndkeuuLj36c')
```

Use prebuilt agent

To create a tool-calling agent, you can use the prebuilt [create_react_agent](../../reference/agents/#langgraph.prebuilt.chat%5Fagent%5Fexecutor.create%5Freact%5Fagent " create_react_agent"):

API Reference: tool | create_react_agent

```python
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

agent = create_react_agent(
    model="anthropic:claude-3-7-sonnet",
    tools=[multiply]
)
agent.invoke({"messages": [{"role": "user", "content": "what's 42 x 7?"}]})
```

See this guide to learn more.

[ToolNode](../../reference/agents/#langgraph.prebuilt.tool%5Fnode.ToolNode " ToolNode") is a prebuilt LangGraph node for executing tool calls.

Why use ToolNode? It can execute multiple tool calls from a single AI message in parallel, and it handles tool errors out of the box (both behaviors are shown below).

ToolNode operates on MessagesState: it expects the last message in the state to be an AIMessage with tool_calls, and it returns a ToolMessage for each tool call:

Tip

ToolNode is designed to work well out of the box with LangGraph's prebuilt agent, but can also work with any StateGraph that uses MessagesState.

API Reference: ToolNode

```python
from langgraph.prebuilt import ToolNode

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

def get_coolest_cities():
    """Get a list of coolest cities"""
    return "nyc, sf"

tool_node = ToolNode([get_weather, get_coolest_cities])
tool_node.invoke({"messages": [...]})
```

Single tool call

```python
from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode

# Define tools
@tool
def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

tool_node = ToolNode([get_weather])

message_with_single_tool_call = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "get_weather",
            "args": {"location": "sf"},
            "id": "tool_call_id",
            "type": "tool_call",
        }
    ],
)

tool_node.invoke({"messages": [message_with_single_tool_call]})
```

```
{'messages': [ToolMessage(content="It's 60 degrees and foggy.", name='get_weather', tool_call_id='tool_call_id')]}
```

Multiple tool calls

```python
from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode

# Define tools

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

def get_coolest_cities():
    """Get a list of coolest cities"""
    return "nyc, sf"

tool_node = ToolNode([get_weather, get_coolest_cities])

message_with_multiple_tool_calls = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "get_coolest_cities",
            "args": {},
            "id": "tool_call_id_1",
            "type": "tool_call",
        },
        {
            "name": "get_weather",
            "args": {"location": "sf"},
            "id": "tool_call_id_2",
            "type": "tool_call",
        },
    ],
)

tool_node.invoke({"messages": [message_with_multiple_tool_calls]})  # (1)!
```

  1. ToolNode will execute both tools in parallel

```
{
    'messages': [
        ToolMessage(content='nyc, sf', name='get_coolest_cities', tool_call_id='tool_call_id_1'),
        ToolMessage(content="It's 60 degrees and foggy.", name='get_weather', tool_call_id='tool_call_id_2')
    ]
}
```

Use with a chat model

```python
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import ToolNode

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

tool_node = ToolNode([get_weather])

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([get_weather])  # (1)!

response_message = model_with_tools.invoke("what's the weather in sf?")
tool_node.invoke({"messages": [response_message]})
```

  1. Use .bind_tools() to attach the tool schema to the chat model

```
{'messages': [ToolMessage(content="It's 60 degrees and foggy.", name='get_weather', tool_call_id='toolu_01Pnkgw5JeTRxXAU7tyHT4UW')]}
```

Use in a tool-calling agent

This is an example of creating a tool-calling agent from scratch using ToolNode. You can also use LangGraph's prebuilt agent.

```python
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, MessagesState, START, END

def get_weather(location: str):
    """Call to get the current weather."""
    if location.lower() in ["sf", "san francisco"]:
        return "It's 60 degrees and foggy."
    else:
        return "It's 90 degrees and sunny."

tool_node = ToolNode([get_weather])

model = init_chat_model(model="claude-3-5-haiku-latest")
model_with_tools = model.bind_tools([get_weather])

def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: MessagesState):
    messages = state["messages"]
    response = model_with_tools.invoke(messages)
    return {"messages": [response]}

builder = StateGraph(MessagesState)

# Define the two nodes we will cycle between
builder.add_node("call_model", call_model)
builder.add_node("tools", tool_node)

builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", should_continue, ["tools", END])
builder.add_edge("tools", "call_model")

graph = builder.compile()

graph.invoke({"messages": [{"role": "user", "content": "what's the weather in sf?"}]})
```

```
{
    'messages': [
        HumanMessage(content="what's the weather in sf?"),
        AIMessage(
            content=[{'text': "I'll help you check the weather in San Francisco right now.", 'type': 'text'}, {'id': 'toolu_01A4vwUEgBKxfFVc5H3v1CNs', 'input': {'location': 'San Francisco'}, 'name': 'get_weather', 'type': 'tool_use'}],
            tool_calls=[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01A4vwUEgBKxfFVc5H3v1CNs', 'type': 'tool_call'}]
        ),
        ToolMessage(content="It's 60 degrees and foggy."),
        AIMessage(content="The current weather in San Francisco is 60 degrees and foggy. Typical San Francisco weather with its famous marine layer!")
    ]
}
```

Handle errors

By default, ToolNode catches all exceptions raised during tool calls and returns them as error tool messages. To control how errors are handled, use ToolNode's handle_tool_errors parameter:

Enable error handling (default):

```python
from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    if a == 42:
        raise ValueError("The ultimate error")
    return a * b

tool_node = ToolNode([multiply])

# Run with error handling (default)
message = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "multiply",
            "args": {"a": 42, "b": 7},
            "id": "tool_call_id",
            "type": "tool_call",
        }
    ],
)

tool_node.invoke({"messages": [message]})
```

```
{'messages': [ToolMessage(content="Error: ValueError('The ultimate error')\n Please fix your mistakes.", name='multiply', tool_call_id='tool_call_id', status='error')]}
```

Disable error handling:

```python
from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    if a == 42:
        raise ValueError("The ultimate error")
    return a * b

tool_node = ToolNode(
    [multiply],
    handle_tool_errors=False  # (1)!
)
message = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "multiply",
            "args": {"a": 42, "b": 7},
            "id": "tool_call_id",
            "type": "tool_call",
        }
    ],
)
tool_node.invoke({"messages": [message]})
```

  1. This disables error handling (enabled by default). See all available strategies in the [API reference](../../reference/agents/#langgraph.prebuilt.tool%5Fnode.ToolNode " ToolNode").

Custom error handling:

```python
from langchain_core.messages import AIMessage
from langgraph.prebuilt import ToolNode

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    if a == 42:
        raise ValueError("The ultimate error")
    return a * b

tool_node = ToolNode(
    [multiply],
    handle_tool_errors=(
        "Can't use 42 as a first operand, you must switch operands!"  # (1)!
    )
)
message = AIMessage(
    content="",
    tool_calls=[
        {
            "name": "multiply",
            "args": {"a": 42, "b": 7},
            "id": "tool_call_id",
            "type": "tool_call",
        }
    ],
)
tool_node.invoke({"messages": [message]})
```

  1. This provides a custom message to send to the LLM in case of an exception. See all available strategies in the [API reference](../../reference/agents/#langgraph.prebuilt.tool%5Fnode.ToolNode " ToolNode").

```
{'messages': [ToolMessage(content="Can't use 42 as a first operand, you must switch operands!", name='multiply', tool_call_id='tool_call_id', status='error')]}
```

See the [API reference](../../reference/agents/#langgraph.prebuilt.tool%5Fnode.ToolNode " ToolNode") for more information on different tool error handling options.

As the number of available tools grows, you may want to limit the scope of the LLM's selection to decrease token consumption and reduce sources of error in LLM reasoning.

To address this, you can dynamically adjust the tools available to a model by retrieving relevant tools at runtime using semantic search.
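As a rough sketch of the idea, you can index tool descriptions in a vector store and bind only the top matches before each model call. This is illustrative, not the langgraph-bigtool implementation; it assumes an OpenAI embedding model, and any embeddings/vector store would work the same way:

```python
from langchain_core.tools import tool
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return "It's 60 degrees and foggy."

tools = [multiply, get_weather]
tools_by_name = {t.name: t for t in tools}

# Index each tool's description so tools can be retrieved by similarity
vector_store = InMemoryVectorStore(OpenAIEmbeddings())
vector_store.add_texts(
    [t.description for t in tools],
    metadatas=[{"name": t.name} for t in tools],
)

def select_tools(query: str, k: int = 1):
    """Return the k tools most relevant to the query."""
    docs = vector_store.similarity_search(query, k=k)
    return [tools_by_name[doc.metadata["name"]] for doc in docs]

# Bind only the relevant subset for this turn, e.g.:
relevant_tools = select_tools("what's the weather in sf?")
# model_with_tools = model.bind_tools(relevant_tools)
```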

See the langgraph-bigtool prebuilt library for a ready-to-use implementation, and this how-to guide for more details.