Workflows and Agents

This guide reviews common patterns for agentic systems. When describing these systems, it is useful to distinguish between "workflows" and "agents". Anthropic's Building Effective Agents blog post explains this difference well:

Workflows are systems where LLMs and tools are orchestrated through predefined code paths. Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.

Here is a simple way to visualize these differences:

Agent Workflow

When building agents and workflows, LangGraph offers a number of benefits, including persistence, streaming, and support for debugging and deployment.

Set up

You can use any chat model that supports structured outputs and tool calling. Below, we install the required packages, set the API key, and test structured outputs and tool calling with Anthropic.

Install dependencies

```bash
pip install langchain_core langchain-anthropic langgraph
```

Initialize an LLM

API Reference: ChatAnthropic

```python
import os
import getpass

from langchain_anthropic import ChatAnthropic

def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")
```

Building Blocks: The Augmented LLM

LLMs have augmentations that support building workflows and agents. These include structured outputs and tool calling, as shown in this image from the Anthropic blog on Building Effective Agents:

augmented_llm.png

```python
# Schema for structured output
from pydantic import BaseModel, Field

class SearchQuery(BaseModel):
    search_query: str = Field(None, description="Query that is optimized for web search.")
    justification: str = Field(
        None, description="Why this query is relevant to the user's request."
    )


# Augment the LLM with schema for structured output
structured_llm = llm.with_structured_output(SearchQuery)

# Invoke the augmented LLM
output = structured_llm.invoke("How does Calcium CT score relate to high cholesterol?")

# Define a tool
def multiply(a: int, b: int) -> int:
    return a * b

# Augment the LLM with tools
llm_with_tools = llm.bind_tools([multiply])

# Invoke the LLM with input that triggers the tool call
msg = llm_with_tools.invoke("What is 2 times 3?")

# Get the tool call
msg.tool_calls
```

Prompt chaining

In prompt chaining, each LLM call processes the output of the previous one.

As noted in the Anthropic blog on Building Effective Agents:

Prompt chaining decomposes a task into a sequence of steps, where each LLM call processes the output of the previous one. You can add programmatic checks (see "gate" in the diagram below) on any intermediate steps to ensure that the process is still on track.

When to use this workflow: This workflow is ideal for situations where the task can be easily and cleanly decomposed into fixed subtasks. The main goal is to trade off latency for higher accuracy, by making each LLM call an easier task.

prompt_chain.png

Graph API

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display


# Graph state
class State(TypedDict):
    topic: str
    joke: str
    improved_joke: str
    final_joke: str


# Nodes
def generate_joke(state: State):
    """First LLM call to generate initial joke"""

    msg = llm.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}


def check_punchline(state: State):
    """Gate function to check if the joke has a punchline"""

    # Simple check - does the joke contain "?" or "!"
    if "?" in state["joke"] or "!" in state["joke"]:
        return "Fail"
    return "Pass"


def improve_joke(state: State):
    """Second LLM call to improve the joke"""

    msg = llm.invoke(f"Make this joke funnier by adding wordplay: {state['joke']}")
    return {"improved_joke": msg.content}


def polish_joke(state: State):
    """Third LLM call for final polish"""

    msg = llm.invoke(f"Add a surprising twist to this joke: {state['improved_joke']}")
    return {"final_joke": msg.content}


# Build workflow
workflow = StateGraph(State)

# Add nodes
workflow.add_node("generate_joke", generate_joke)
workflow.add_node("improve_joke", improve_joke)
workflow.add_node("polish_joke", polish_joke)

# Add edges to connect nodes
workflow.add_edge(START, "generate_joke")
workflow.add_conditional_edges(
    "generate_joke", check_punchline, {"Fail": "improve_joke", "Pass": END}
)
workflow.add_edge("improve_joke", "polish_joke")
workflow.add_edge("polish_joke", END)

# Compile
chain = workflow.compile()

# Show workflow
display(Image(chain.get_graph().draw_mermaid_png()))

# Invoke
state = chain.invoke({"topic": "cats"})
print("Initial joke:")
print(state["joke"])
print("\n--- --- ---\n")
if "improved_joke" in state:
    print("Improved joke:")
    print(state["improved_joke"])
    print("\n--- --- ---\n")

    print("Final joke:")
    print(state["final_joke"])
else:
    print("Joke failed quality gate - no punchline detected!")
```

LangSmith Trace

https://smith.langchain.com/public/a0281fca-3a71-46de-beee-791468607b75/r

Resources:

LangChain Academy

See our lesson on Prompt Chaining here.

Functional API

```python
from langgraph.func import entrypoint, task


# Tasks
@task
def generate_joke(topic: str):
    """First LLM call to generate initial joke"""
    msg = llm.invoke(f"Write a short joke about {topic}")
    return msg.content


def check_punchline(joke: str):
    """Gate function to check if the joke has a punchline"""
    # Simple check - does the joke contain "?" or "!"
    if "?" in joke or "!" in joke:
        return "Fail"

    return "Pass"


@task
def improve_joke(joke: str):
    """Second LLM call to improve the joke"""
    msg = llm.invoke(f"Make this joke funnier by adding wordplay: {joke}")
    return msg.content


@task
def polish_joke(joke: str):
    """Third LLM call for final polish"""
    msg = llm.invoke(f"Add a surprising twist to this joke: {joke}")
    return msg.content


@entrypoint()
def prompt_chaining_workflow(topic: str):
    original_joke = generate_joke(topic).result()
    if check_punchline(original_joke) == "Pass":
        return original_joke

    improved_joke = improve_joke(original_joke).result()
    return polish_joke(improved_joke).result()


# Invoke
for step in prompt_chaining_workflow.stream("cats", stream_mode="updates"):
    print(step)
    print("\n")
```

LangSmith Trace

https://smith.langchain.com/public/332fa4fc-b6ca-416e-baa3-161625e69163/r

Parallelization

With parallelization, LLMs work simultaneously on a task:

LLMs can sometimes work simultaneously on a task and have their outputs aggregated programmatically. This workflow, parallelization, manifests in two key variations: Sectioning: Breaking a task into independent subtasks run in parallel. Voting: Running the same task multiple times to get diverse outputs.

When to use this workflow: Parallelization is effective when the divided subtasks can be parallelized for speed, or when multiple perspectives or attempts are needed for higher confidence results. For complex tasks with multiple considerations, LLMs generally perform better when each consideration is handled by a separate LLM call, allowing focused attention on each specific aspect.

parallelization.png

Graph API

```python
# Graph state
class State(TypedDict):
    topic: str
    joke: str
    story: str
    poem: str
    combined_output: str


# Nodes
def call_llm_1(state: State):
    """First LLM call to generate initial joke"""

    msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}


def call_llm_2(state: State):
    """Second LLM call to generate story"""

    msg = llm.invoke(f"Write a story about {state['topic']}")
    return {"story": msg.content}


def call_llm_3(state: State):
    """Third LLM call to generate poem"""

    msg = llm.invoke(f"Write a poem about {state['topic']}")
    return {"poem": msg.content}


def aggregator(state: State):
    """Combine the joke and story into a single output"""

    combined = f"Here's a story, joke, and poem about {state['topic']}!\n\n"
    combined += f"STORY:\n{state['story']}\n\n"
    combined += f"JOKE:\n{state['joke']}\n\n"
    combined += f"POEM:\n{state['poem']}"
    return {"combined_output": combined}


# Build workflow
parallel_builder = StateGraph(State)

# Add nodes
parallel_builder.add_node("call_llm_1", call_llm_1)
parallel_builder.add_node("call_llm_2", call_llm_2)
parallel_builder.add_node("call_llm_3", call_llm_3)
parallel_builder.add_node("aggregator", aggregator)

# Add edges to connect nodes
parallel_builder.add_edge(START, "call_llm_1")
parallel_builder.add_edge(START, "call_llm_2")
parallel_builder.add_edge(START, "call_llm_3")
parallel_builder.add_edge("call_llm_1", "aggregator")
parallel_builder.add_edge("call_llm_2", "aggregator")
parallel_builder.add_edge("call_llm_3", "aggregator")
parallel_builder.add_edge("aggregator", END)
parallel_workflow = parallel_builder.compile()

# Show workflow
display(Image(parallel_workflow.get_graph().draw_mermaid_png()))

# Invoke
state = parallel_workflow.invoke({"topic": "cats"})
print(state["combined_output"])
```

LangSmith Trace

https://smith.langchain.com/public/3be2e53c-ca94-40dd-934f-82ff87fac277/r

Resources:

Documentation

See our documentation on parallelization here.

LangChain Academy

See our lesson on parallelization here.

Functional API

```python
@task
def call_llm_1(topic: str):
    """First LLM call to generate initial joke"""
    msg = llm.invoke(f"Write a joke about {topic}")
    return msg.content


@task
def call_llm_2(topic: str):
    """Second LLM call to generate story"""
    msg = llm.invoke(f"Write a story about {topic}")
    return msg.content


@task
def call_llm_3(topic):
    """Third LLM call to generate poem"""
    msg = llm.invoke(f"Write a poem about {topic}")
    return msg.content


@task
def aggregator(topic, joke, story, poem):
    """Combine the joke and story into a single output"""

    combined = f"Here's a story, joke, and poem about {topic}!\n\n"
    combined += f"STORY:\n{story}\n\n"
    combined += f"JOKE:\n{joke}\n\n"
    combined += f"POEM:\n{poem}"
    return combined


# Build workflow
@entrypoint()
def parallel_workflow(topic: str):
    joke_fut = call_llm_1(topic)
    story_fut = call_llm_2(topic)
    poem_fut = call_llm_3(topic)
    return aggregator(
        topic, joke_fut.result(), story_fut.result(), poem_fut.result()
    ).result()


# Invoke
for step in parallel_workflow.stream("cats", stream_mode="updates"):
    print(step)
    print("\n")
```

LangSmith Trace

https://smith.langchain.com/public/623d033f-e814-41e9-80b1-75e6abb67801/r

Routing

Routing classifies an input and directs it to a followup task. As noted in the Anthropic blog on Building Effective Agents:

Routing classifies an input and directs it to a specialized followup task. This workflow allows for separation of concerns, and building more specialized prompts. Without this workflow, optimizing for one kind of input can hurt performance on other inputs.

When to use this workflow: Routing works well for complex tasks where there are distinct categories that are better handled separately, and where classification can be handled accurately, either by an LLM or a more traditional classification model/algorithm.

routing.png

Graph API

```python
from typing_extensions import Literal
from langchain_core.messages import HumanMessage, SystemMessage


# Schema for structured output to use as routing logic
class Route(BaseModel):
    step: Literal["poem", "story", "joke"] = Field(
        None, description="The next step in the routing process"
    )


# Augment the LLM with schema for structured output
router = llm.with_structured_output(Route)


# State
class State(TypedDict):
    input: str
    decision: str
    output: str


# Nodes
def llm_call_1(state: State):
    """Write a story"""

    result = llm.invoke(state["input"])
    return {"output": result.content}


def llm_call_2(state: State):
    """Write a joke"""

    result = llm.invoke(state["input"])
    return {"output": result.content}


def llm_call_3(state: State):
    """Write a poem"""

    result = llm.invoke(state["input"])
    return {"output": result.content}


def llm_call_router(state: State):
    """Route the input to the appropriate node"""

    # Run the augmented LLM with structured output to serve as routing logic
    decision = router.invoke(
        [
            SystemMessage(
                content="Route the input to story, joke, or poem based on the user's request."
            ),
            HumanMessage(content=state["input"]),
        ]
    )

    return {"decision": decision.step}


# Conditional edge function to route to the appropriate node
def route_decision(state: State):
    # Return the node name you want to visit next
    if state["decision"] == "story":
        return "llm_call_1"
    elif state["decision"] == "joke":
        return "llm_call_2"
    elif state["decision"] == "poem":
        return "llm_call_3"


# Build workflow
router_builder = StateGraph(State)

# Add nodes
router_builder.add_node("llm_call_1", llm_call_1)
router_builder.add_node("llm_call_2", llm_call_2)
router_builder.add_node("llm_call_3", llm_call_3)
router_builder.add_node("llm_call_router", llm_call_router)

# Add edges to connect nodes
router_builder.add_edge(START, "llm_call_router")
router_builder.add_conditional_edges(
    "llm_call_router",
    route_decision,
    {  # Name returned by route_decision : Name of next node to visit
        "llm_call_1": "llm_call_1",
        "llm_call_2": "llm_call_2",
        "llm_call_3": "llm_call_3",
    },
)
router_builder.add_edge("llm_call_1", END)
router_builder.add_edge("llm_call_2", END)
router_builder.add_edge("llm_call_3", END)

# Compile workflow
router_workflow = router_builder.compile()

# Show the workflow
display(Image(router_workflow.get_graph().draw_mermaid_png()))

# Invoke
state = router_workflow.invoke({"input": "Write me a joke about cats"})
print(state["output"])
```

LangSmith Trace

https://smith.langchain.com/public/c4580b74-fe91-47e4-96fe-7fac598d509c/r

Resources:

LangChain Academy

See our lesson on routing here.

Examples

Here is a RAG workflow that routes questions. See our video here.

Functional API

```python
from typing_extensions import Literal
from pydantic import BaseModel
from langchain_core.messages import HumanMessage, SystemMessage


# Schema for structured output to use as routing logic
class Route(BaseModel):
    step: Literal["poem", "story", "joke"] = Field(
        None, description="The next step in the routing process"
    )


# Augment the LLM with schema for structured output
router = llm.with_structured_output(Route)


@task
def llm_call_1(input_: str):
    """Write a story"""
    result = llm.invoke(input_)
    return result.content


@task
def llm_call_2(input_: str):
    """Write a joke"""
    result = llm.invoke(input_)
    return result.content


@task
def llm_call_3(input_: str):
    """Write a poem"""
    result = llm.invoke(input_)
    return result.content


def llm_call_router(input_: str):
    """Route the input to the appropriate node"""
    # Run the augmented LLM with structured output to serve as routing logic
    decision = router.invoke(
        [
            SystemMessage(
                content="Route the input to story, joke, or poem based on the user's request."
            ),
            HumanMessage(content=input_),
        ]
    )
    return decision.step


# Create workflow
@entrypoint()
def router_workflow(input_: str):
    next_step = llm_call_router(input_)
    if next_step == "story":
        llm_call = llm_call_1
    elif next_step == "joke":
        llm_call = llm_call_2
    elif next_step == "poem":
        llm_call = llm_call_3

    return llm_call(input_).result()


# Invoke
for step in router_workflow.stream("Write me a joke about cats", stream_mode="updates"):
    print(step)
    print("\n")
```

LangSmith Trace

https://smith.langchain.com/public/5e2eb979-82dd-402c-b1a0-a8cceaf2a28a/r

Orchestrator-Worker

With orchestrator-worker, an orchestrator breaks down a task and delegates each sub-task to workers. As noted in the Anthropic blog on Building Effective Agents:

In the orchestrator-workers workflow, a central LLM dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes their results.

When to use this workflow: This workflow is well-suited for complex tasks where you can’t predict the subtasks needed (in coding, for example, the number of files that need to be changed and the nature of the change in each file likely depend on the task). While it’s topographically similar to parallelization, the key difference is its flexibility—subtasks aren't pre-defined, but determined by the orchestrator based on the specific input.

worker.png

Graph API

```python
from typing import Annotated, List
import operator


# Schema for structured output to use in planning
class Section(BaseModel):
    name: str = Field(
        description="Name for this section of the report.",
    )
    description: str = Field(
        description="Brief overview of the main topics and concepts to be covered in this section.",
    )


class Sections(BaseModel):
    sections: List[Section] = Field(
        description="Sections of the report.",
    )


# Augment the LLM with schema for structured output
planner = llm.with_structured_output(Sections)
```

Creating Workers in LangGraph

Because orchestrator-worker workflows are common, LangGraph has the Send API to support this. It lets you dynamically create worker nodes and send each one a specific input. Each worker has its own state, and all worker outputs are written to a shared state key that is accessible to the orchestrator graph. This gives the orchestrator access to all worker outputs and allows it to synthesize them into a final output. As you can see below, we iterate over a list of sections and Send each to a worker node. See further documentation here and here.

```python
from langgraph.constants import Send


# Graph state
class State(TypedDict):
    topic: str  # Report topic
    sections: list[Section]  # List of report sections
    completed_sections: Annotated[
        list, operator.add
    ]  # All workers write to this key in parallel
    final_report: str  # Final report


# Worker state
class WorkerState(TypedDict):
    section: Section
    completed_sections: Annotated[list, operator.add]


# Nodes
def orchestrator(state: State):
    """Orchestrator that generates a plan for the report"""

    # Generate queries
    report_sections = planner.invoke(
        [
            SystemMessage(content="Generate a plan for the report."),
            HumanMessage(content=f"Here is the report topic: {state['topic']}"),
        ]
    )

    return {"sections": report_sections.sections}


def llm_call(state: WorkerState):
    """Worker writes a section of the report"""

    # Generate section
    section = llm.invoke(
        [
            SystemMessage(
                content="Write a report section following the provided name and description. Include no preamble for each section. Use markdown formatting."
            ),
            HumanMessage(
                content=f"Here is the section name: {state['section'].name} and description: {state['section'].description}"
            ),
        ]
    )

    # Write the updated section to completed sections
    return {"completed_sections": [section.content]}


def synthesizer(state: State):
    """Synthesize full report from sections"""

    # List of completed sections
    completed_sections = state["completed_sections"]

    # Format completed section to str to use as context for final sections
    completed_report_sections = "\n\n---\n\n".join(completed_sections)

    return {"final_report": completed_report_sections}


# Conditional edge function to create llm_call workers that each write a section of the report
def assign_workers(state: State):
    """Assign a worker to each section in the plan"""

    # Kick off section writing in parallel via Send() API
    return [Send("llm_call", {"section": s}) for s in state["sections"]]


# Build workflow
orchestrator_worker_builder = StateGraph(State)

# Add the nodes
orchestrator_worker_builder.add_node("orchestrator", orchestrator)
orchestrator_worker_builder.add_node("llm_call", llm_call)
orchestrator_worker_builder.add_node("synthesizer", synthesizer)

# Add edges to connect nodes
orchestrator_worker_builder.add_edge(START, "orchestrator")
orchestrator_worker_builder.add_conditional_edges(
    "orchestrator", assign_workers, ["llm_call"]
)
orchestrator_worker_builder.add_edge("llm_call", "synthesizer")
orchestrator_worker_builder.add_edge("synthesizer", END)

# Compile the workflow
orchestrator_worker = orchestrator_worker_builder.compile()

# Show the workflow
display(Image(orchestrator_worker.get_graph().draw_mermaid_png()))

# Invoke
state = orchestrator_worker.invoke({"topic": "Create a report on LLM scaling laws"})

from IPython.display import Markdown
Markdown(state["final_report"])
```

LangSmith Trace

https://smith.langchain.com/public/78cbcfc3-38bf-471d-b62a-b299b144237d/r

Resources:

LangChain Academy

See our lesson on orchestrator-worker here.

Examples

Here is a project that uses orchestrator-worker for report planning and writing. See our video here.

Functional API

```python
from typing import List


# Schema for structured output to use in planning
class Section(BaseModel):
    name: str = Field(
        description="Name for this section of the report.",
    )
    description: str = Field(
        description="Brief overview of the main topics and concepts to be covered in this section.",
    )


class Sections(BaseModel):
    sections: List[Section] = Field(
        description="Sections of the report.",
    )


# Augment the LLM with schema for structured output
planner = llm.with_structured_output(Sections)


@task
def orchestrator(topic: str):
    """Orchestrator that generates a plan for the report"""
    # Generate queries
    report_sections = planner.invoke(
        [
            SystemMessage(content="Generate a plan for the report."),
            HumanMessage(content=f"Here is the report topic: {topic}"),
        ]
    )

    return report_sections.sections


@task
def llm_call(section: Section):
    """Worker writes a section of the report"""

    # Generate section
    result = llm.invoke(
        [
            SystemMessage(content="Write a report section."),
            HumanMessage(
                content=f"Here is the section name: {section.name} and description: {section.description}"
            ),
        ]
    )

    # Write the updated section to completed sections
    return result.content


@task
def synthesizer(completed_sections: list[str]):
    """Synthesize full report from sections"""
    final_report = "\n\n---\n\n".join(completed_sections)
    return final_report


@entrypoint()
def orchestrator_worker(topic: str):
    sections = orchestrator(topic).result()
    section_futures = [llm_call(section) for section in sections]
    final_report = synthesizer(
        [section_fut.result() for section_fut in section_futures]
    ).result()
    return final_report


# Invoke
report = orchestrator_worker.invoke("Create a report on LLM scaling laws")
from IPython.display import Markdown
Markdown(report)
```

LangSmith Trace

https://smith.langchain.com/public/75a636d0-6179-4a12-9836-e0aa571e87c5/r

Evaluator-optimizer

In the evaluator-optimizer workflow, one LLM call generates a response while another provides evaluation and feedback in a loop. As noted in the Anthropic blog on Building Effective Agents:

In the evaluator-optimizer workflow, one LLM call generates a response while another provides evaluation and feedback in a loop.

When to use this workflow: This workflow is particularly effective when we have clear evaluation criteria, and when iterative refinement provides measurable value. The two signs of good fit are, first, that LLM responses can be demonstrably improved when a human articulates their feedback; and second, that the LLM can provide such feedback. This is analogous to the iterative writing process a human writer might go through when producing a polished document.

evaluator_optimizer.png

Graph API

```python
# Graph state
class State(TypedDict):
    joke: str
    topic: str
    feedback: str
    funny_or_not: str


# Schema for structured output to use in evaluation
class Feedback(BaseModel):
    grade: Literal["funny", "not funny"] = Field(
        description="Decide if the joke is funny or not.",
    )
    feedback: str = Field(
        description="If the joke is not funny, provide feedback on how to improve it.",
    )


# Augment the LLM with schema for structured output
evaluator = llm.with_structured_output(Feedback)


# Nodes
def llm_call_generator(state: State):
    """LLM generates a joke"""

    if state.get("feedback"):
        msg = llm.invoke(
            f"Write a joke about {state['topic']} but take into account the feedback: {state['feedback']}"
        )
    else:
        msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}


def llm_call_evaluator(state: State):
    """LLM evaluates the joke"""

    grade = evaluator.invoke(f"Grade the joke {state['joke']}")
    return {"funny_or_not": grade.grade, "feedback": grade.feedback}


# Conditional edge function to route back to joke generator or end based upon feedback from the evaluator
def route_joke(state: State):
    """Route back to joke generator or end based upon feedback from the evaluator"""

    if state["funny_or_not"] == "funny":
        return "Accepted"
    elif state["funny_or_not"] == "not funny":
        return "Rejected + Feedback"


# Build workflow
optimizer_builder = StateGraph(State)

# Add the nodes
optimizer_builder.add_node("llm_call_generator", llm_call_generator)
optimizer_builder.add_node("llm_call_evaluator", llm_call_evaluator)

# Add edges to connect nodes
optimizer_builder.add_edge(START, "llm_call_generator")
optimizer_builder.add_edge("llm_call_generator", "llm_call_evaluator")
optimizer_builder.add_conditional_edges(
    "llm_call_evaluator",
    route_joke,
    {  # Name returned by route_joke : Name of next node to visit
        "Accepted": END,
        "Rejected + Feedback": "llm_call_generator",
    },
)

# Compile the workflow
optimizer_workflow = optimizer_builder.compile()

# Show the workflow
display(Image(optimizer_workflow.get_graph().draw_mermaid_png()))

# Invoke
state = optimizer_workflow.invoke({"topic": "Cats"})
print(state["joke"])
```

LangSmith Trace

https://smith.langchain.com/public/86ab3e60-2000-4bff-b988-9b89a3269789/r

Resources:

Examples

Here is an assistant that uses evaluator-optimizer to improve a report. See our video here.

Here is a RAG workflow that grades answers for hallucinations or errors. See our video here.

Functional API

```python
# Schema for structured output to use in evaluation
class Feedback(BaseModel):
    grade: Literal["funny", "not funny"] = Field(
        description="Decide if the joke is funny or not.",
    )
    feedback: str = Field(
        description="If the joke is not funny, provide feedback on how to improve it.",
    )


# Augment the LLM with schema for structured output
evaluator = llm.with_structured_output(Feedback)


# Nodes
@task
def llm_call_generator(topic: str, feedback: Feedback):
    """LLM generates a joke"""
    if feedback:
        msg = llm.invoke(
            f"Write a joke about {topic} but take into account the feedback: {feedback}"
        )
    else:
        msg = llm.invoke(f"Write a joke about {topic}")
    return msg.content


@task
def llm_call_evaluator(joke: str):
    """LLM evaluates the joke"""
    feedback = evaluator.invoke(f"Grade the joke {joke}")
    return feedback


@entrypoint()
def optimizer_workflow(topic: str):
    feedback = None
    while True:
        joke = llm_call_generator(topic, feedback).result()
        feedback = llm_call_evaluator(joke).result()
        if feedback.grade == "funny":
            break

    return joke


# Invoke
for step in optimizer_workflow.stream("Cats", stream_mode="updates"):
    print(step)
    print("\n")
```

LangSmith Trace

https://smith.langchain.com/public/f66830be-4339-4a6b-8a93-389ce5ae27b4/r

Agent

Agents are typically implemented as an LLM performing actions (via tool-calling) based on environmental feedback in a loop. As noted in the Anthropic blog on Building Effective Agents:

Agents can handle sophisticated tasks, but their implementation is often straightforward. They are typically just LLMs using tools based on environmental feedback in a loop. It is therefore crucial to design toolsets and their documentation clearly and thoughtfully.

When to use agents: Agents can be used for open-ended problems where it’s difficult or impossible to predict the required number of steps, and where you can’t hardcode a fixed path. The LLM will potentially operate for many turns, and you must have some level of trust in its decision-making. Agents' autonomy makes them ideal for scaling tasks in trusted environments.

agent.png

API Reference: tool

```python
from langchain_core.tools import tool


# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply a and b.

    Args:
        a: first int
        b: second int
    """
    return a * b


@tool
def add(a: int, b: int) -> int:
    """Adds a and b.

    Args:
        a: first int
        b: second int
    """
    return a + b


@tool
def divide(a: int, b: int) -> float:
    """Divide a and b.

    Args:
        a: first int
        b: second int
    """
    return a / b


# Augment the LLM with tools
tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
llm_with_tools = llm.bind_tools(tools)
```

Graph API

```python
from langgraph.graph import MessagesState
from langchain_core.messages import SystemMessage, HumanMessage, ToolMessage


# Nodes
def llm_call(state: MessagesState):
    """LLM decides whether to call a tool or not"""

    return {
        "messages": [
            llm_with_tools.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ]
    }


def tool_node(state: dict):
    """Performs the tool call"""

    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}


# Conditional edge function to route to the tool node or end based upon whether the LLM made a tool call
def should_continue(state: MessagesState) -> Literal["environment", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""

    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then perform an action
    if last_message.tool_calls:
        return "Action"
    # Otherwise, we stop (reply to the user)
    return END


# Build workflow
agent_builder = StateGraph(MessagesState)

# Add nodes
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("environment", tool_node)

# Add edges to connect nodes
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges(
    "llm_call",
    should_continue,
    {
        # Name returned by should_continue : Name of next node to visit
        "Action": "environment",
        END: END,
    },
)
agent_builder.add_edge("environment", "llm_call")

# Compile the agent
agent = agent_builder.compile()

# Show the agent
display(Image(agent.get_graph(xray=True).draw_mermaid_png()))

# Invoke
messages = [HumanMessage(content="Add 3 and 4.")]
messages = agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()
```

LangSmith Trace

https://smith.langchain.com/public/051f0391-6761-4f8c-a53b-22231b016690/r

Resources:

LangChain Academy

See our lesson on agents here.

Examples

Here is a project that uses a tool calling agent to create / store long-term memories.

Functional API

```python
from langgraph.graph import add_messages
from langchain_core.messages import (
    SystemMessage,
    HumanMessage,
    BaseMessage,
    ToolCall,
)


@task
def call_llm(messages: list[BaseMessage]):
    """LLM decides whether to call a tool or not"""
    return llm_with_tools.invoke(
        [
            SystemMessage(
                content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
            )
        ]
        + messages
    )


@task
def call_tool(tool_call: ToolCall):
    """Performs the tool call"""
    tool = tools_by_name[tool_call["name"]]
    return tool.invoke(tool_call)


@entrypoint()
def agent(messages: list[BaseMessage]):
    llm_response = call_llm(messages).result()

    while True:
        if not llm_response.tool_calls:
            break

        # Execute tools
        tool_result_futures = [
            call_tool(tool_call) for tool_call in llm_response.tool_calls
        ]
        tool_results = [fut.result() for fut in tool_result_futures]
        messages = add_messages(messages, [llm_response, *tool_results])
        llm_response = call_llm(messages).result()

    messages = add_messages(messages, llm_response)
    return messages


# Invoke
messages = [HumanMessage(content="Add 3 and 4.")]
for chunk in agent.stream(messages, stream_mode="updates"):
    print(chunk)
    print("\n")
```

LangSmith Trace

https://smith.langchain.com/public/42ae8bf9-3935-4504-a081-8ddbcbfc8b2e/r

Pre-built

LangGraph also provides a pre-built method for creating an agent as defined above (using the [create_react_agent](../../reference/agents/#langgraph.prebuilt.chat_agent_executor.create_react_agent) function):

https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/

API Reference: create_react_agent

```python
from langgraph.prebuilt import create_react_agent

# Pass in:
# (1) the augmented LLM with tools
# (2) the tools list (which is used to create the tool node)
pre_built_agent = create_react_agent(llm, tools=tools)

# Show the agent
display(Image(pre_built_agent.get_graph().draw_mermaid_png()))

# Invoke
messages = [HumanMessage(content="Add 3 and 4.")]
messages = pre_built_agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()
```

LangSmith Trace

https://smith.langchain.com/public/abab6a44-29f6-4b97-8164-af77413e494d/r

What LangGraph provides

By constructing each of the above in LangGraph, we get a few things:

Persistence: Human-in-the-Loop

LangGraph's persistence layer supports interruption and approval of actions (e.g., human-in-the-loop). See Module 3 of LangChain Academy.
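As a minimal sketch of how this can look, reusing the agent_builder graph from the Agent section above (the checkpointer choice and thread ID here are illustrative assumptions, not part of the original example), you can compile the graph with a checkpointer and interrupt it before the tool-execution node so a human can review pending tool calls before resuming:

```python
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver

# Compile with a checkpointer and pause before the "environment" (tool) node
checkpointer = MemorySaver()
agent_with_approval = agent_builder.compile(
    checkpointer=checkpointer, interrupt_before=["environment"]
)

# A thread_id identifies the persisted run that we can pause and resume
config = {"configurable": {"thread_id": "approval-demo"}}

# Runs until the graph interrupts just before executing the tool
agent_with_approval.invoke({"messages": [HumanMessage(content="Add 3 and 4.")]}, config)

# Inspect the pending tool call(s), then resume by passing None as the input
pending = agent_with_approval.get_state(config).values["messages"][-1].tool_calls
print(pending)
agent_with_approval.invoke(None, config)
```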

Persistence: Memory

LangGraph's persistence layer supports conversational (short-term) memory and long-term memory. See Modules 2 and 5 of LangChain Academy.
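A rough sketch of short-term memory with the same agent graph (again, the checkpointer and thread ID are illustrative assumptions): invocations that share a thread_id resume from the persisted message history, so a follow-up turn can refer back to an earlier one. Long-term memory instead uses a store that persists data across threads; see the linked modules for details.

```python
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver

# Compile the agent graph with a checkpointer to get short-term (thread-scoped) memory
memory_agent = agent_builder.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "conversation-1"}}

# First turn
memory_agent.invoke({"messages": [HumanMessage(content="Add 3 and 4.")]}, config)

# Second turn on the same thread: prior messages are loaded from the checkpoint,
# so the model can resolve "that" to the earlier result
result = memory_agent.invoke(
    {"messages": [HumanMessage(content="Now multiply that by 2.")]}, config
)
for m in result["messages"]:
    m.pretty_print()
```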

Streaming

LangGraph provides several ways to stream workflow / agent outputs or intermediate state. See Module 3 of LangChain Academy.
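For example, any compiled graph exposes a stream() method; here is a small sketch with the agent built above, using stream_mode="updates" for per-node state updates and stream_mode="values" for the full state after each step:

```python
from langchain_core.messages import HumanMessage

# Stream the state updates emitted by each node as the agent runs
for chunk in agent.stream(
    {"messages": [HumanMessage(content="Add 3 and 4.")]},
    stream_mode="updates",
):
    print(chunk)

# Or stream the full state after each step and print the latest message
for chunk in agent.stream(
    {"messages": [HumanMessage(content="Add 3 and 4.")]},
    stream_mode="values",
):
    chunk["messages"][-1].pretty_print()
```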

Deployment

LangGraph provides an easy on-ramp for deployment, observability, and evaluation. See Module 6 of LangChain Academy.