AIMessage
class langchain_core.messages.ai.AIMessage
Bases: BaseMessage
Message from an AI.
AIMessage is returned from a chat model as a response to a prompt.
This message represents the output of the model and consists of both the raw output as returned by the model and standardized fields (e.g., tool calls, usage metadata) added by the LangChain framework.
Pass in content as a positional argument.
Parameters:
- content β The content of the message.
- kwargs β Additional arguments to pass to the parent class.
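A minimal sketch of constructing an AIMessage by hand (useful for few-shot examples or tests); in normal use the message is returned by a chat model's invocation:

```python
from langchain_core.messages import AIMessage

# Content can be passed as the first positional argument.
msg = AIMessage("The capital of France is Paris.")

print(msg.content)  # -> The capital of France is Paris.
print(msg.type)     # -> ai
```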
param additional_kwargs: dict [Optional]
Reserved for additional payload data associated with the message.
For example, for a message from an AI, this could include tool calls as encoded by the model provider.
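A sketch of reading provider-specific payload from this field; the "function_call" key below is an illustrative assumption modeled on the legacy OpenAI format, not something every provider sets:

```python
from langchain_core.messages import AIMessage

# Keys in additional_kwargs are provider-specific; "function_call" is
# shown here purely for illustration.
msg = AIMessage(
    content="",
    additional_kwargs={
        "function_call": {"name": "get_weather", "arguments": '{"city": "Paris"}'}
    },
)
print(msg.additional_kwargs["function_call"]["name"])  # -> get_weather
```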
param content: str | list[str | dict] [Required]
The string contents of the message.
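Per the type annotation, content may be either a plain string or a list of blocks (str | dict). A short sketch; the {"type": "text", ...} block shape follows a common provider convention and the exact schema is provider-dependent:

```python
from langchain_core.messages import AIMessage

# Plain-string content:
simple = AIMessage(content="Hello!")

# List-of-blocks content, as some providers return:
blocks = AIMessage(content=[{"type": "text", "text": "Hello!"}])
```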
param example: bool = False
Use to denote that a message is part of an example conversation.
At the moment, this is ignored by most models. Usage is discouraged.
param id: str | None = None
An optional unique identifier for the message. This should ideally be provided by the provider/model which created the message.
Constraints:
- coerce_numbers_to_str = True
param invalid_tool_calls: list[InvalidToolCall] = []
If provided, tool calls with parsing errors associated with the message.
param name: str | None = None
An optional name for the message.
This can be used to provide a human-readable name for the message.
Usage of this field is optional, and whether it's used or not is up to the model implementation.
param response_metadata: dict [Optional]
Response metadata. For example: response headers, logprobs, token counts.
param tool_calls: list[ToolCall] = []
If provided, tool calls associated with the message.
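A sketch of attaching tool calls when constructing an AIMessage by hand; each ToolCall is a dict with "name", "args", and "id" keys (the tool name and arguments here are invented for illustration):

```python
from langchain_core.messages import AIMessage

msg = AIMessage(
    content="",
    tool_calls=[{"name": "get_weather", "args": {"city": "Paris"}, "id": "call_1"}],
)

for tc in msg.tool_calls:
    print(tc["name"], tc["args"])  # -> get_weather {'city': 'Paris'}

# Tool calls the provider emitted but that failed to parse would appear
# in invalid_tool_calls instead (empty here).
print(msg.invalid_tool_calls)  # -> []
```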
param type: Literal['ai'] = 'ai'
The type of the message (used for deserialization). Defaults to 'ai'.
param usage_metadata: UsageMetadata | None = None
If provided, usage metadata for a message, such as token counts.
This is a standard representation of token usage that is consistent across models.
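A sketch of the standardized token counts; UsageMetadata is a dict with input_tokens, output_tokens, and total_tokens keys (the numbers below are made up):

```python
from langchain_core.messages import AIMessage

msg = AIMessage(
    content="Hello!",
    usage_metadata={"input_tokens": 5, "output_tokens": 2, "total_tokens": 7},
)

if msg.usage_metadata is not None:  # None when the provider reports no usage
    print(msg.usage_metadata["total_tokens"])  # -> 7
```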
pretty_print() → None
Print a pretty representation of the message.
Return type:
None
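Usage is a one-liner; the banner text shown in the comment is approximate and may vary by version:

```python
from langchain_core.messages import AIMessage

AIMessage("Hello!").pretty_print()
# Prints a banner identifying the message type, followed by the content,
# roughly:
# ================================== Ai Message ==================================
#
# Hello!
```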
pretty_repr(html: bool = False) → str
Return a pretty representation of the message.
Parameters:
html (bool) – Whether to return an HTML-formatted string. Defaults to False.
Returns:
A pretty representation of the message.
Return type:
str
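The same representation as pretty_print, but returned as a string rather than printed:

```python
from langchain_core.messages import AIMessage

plain = AIMessage("Hello!").pretty_repr()          # plain-text string
html = AIMessage("Hello!").pretty_repr(html=True)  # HTML-formatted string
```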
text() → str
Get the text content of the message.
Returns:
The text content of the message.
Return type:
str
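text() is most useful when content is a list of blocks rather than a plain string; a small sketch assuming the {"type": "text", ...} block convention:

```python
from langchain_core.messages import AIMessage

# Text blocks are concatenated into a single string.
msg = AIMessage(
    content=[{"type": "text", "text": "Hello"}, {"type": "text", "text": "!"}]
)
print(msg.text())  # -> Hello!
```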
Examples using AIMessage
- Build a Chatbot
- Build an Agent with AgentExecutor (Legacy)
- Chat Bot Feedback Template
- ChatGLM
- ChatOCIGenAI
- ChatOllama
- Conversational RAG
- Google Cloud Vertex AI
- Google Imagen
- How to add a human-in-the-loop for tools
- How to add a semantic layer over graph database
- How to add chat history
- How to add examples to the prompt for query analysis
- How to add retrieval to chatbots
- How to add tools to chatbots
- How to compose prompts together
- How to create a custom Output Parser
- How to create a custom chat model class
- How to do tool/function calling
- How to filter messages
- How to handle tool errors
- How to merge consecutive messages of the same type
- How to return structured data from a model
- How to trim messages
- How to use few-shot prompting with tool calling
- How to use prompting alone (no tool calling) to do extraction
- How to use reference examples when doing extraction
- Twitter (via Apify)
- Yuan2.0
- ZHIPU AI
- Zep Cloud
- Zep Cloud Memory
- Zep Open Source
- Zep Open Source Memory
- ZepCloudChatMessageHistory