trim_messages — 🦜🔗 LangChain documentation

langchain_core.messages.utils.trim_messages(
    messages: Sequence[MessageLikeRepresentation] | None = None,
    **kwargs: Any,
) → list[BaseMessage] | Runnable[Sequence[MessageLikeRepresentation], list[BaseMessage]]

Trim messages to be below a token count.

trim_messages can be used to reduce the size of a chat history to a specified token count or a specified message count.

In either case, if the trimmed chat history is passed directly back into a chat model, it should usually satisfy the following properties:

  1. The resulting chat history should be valid. Most chat models expect that chat history starts with either (1) a HumanMessage or (2) a SystemMessage followed by a HumanMessage. To achieve this, set start_on="human". In addition, a ToolMessage can generally only appear after an AIMessage that involved a tool call. See the following link for more information about messages: https://python.langchain.com/docs/concepts/#messages
  2. It includes recent messages and drops old messages in the chat history. To achieve this, set strategy="last".
  3. Usually, the new chat history should include the SystemMessage if it was present in the original chat history, since the SystemMessage includes special instructions for the chat model. The SystemMessage is almost always the first message in the history if present. To achieve this, set include_system=True.

Note: The examples below show how to configure trim_messages to achieve behavior consistent with the above properties.

Parameters:

  * messages – Sequence of message-like objects to trim. If omitted, a Runnable is returned instead of a list (see Return type).
  * max_tokens – Maximum token count of the trimmed chat history.
  * token_counter – Function or chat model used to count tokens. Passing len counts the number of messages instead of tokens.
  * strategy – Whether to keep the "first" or the "last" messages that fit within max_tokens.
  * allow_partial – Whether a message may be split if only part of it fits within the budget.
  * start_on – The message type the trimmed history must start on, e.g. "human".
  * include_system – Whether to keep the SystemMessage if it is present at the start of the original history.

Returns:

A list of trimmed BaseMessages, or, if no messages are passed in, a Runnable that takes a sequence of messages and returns the trimmed list.

Raises:

ValueError – if two incompatible arguments are specified, or if an unrecognized strategy is specified.

Return type:

Union[list[BaseMessage], Runnable[Sequence[MessageLikeRepresentation], list[BaseMessage]]]
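Because messages is optional, trim_messages can also be called without any messages, in which case it returns a Runnable that performs the same trimming when invoked. This declarative form is convenient for composing the trimmer with a chat model in a chain. A minimal sketch (parameter values mirror the examples below; llm stands in for an arbitrary chat model and is an assumption here):

from langchain_core.messages import trim_messages

# No messages are passed, so a Runnable is returned instead of a list.
trimmer = trim_messages(
    max_tokens=4,
    strategy="last",
    token_counter=len,  # len counts messages rather than tokens
    start_on="human",
    include_system=True,
)

# Apply it to a chat history (a list like `messages` in the examples below)...
trimmed = trimmer.invoke(messages)
# ...or compose it into a chain: chain = trimmer | llm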

Example

Trim chat history based on token count, keeping the SystemMessage if present, and ensuring that the chat history starts with a HumanMessage (or a SystemMessage followed by a HumanMessage).

from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)

messages = [
    SystemMessage("you're a good assistant, you always respond with a joke."),
    HumanMessage("i wonder why it's called langchain"),
    AIMessage(
        'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!'
    ),
    HumanMessage("and who is harrison chasing anyways"),
    AIMessage(
        "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"
    ),
    HumanMessage("what do you call a speechless parrot"),
]

from langchain_openai import ChatOpenAI

trim_messages(
    messages,
    max_tokens=45,
    strategy="last",
    token_counter=ChatOpenAI(model="gpt-4o"),
    # Most chat models expect that chat history starts with either:
    # (1) a HumanMessage or
    # (2) a SystemMessage followed by a HumanMessage
    start_on="human",
    # Usually, we want to keep the SystemMessage
    # if it's present in the original history.
    # The SystemMessage has special instructions for the model.
    include_system=True,
    allow_partial=False,
)

[
    SystemMessage(content="you're a good assistant, you always respond with a joke."),
    HumanMessage(content='what do you call a speechless parrot'),
]

Trim chat history based on the message count, keeping the SystemMessage if present, and ensuring that the chat history starts with a HumanMessage (or a SystemMessage followed by a HumanMessage).

trim_messages(
    messages,
    # Passing len as the token counter makes max_tokens
    # count the number of messages in the chat history.
    max_tokens=4,
    strategy="last",
    token_counter=len,
    # Most chat models expect that chat history starts with either:
    # (1) a HumanMessage or
    # (2) a SystemMessage followed by a HumanMessage
    start_on="human",
    # Usually, we want to keep the SystemMessage
    # if it's present in the original history.
    # The SystemMessage has special instructions for the model.
    include_system=True,
    allow_partial=False,
)

[
    SystemMessage(content="you're a good assistant, you always respond with a joke."),
    HumanMessage(content='and who is harrison chasing anyways'),
    AIMessage(content="Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"),
    HumanMessage(content='what do you call a speechless parrot'),
]

Trim chat history using a custom token counter function that counts the number of tokens in each message.

messages = [ SystemMessage("This is a 4 token text. The full message is 10 tokens."), HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="first"), AIMessage( [ {"type": "text", "text": "This is the FIRST 4 token block."}, {"type": "text", "text": "This is the SECOND 4 token block."}, ], id="second", ), HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="third"), AIMessage("This is a 4 token text. The full message is 10 tokens.", id="fourth"), ]

def dummy_token_counter(messages: list[BaseMessage]) -> int:
    # treat each message like it adds 3 default tokens at the beginning
    # of the message and at the end of the message. 3 + 4 + 3 = 10 tokens
    # per message.
    default_content_len = 4
    default_msg_prefix_len = 3
    default_msg_suffix_len = 3

    count = 0
    for msg in messages:
        if isinstance(msg.content, str):
            count += default_msg_prefix_len + default_content_len + default_msg_suffix_len
        if isinstance(msg.content, list):
            count += default_msg_prefix_len + len(msg.content) * default_content_len + default_msg_suffix_len
    return count
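As a quick sanity check, applying this counter to the five messages above gives 54 tokens: 10 for each of the four string-content messages, plus 3 + 2 × 4 + 3 = 14 for the two-block AIMessage.

assert dummy_token_counter(messages) == 54  # 10 + 10 + 14 + 10 + 10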

First 30 tokens, allowing partial messages:

trim_messages(
    messages,
    max_tokens=30,
    token_counter=dummy_token_counter,
    strategy="first",
    allow_partial=True,
)

[ SystemMessage("This is a 4 token text. The full message is 10 tokens."), HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="first"), AIMessage( [{"type": "text", "text": "This is the FIRST 4 token block."}], id="second"), ]
