Models | LangChain Reference
Initialize a chat model from any supported provider using a unified interface.
Two main use cases:
- Fixed model – specify the model upfront and get a ready-to-use chat model.
- Configurable model – specify parameters (including the model name) at runtime via config, making it easy to switch between models/providers without changing your code (see the sketch below).
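In brief, a minimal sketch of the two use cases (the model names here are illustrative and require the matching integration package):

```python
from langchain.chat_models import init_chat_model

# Fixed model: provider and model are chosen up front.
fixed = init_chat_model("openai:gpt-4o")

# Configurable model: the model is chosen per call via config.
flexible = init_chat_model(temperature=0)
flexible.invoke("hi", config={"configurable": {"model": "gpt-4o"}})
```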
Note
Requires the integration package for the chosen model provider to be installed.
See the model_provider parameter below for specific package names (e.g., pip install langchain-openai).
Refer to the provider integration's API reference for supported model parameters to use as **kwargs.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The name or ID of the model, e.g. `'o3-mini'`, `'claude-sonnet-4-5-20250929'`. You can also specify model and model provider in a single argument using the `'{model_provider}:{model}'` format, e.g. `'openai:o1'`. Will attempt to infer `model_provider` from `model` if not specified. The following providers are inferred from these model prefixes:<br>`gpt-...`, `o1...`, `o3...` -> `openai`<br>`claude...` -> `anthropic`<br>`amazon...` -> `bedrock`<br>`gemini...` -> `google_vertexai`<br>`command...` -> `cohere`<br>`accounts/fireworks...` -> `fireworks`<br>`mistral...` -> `mistralai`<br>`deepseek...` -> `deepseek`<br>`grok...` -> `xai`<br>`sonar...` -> `perplexity`<br>`solar...` -> `upstage`<br>TYPE: `str \| None` DEFAULT: `None` |
| `model_provider` | The model provider, if not specified as part of the `model` arg (see above). Supported `model_provider` values and the corresponding integration packages:<br>`openai` -> `langchain-openai`<br>`anthropic` -> `langchain-anthropic`<br>`azure_openai` -> `langchain-openai`<br>`azure_ai` -> `langchain-azure-ai`<br>`google_vertexai` -> `langchain-google-vertexai`<br>`google_genai` -> `langchain-google-genai`<br>`bedrock` -> `langchain-aws`<br>`bedrock_converse` -> `langchain-aws`<br>`cohere` -> `langchain-cohere`<br>`fireworks` -> `langchain-fireworks`<br>`together` -> `langchain-together`<br>`mistralai` -> `langchain-mistralai`<br>`huggingface` -> `langchain-huggingface`<br>`groq` -> `langchain-groq`<br>`ollama` -> `langchain-ollama`<br>`google_anthropic_vertex` -> `langchain-google-vertexai`<br>`deepseek` -> `langchain-deepseek`<br>`ibm` -> `langchain-ibm`<br>`nvidia` -> `langchain-nvidia-ai-endpoints`<br>`xai` -> `langchain-xai`<br>`perplexity` -> `langchain-perplexity`<br>`upstage` -> `langchain-upstage`<br>TYPE: `str \| None` DEFAULT: `None` |
| `configurable_fields` | Which model parameters are configurable at runtime:<br>`None`: no configurable fields (i.e., a fixed model).<br>`'any'`: all fields are configurable (see the security note below).<br>`list[str] \| tuple[str, ...]`: the specified fields are configurable. Fields are assumed to have `config_prefix` stripped if a `config_prefix` is specified.<br>If `model` is specified, defaults to `None`. If `model` is not specified, defaults to `("model", "model_provider")`.<br>Security note: setting `configurable_fields="any"` means fields like `api_key`, `base_url`, etc. can be altered at runtime, potentially redirecting model requests to a different service/user. If you accept untrusted configurations, make sure to enumerate the configurable fields explicitly with `configurable_fields=(...)`.<br>TYPE: `Literal['any'] \| list[str] \| tuple[str, ...] \| None` DEFAULT: `None` |
| `config_prefix` | Optional prefix for configuration keys. Useful when you have multiple configurable models in the same application. If `config_prefix` is a non-empty string, the model is configurable at runtime via the `config["configurable"]["{config_prefix}_{param}"]` keys (see the examples below). If `config_prefix` is an empty string, the model is configurable via `config["configurable"]["{param}"]`. TYPE: `str \| None` DEFAULT: `None` |
| `**kwargs` | Additional model-specific keyword args to pass to the underlying chat model's `__init__` method. Common parameters include:<br>`temperature`: model temperature for controlling randomness.<br>`max_tokens`: maximum number of output tokens.<br>`timeout`: maximum time (in seconds) to wait for a response.<br>`max_retries`: maximum number of retry attempts for failed requests.<br>`base_url`: custom API endpoint URL.<br>`rate_limiter`: a [BaseRateLimiter](../../langchain%5Fcore/rate%5Flimiters/#langchain%5Fcore.rate%5Flimiters.BaseRateLimiter) instance to control request rate.<br>Refer to the specific model provider's integration reference for all available parameters. A short sketch of common kwargs follows this table. TYPE: `Any` DEFAULT: `{}` |
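As a quick illustration, common kwargs can be passed straight through at initialization. A minimal sketch; whether each parameter is honored depends on the chosen provider integration:

```python
from langchain.chat_models import init_chat_model

# These kwargs are forwarded to the underlying chat model's __init__
# (here ChatOpenAI); support for each one varies by provider.
model = init_chat_model(
    "openai:gpt-4o",
    temperature=0.2,  # randomness of sampling
    max_tokens=512,   # cap on output tokens
    timeout=30,       # seconds to wait for a response
    max_retries=2,    # retry failed requests
)
```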
| RETURNS | DESCRIPTION |
|---|---|
| [BaseChatModel](../../langchain%5Fcore/language%5Fmodels/#langchain%5Fcore.language%5Fmodels.BaseChatModel) \| `_ConfigurableModel` | A `BaseChatModel` corresponding to the `model_name` and `model_provider` specified, if configurability is inferred to be `False`. If configurable, a chat model emulator that initializes the underlying model at runtime once a config is passed in. |
| RAISES | DESCRIPTION |
|---|---|
| ValueError | If model_provider cannot be inferred or isn't supported. |
| ImportError | If the model provider integration package is not installed. |
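A sketch of how these errors surface in practice (the model name here is hypothetical):

```python
from langchain.chat_models import init_chat_model

try:
    # Hypothetical model name with no inferable provider prefix.
    model = init_chat_model("my-internal-model")
except ValueError:
    # Provider couldn't be inferred from the name; pass it explicitly.
    model = init_chat_model("my-internal-model", model_provider="openai")
except ImportError:
    # The provider's integration package isn't installed, e.g.:
    #   pip install langchain-openai
    raise
```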
Initialize a non-configurable model
```python
# pip install langchain langchain-openai langchain-anthropic langchain-google-vertexai

from langchain.chat_models import init_chat_model

o3_mini = init_chat_model("openai:o3-mini", temperature=0)
claude_sonnet = init_chat_model("anthropic:claude-sonnet-4-5-20250929", temperature=0)
gemini_2_5_flash = init_chat_model("google_vertexai:gemini-2.5-flash", temperature=0)

o3_mini.invoke("what's your name")
claude_sonnet.invoke("what's your name")
gemini_2_5_flash.invoke("what's your name")
```
Partially configurable model with no default
```python
# pip install langchain langchain-openai langchain-anthropic

from langchain.chat_models import init_chat_model

# (We don't need to specify configurable=True if a model isn't specified.)
configurable_model = init_chat_model(temperature=0)

configurable_model.invoke("what's your name", config={"configurable": {"model": "gpt-4o"}})
# Use GPT-4o to generate the response

configurable_model.invoke(
    "what's your name",
    config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
)
```
Fully configurable model with a default
```python
# pip install langchain langchain-openai langchain-anthropic

from langchain.chat_models import init_chat_model

configurable_model_with_default = init_chat_model(
    "openai:gpt-4o",
    configurable_fields="any",  # Allows configuring other params like temperature, max_tokens, etc. at runtime.
    config_prefix="foo",
    temperature=0,
)

configurable_model_with_default.invoke("what's your name")
# GPT-4o response with temperature 0 (as set in the default)

configurable_model_with_default.invoke(
    "what's your name",
    config={
        "configurable": {
            "foo_model": "anthropic:claude-sonnet-4-5-20250929",
            "foo_temperature": 0.6,
        }
    },
)
# Override the default to use Sonnet 4.5 with temperature 0.6
```
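Per the security note above, if runtime config can come from untrusted sources, prefer enumerating the configurable fields instead of `'any'`. A minimal sketch:

```python
from langchain.chat_models import init_chat_model

# Only model, model_provider, and temperature can be overridden at runtime;
# sensitive fields like api_key and base_url stay fixed.
locked_down = init_chat_model(
    "openai:gpt-4o",
    configurable_fields=("model", "model_provider", "temperature"),
    temperature=0,
)
```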
Bind tools to a configurable model
You can call declarative methods (such as bind_tools or with_structured_output) on a configurable model in the same way you would on a regular chat model:
```python
# pip install langchain langchain-openai langchain-anthropic

from langchain.chat_models import init_chat_model
from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


configurable_model = init_chat_model(
    "gpt-4o", configurable_fields=("model", "model_provider"), temperature=0
)

configurable_model_with_tools = configurable_model.bind_tools(
    [
        GetWeather,
        GetPopulation,
    ]
)
configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?"
)
# Use GPT-4o

configurable_model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?",
    config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
)
# Use Sonnet 4.5
```
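The same pattern applies to other declarative methods, e.g. with_structured_output. A minimal sketch reusing the GetWeather schema above:

```python
structured_model = configurable_model.with_structured_output(GetWeather)

structured_model.invoke(
    "What's the weather in San Francisco, CA?",
    config={"configurable": {"model": "claude-sonnet-4-5-20250929"}},
)
# Returns a GetWeather instance produced by the selected model
```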