# Chat
The chat UI element provides an interactive chatbot interface for conversations. It can be customized with different models, including built-in AI models from popular providers or custom functions.
## `marimo.ui.chat`

```python
chat(
    model: Callable[[list[ChatMessage], ChatModelConfig], object],
    *,
    prompts: list[str] | None = None,
    on_message: Callable[[list[ChatMessage]], None] | None = None,
    show_configuration_controls: bool = False,
    config: ChatModelConfigDict | None = DEFAULT_CONFIG,
    allow_attachments: bool | list[str] = False,
    max_height: int | None = None,
)
```

Bases: `UIElement[dict[str, Any], list[ChatMessage]]`
A chatbot UI element for interactive conversations.
Define a chatbot by implementing a function that takes a list of `ChatMessage` objects and, optionally, a config object as input, and returns the chat response. The response can be any object, including text, plots, or marimo UI elements.
Examples:
Using a custom model:
```python
def my_rag_model(messages, config):
    # Each message has a `content` attribute, as well as a `role`
    # attribute ("user", "system", "assistant")
    question = messages[-1].content
    docs = find_docs(question)
    prompt = template(question, docs, messages)
    response = query(prompt)
    if is_dataset(response):
        return dataset_to_chart(response)
    return response


chat = mo.ui.chat(my_rag_model)
```
Async functions and async generators are also supported:
```python
async def my_rag_model(messages):
    return await my_async_function(messages)
```
Regular (sync) generators for streaming:
```python
def my_streaming_model(messages, config):
    for chunk in process_stream():
        yield chunk  # Each yield updates the UI
```
Async generators for streaming with async operations:
```python
async def my_async_streaming_model(messages, config):
    async for chunk in async_process_stream():
        yield chunk  # Each yield updates the UI
```
The last value yielded by the generator is treated as the model response. Streaming responses are automatically streamed to the frontend as they are generated.
Using a built-in model:
```python
chat = mo.ui.chat(
    mo.ai.llm.openai(
        "gpt-4o",
        system_message="You are a helpful assistant.",
    ),
)
```
Using attachments:
```python
chat = mo.ui.chat(
    mo.ai.llm.openai(
        "gpt-4o",
    ),
    allow_attachments=["image/png", "image/jpeg"],
)
```
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `value` | The current chat history, a list of `ChatMessage` objects. TYPE: `list[ChatMessage]` |

| PARAMETER | DESCRIPTION |
|---|---|
| `model` | A callable that takes in the chat history and returns a response. TYPE: `Callable[[list[ChatMessage], ChatModelConfig], object]` |
| `prompts` | Optional list of initial prompts to present to the user. TYPE: `list[str]` DEFAULT: `None` |
| `on_message` | Optional callback function to handle new messages. TYPE: `Callable[[list[ChatMessage]], None]` DEFAULT: `None` |
| `show_configuration_controls` | Whether to show the configuration controls. TYPE: `bool` DEFAULT: `False` |
| `config` | Optional configuration to override the default configuration. Keys: `max_tokens` (default 100), `temperature` (default 0.5), `top_p` (default 1), `top_k` (default 40), `frequency_penalty` (default 0), `presence_penalty` (default 0). TYPE: `ChatModelConfigDict` DEFAULT: `DEFAULT_CONFIG` |
| `allow_attachments` | Allow attachments. `True` for any attachment type, or pass a list of MIME types. TYPE: `bool \| list[str]` DEFAULT: `False` |
| `max_height` | Optional maximum height for the chat element. TYPE: `int \| None` DEFAULT: `None` |
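For example, `on_message` can be used to persist the conversation as it grows. A minimal sketch, reusing `my_rag_model` from the example above (the `save_history` helper and file path are hypothetical):

```python
import json

def save_history(messages):
    # Hypothetical helper: write the chat history to disk as JSON.
    with open("chat_history.json", "w") as f:
        json.dump(
            [{"role": m.role, "content": m.content} for m in messages],
            f,
        )

chat = mo.ui.chat(my_rag_model, on_message=save_history)
```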
### `text` (property)
A string of HTML representing this element.
### `value` (writable property)
The element's current value.
### `batch`

```python
batch(**elements: UIElement[JSONType, object]) -> batch
```
Convert an HTML object with templated text into a UI element.
This method lets you create custom UI elements that are represented by arbitrary HTML.
Example
```python
user_info = mo.md(
    '''
    - What's your name?: {name}
    - When were you born?: {birthday}
    '''
).batch(name=mo.ui.text(), birthday=mo.ui.date())
```
In this example, `user_info` is a UI element whose output is markdown and whose value is a dict with keys `'name'` and `'birthday'` (and values equal to the values of their corresponding elements).
| PARAMETER | DESCRIPTION |
|---|---|
| `elements` | The UI elements to interpolate into the HTML template. TYPE: `UIElement[JSONType, object]` DEFAULT: `{}` |
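Once the user interacts with the child elements, the batched element's value is the dict described above; the values shown here are illustrative:

```python
user_info.value  # e.g. {"name": "Ada", "birthday": datetime.date(1815, 12, 10)}
```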
### `callout`

```python
callout(
    kind: Literal["neutral", "danger", "warn", "success", "info"] = "neutral"
) -> Html
```
Create a callout containing this HTML element.
A callout wraps your HTML element in a raised box, emphasizing its importance. You can style the callout for different situations with the `kind` argument.
Examples:
```python
mo.md("Hooray, you did it!").callout(kind="success")
```
```python
mo.md("It's dangerous to go alone!").callout(kind="warn")
```
### `center`
Center an item.
Example
```python
mo.md("# Hello, world").center()
```
| RETURNS | DESCRIPTION |
|---|---|
| Html | An Html object. |
### `form`

```python
form(
    label: str = "",
    *,
    bordered: bool = True,
    loading: bool = False,
    submit_button_label: str = "Submit",
    submit_button_tooltip: str | None = None,
    submit_button_disabled: bool = False,
    clear_on_submit: bool = False,
    show_clear_button: bool = False,
    clear_button_label: str = "Clear",
    clear_button_tooltip: str | None = None,
    validate: Callable[[Optional[JSONType]], str | None] | None = None,
    on_change: Callable[[Optional[T]], None] | None = None,
) -> form[S, T]
```
Create a submittable form out of this UIElement.
Creates a form that gates submission of a UIElement's value until a submit button is clicked. The form's value is the value of the underlying element from the last submission.
Examples:
Convert any UIElement into a form:
```python
prompt = mo.ui.text_area().form()
```
Combine with `Html.batch` to create a form made out of multiple UIElements:
```python
form = (
    mo.md(
        '''
    **Enter your prompt.**

    {prompt}

    **Choose a random seed.**

    {seed}
    '''
    )
    .batch(
        prompt=mo.ui.text_area(),
        seed=mo.ui.number(),
    )
    .form()
)
```
| PARAMETER | DESCRIPTION |
|---|---|
| `label` | A text label for the form. TYPE: `str` DEFAULT: `''` |
| `bordered` | Whether the form should have a border. TYPE: `bool` DEFAULT: `True` |
| `loading` | Whether the form should be in a loading state. TYPE: `bool` DEFAULT: `False` |
| `submit_button_label` | The label of the submit button. TYPE: `str` DEFAULT: `'Submit'` |
| `submit_button_tooltip` | The tooltip of the submit button. TYPE: `str \| None` DEFAULT: `None` |
| `submit_button_disabled` | Whether the submit button should be disabled. TYPE: `bool` DEFAULT: `False` |
| `clear_on_submit` | Whether the form should clear its contents after submitting. TYPE: `bool` DEFAULT: `False` |
| `show_clear_button` | Whether the form should show a clear button. TYPE: `bool` DEFAULT: `False` |
| `clear_button_label` | The label of the clear button. TYPE: `str` DEFAULT: `'Clear'` |
| `clear_button_tooltip` | The tooltip of the clear button. TYPE: `str \| None` DEFAULT: `None` |
| `validate` | A function that takes the form's value and returns an error message if invalid, or `None` if valid. TYPE: `Callable[[Optional[JSONType]], str \| None] \| None` DEFAULT: `None` |
| `on_change` | Optional callback to run when this element's value changes. TYPE: `Callable[[Optional[T]], None] \| None` DEFAULT: `None` |
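As an illustration, `validate` can reject empty submissions before the form's value is accepted; a minimal sketch (the `require_prompt` function is hypothetical):

```python
def require_prompt(value):
    # `value` is the form's submitted value (possibly None)
    if not value or not value.get("prompt"):
        return "Please enter a prompt."
    return None  # returning None marks the value as valid

form = (
    mo.md("{prompt}")
    .batch(prompt=mo.ui.text_area())
    .form(validate=require_prompt)
)
```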
### `from_args` (classmethod)

```python
from_args(
    data: dict[str, int],
    args: InitializationArgs[S, T],
    memo: dict[int, Any] | None = None,
    basis: UIElement[S, T] | None = None,
) -> UIElement[S, T]
```
### `left`
Left-justify.
Example
```python
mo.md("# Hello, world").left()
```
| RETURNS | DESCRIPTION |
|---|---|
| Html | An Html object. |
### `right`
Right-justify.
Example
```python
mo.md("# Hello, world").right()
```
| RETURNS | DESCRIPTION |
|---|---|
| Html | An Html object. |
### `style`

```python
style(style: dict[str, Any] | None = None, **kwargs: Any) -> Html
```
Wrap an object in a styled container.
Example
```python
mo.md("...").style({"max-height": "300px", "overflow": "auto"})
mo.md("...").style(max_height="300px", overflow="auto")
```
| PARAMETER | DESCRIPTION |
|---|---|
| `style` | An optional dict of CSS styles, keyed by property name. TYPE: `dict[str, Any] \| None` DEFAULT: `None` |
| `**kwargs` | CSS styles as keyword arguments. TYPE: `Any` DEFAULT: `{}` |
## Basic Usage
Here's a simple example using a custom echo model:
```python
import marimo as mo

def echo_model(messages, config):
    return f"Echo: {messages[-1].content}"

chat = mo.ui.chat(echo_model, prompts=["Hello", "How are you?"])
chat
```
Here, `messages` is a list of `ChatMessage` objects, which have `role` (`"user"`, `"assistant"`, or `"system"`) and `content` (the message string) attributes; `config` is a `ChatModelConfig` object with various configuration parameters, which you are free to ignore.
## Using a Built-in AI Model
You can use marimo's built-in AI models, such as OpenAI's GPT:
```python
import marimo as mo

chat = mo.ui.chat(
    mo.ai.llm.openai(
        "gpt-4",
        system_message="You are a helpful assistant.",
    ),
    show_configuration_controls=True,
)
chat
```
## Accessing Chat History
You can access the chat history using the value attribute:
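```python
chat.value
```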
This returns a list of ChatMessage objects, each containing role, content, and optional attachments attributes.
## Custom Model with Additional Context
Here's an example of a custom model that uses additional context:
```python
import marimo as mo

def rag_model(messages, config):
    question = messages[-1].content
    docs = find_relevant_docs(question)
    context = "\n".join(docs)
    prompt = f"Context: {context}\n\nQuestion: {question}\n\nAnswer:"
    response = query_llm(prompt, config)
    return response

mo.ui.chat(rag_model)
```
This example demonstrates how you can implement a Retrieval-Augmented Generation (RAG) model within the chat interface.
## Templated Prompts
You can pass sample prompts to `mo.ui.chat` to allow users to select from a list of predefined prompts. By including `{{var}}` in a prompt, you can dynamically insert values into it; a form will be generated to allow users to fill in the variables.
```python
mo.ui.chat(
    mo.ai.llm.openai("gpt-4o"),
    prompts=[
        "What is the capital of France?",
        "What is the capital of Germany?",
        "What is the capital of {{country}}?",
    ],
)
```
## Including Attachments
You can allow users to upload attachments to their messages by passing an `allow_attachments` parameter to `mo.ui.chat`.
```python
mo.ui.chat(
    rag_model,
    allow_attachments=["image/png", "image/jpeg"],
    # or True for any attachment type
    # allow_attachments=True,
)
```
## Streaming Responses
Chatbots can stream responses in real-time, creating a more interactive experience similar to ChatGPT where you see the response appear word-by-word as it's generated.
Responses from built-in models (OpenAI, Anthropic, Google, Groq, Bedrock) are streamed by default.
### How Streaming Works
marimo uses delta-based streaming, which follows the industry-standard pattern used by OpenAI, Anthropic, and other AI providers. Your generator function should yield individual chunks (deltas) of new content, which marimo automatically accumulates and displays progressively.
### With Custom Models
For custom models, you can use either regular (sync) or async generator functions that yield delta chunks:
Sync generator (simpler):
```python
import marimo as mo
import time

def streaming_model(messages, config):
    """Stream responses word by word."""
    response = "This response will appear word by word!"
    words = response.split()

    for word in words:
        yield word + " "  # Yield delta chunks
        time.sleep(0.1)  # Simulate processing delay

chat = mo.ui.chat(streaming_model)
chat
```
Async generator (for async operations):
```python
import marimo as mo
import asyncio

async def async_streaming_model(messages, config):
    """Stream responses word by word asynchronously."""
    response = "This response will appear word by word!"
    words = response.split()

    for word in words:
        yield word + " "  # Yield delta chunks
        await asyncio.sleep(0.1)  # Async processing delay

chat = mo.ui.chat(async_streaming_model)
chat
```
Each yield sends a new chunk (delta) to marimo, which accumulates and displays the progressively building response in real-time.
**Delta vs Accumulated.** Yield deltas, not accumulated text. Each `yield` should be new content only:
✅ Correct (delta mode):
```python
yield "Hello"
yield " "
yield "world"
# Result: "Hello world"
```
❌ Incorrect (accumulated mode, deprecated):
```python
yield "Hello"
yield "Hello "
yield "Hello world"
# Inefficient: sends duplicate content
```
Delta mode is more efficient (reduces bandwidth by ~99% for long responses) and aligns with standard streaming APIs.
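If the API you are wrapping emits accumulated text rather than deltas, you can convert it before yielding; a minimal sketch:

```python
def to_deltas(accumulated_stream):
    """Convert an accumulated text stream into delta chunks."""
    previous = ""
    for text in accumulated_stream:
        yield text[len(previous):]  # yield only the newly added suffix
        previous = text
```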
## Built-in Models
marimo provides several built-in AI models that you can use with the chat UI element.
### OpenAI
```python
import marimo as mo

mo.ui.chat(
    mo.ai.llm.openai(
        "gpt-4o",
        system_message="You are a helpful assistant.",
        api_key="sk-proj-...",
    ),
    show_configuration_controls=True,
)
```
#### `marimo.ai.llm.openai`

```python
openai(
    model: str,
    *,
    system_message: str = DEFAULT_SYSTEM_MESSAGE,
    api_key: str | None = None,
    base_url: str | None = None,
)
```

Bases: `ChatModel`

OpenAI `ChatModel`.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to use. Can be found on the OpenAI models page. TYPE: `str` |
| `system_message` | The system message to use. TYPE: `str` DEFAULT: `DEFAULT_SYSTEM_MESSAGE` |
| `api_key` | The API key to use. If not provided, the API key will be retrieved from the `OPENAI_API_KEY` environment variable or the user's config. TYPE: `str \| None` DEFAULT: `None` |
| `base_url` | The base URL to use. TYPE: `str \| None` DEFAULT: `None` |
Instance attributes: `api_key`, `base_url`, `model`, `system_message`.
### Anthropic
```python
import marimo as mo

mo.ui.chat(
    mo.ai.llm.anthropic(
        "claude-3-5-sonnet-20240620",
        system_message="You are a helpful assistant.",
        api_key="sk-ant-...",
    ),
    show_configuration_controls=True,
)
```
#### `marimo.ai.llm.anthropic`

```python
anthropic(
    model: str,
    *,
    system_message: str = DEFAULT_SYSTEM_MESSAGE,
    api_key: str | None = None,
    base_url: str | None = None,
)
```

Bases: `ChatModel`

Anthropic `ChatModel`.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to use. Can be found on the Anthropic models page. TYPE: `str` |
| `system_message` | The system message to use. TYPE: `str` DEFAULT: `DEFAULT_SYSTEM_MESSAGE` |
| `api_key` | The API key to use. If not provided, the API key will be retrieved from the `ANTHROPIC_API_KEY` environment variable or the user's config. TYPE: `str \| None` DEFAULT: `None` |
| `base_url` | The base URL to use. TYPE: `str \| None` DEFAULT: `None` |
Instance attributes: `api_key`, `base_url`, `model`, `system_message`.
##### `supports_temperature`

```python
supports_temperature(model: str) -> bool
```
### Google AI
```python
import marimo as mo

mo.ui.chat(
    mo.ai.llm.google(
        "gemini-1.5-pro-latest",
        system_message="You are a helpful assistant.",
        api_key="AI..",
    ),
    show_configuration_controls=True,
)
```
#### `marimo.ai.llm.google`

```python
google(
    model: str,
    *,
    system_message: str = DEFAULT_SYSTEM_MESSAGE,
    api_key: str | None = None,
)
```

Bases: `ChatModel`

Google AI `ChatModel`.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to use. Can be found on the Gemini models page. TYPE: `str` |
| `system_message` | The system message to use. TYPE: `str` DEFAULT: `DEFAULT_SYSTEM_MESSAGE` |
| `api_key` | The API key to use. If not provided, the API key will be retrieved from the `GOOGLE_AI_API_KEY` environment variable or the user's config. TYPE: `str \| None` DEFAULT: `None` |
Instance attributes: `api_key`, `model`, `system_message`.
### Groq
```python
import marimo as mo

mo.ui.chat(
    mo.ai.llm.groq(
        "llama-3.1-70b-versatile",
        system_message="You are a helpful assistant.",
        api_key="gsk-...",
    ),
    show_configuration_controls=True,
)
```
#### `marimo.ai.llm.groq`

```python
groq(
    model: str,
    *,
    system_message: str = DEFAULT_SYSTEM_MESSAGE,
    api_key: str | None = None,
    base_url: str | None = None,
)
```

Bases: `ChatModel`

Groq `ChatModel`.
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The model to use. Can be found on the Groq models page. TYPE: `str` |
| `system_message` | The system message to use. TYPE: `str` DEFAULT: `DEFAULT_SYSTEM_MESSAGE` |
| `api_key` | The API key to use. If not provided, the API key will be retrieved from the `GROQ_API_KEY` environment variable or the user's config. TYPE: `str \| None` DEFAULT: `None` |
| `base_url` | The base URL to use. TYPE: `str \| None` DEFAULT: `None` |
Instance attributes: `api_key`, `base_url`, `model`, `system_message`.
## Types
Chatbots can be implemented with a function that receives a list of `ChatMessage` objects and a `ChatModelConfig`.
### `marimo.ai.ChatMessage`

Bases: `Struct`
A message in a chat.
Attributes:

- `role: Literal["user", "assistant", "system"]`
- `content`
- `attachments: list[ChatAttachment] | None = None`
- `parts: list[ChatPart] | None = None`
### `marimo.ai.ChatModelConfig` (dataclass)

```python
ChatModelConfig(
    max_tokens: int | None = None,
    temperature: float | None = None,
    top_p: float | None = None,
    top_k: int | None = None,
    frequency_penalty: float | None = None,
    presence_penalty: float | None = None,
)
```
All fields are optional and default to `None`.
`mo.ui.chat` can be instantiated with an initial configuration: a dictionary conforming to `ChatModelConfigDict`.
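For example, reusing the `echo_model` from Basic Usage (the parameter values here are illustrative):

```python
chat = mo.ui.chat(
    echo_model,
    config={"max_tokens": 256, "temperature": 0.2},
)
```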
`ChatMessage`s can also include attachments.
### `marimo.ai.ChatAttachment` (dataclass)

```python
ChatAttachment(
    url: str,
    name: str = "attachment",
    content_type: str | None = None,
)
```
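For illustration, a user message with an image attachment could be constructed like this (the URL is a placeholder):

```python
import marimo as mo

message = mo.ai.ChatMessage(
    role="user",
    content="What's in this image?",
    attachments=[
        mo.ai.ChatAttachment(
            url="https://example.com/image.png",  # placeholder URL
            name="image.png",
            content_type="image/png",
        )
    ],
)
```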
## Supported Model Providers
We support any OpenAI-compatible endpoint. If you want a specific provider added explicitly (one that doesn't abide by the standard OpenAI API format), you can file a feature request.

Normally, overriding the `base_url` parameter should work. Here are some examples:
**Cerebras:**

```python
chatbot = mo.ui.chat(
    mo.ai.llm.openai(
        model="llama3.1-8b",
        api_key="csk-...",  # insert your key here
        base_url="https://api.cerebras.ai/v1/",
    ),
)
chatbot
```

**Groq:**

```python
chatbot = mo.ui.chat(
    mo.ai.llm.openai(
        model="llama-3.1-70b-versatile",
        api_key="gsk_...",  # insert your key here
        base_url="https://api.groq.com/openai/v1/",
    ),
)
chatbot
```

**xAI:**

```python
chatbot = mo.ui.chat(
    mo.ai.llm.openai(
        model="grok-beta",
        api_key=key,  # insert your key here
        base_url="https://api.x.ai/v1",
    ),
)
chatbot
```
**Note:** We have added examples for Groq and Cerebras. These providers offer free API keys and are great for trying out Llama models (from Meta). You can sign up on their platforms and easily integrate them with marimo. For more information, refer to marimo's AI completion documentation.