Mistral AI API | Mistral AI Large Language Models

Mistral AI API (1.0.0)


The specification for our Chat Completion and Embeddings APIs. Create your account on La Plateforme to get access, and read the docs to learn how to use them.

Chat

Chat Completion

Request Body schema: application/json

required

modelrequired string (Model) ID of the model to use. You can use the List Available Models API to see all of your available models, or see our Model overview for model descriptions.
Temperature (number) or Temperature (null) (Temperature) What sampling temperature to use; we recommend a value between 0.0 and 0.7. Higher values like 0.7 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both. The default value varies depending on the model you are targeting; call the /models endpoint to retrieve the appropriate value.
top_p number (Top P) [ 0 .. 1 ] Default: 1 Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
Max Tokens (integer) or Max Tokens (null) (Max Tokens) The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.
stream boolean (Stream) Default: false Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.
Stop (string) or Array of Stop (strings) (Stop) Stop generation if this token is detected, or if any of these tokens is detected when an array is provided.
Random Seed (integer) or Random Seed (null) (Random Seed) The seed to use for random sampling. If set, different calls will generate deterministic results.
required Array of any (Messages) The prompt(s) to generate completions for, encoded as a list of dicts, each with a role and content.
object (ResponseFormat)
Array of Tools (objects) or Tools (null) (Tools)
ToolChoice (object) or ToolChoiceEnum (string) (Tool Choice) Default: "auto"
presence_penalty number (Presence Penalty) [ -2 .. 2 ] Default: 0 presence_penalty determines how much the model penalizes the repetition of words or phrases. A higher presence penalty encourages the model to use a wider variety of words and phrases, making the output more diverse and creative.
frequency_penalty number (Frequency Penalty) [ -2 .. 2 ] Default: 0 frequency_penalty penalizes the repetition of words based on their frequency in the generated text. A higher frequency penalty discourages the model from repeating words that have already appeared frequently in the output, promoting diversity and reducing repetition.
N (integer) or N (null) (N) Number of completions to return for each request; input tokens are only billed once.
object (Prediction) Default: {"type":"content","content":""} Enable users to specify expected results, optimizing response times by leveraging known or predictable content. This approach is especially effective for updating text documents or code files with minimal changes, reducing latency while maintaining high-quality results.
parallel_tool_calls boolean (Parallel Tool Calls) Default: true
safe_prompt boolean Default: false Whether to inject a safety prompt before all conversations.
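As a sketch, a minimal Chat Completion request can be built with only the Python standard library. The model name and parameter values below are illustrative; only model and messages are required, and you should check the List Available Models API for what your account can access:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; use your La Plateforme key

# Illustrative payload; only "model" and "messages" are required.
payload = {
    "model": "mistral-small-latest",
    "messages": [{"role": "user", "content": "Summarize nucleus sampling in one line."}],
    "temperature": 0.3,  # recommended range 0.0-0.7; tune this or top_p, not both
    "max_tokens": 128,
    "stream": False,
}
req = urllib.request.Request(
    "https://api.mistral.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# with urllib.request.urlopen(req) as resp:  # requires a valid API key
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

With stream set to true, the same request returns data-only server-sent events instead of a single JSON body.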

Responses


FIM

Fim Completion

Request Body schema: application/json

required

modelrequired string (Model) Default: "codestral-2405" ID of the model to use. Currently compatible only with: codestral-2405, codestral-latest.
Temperature (number) or Temperature (null) (Temperature) What sampling temperature to use; we recommend a value between 0.0 and 0.7. Higher values like 0.7 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both. The default value varies depending on the model you are targeting; call the /models endpoint to retrieve the appropriate value.
top_p number (Top P) [ 0 .. 1 ] Default: 1 Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
Max Tokens (integer) or Max Tokens (null) (Max Tokens) The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.
stream boolean (Stream) Default: false Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.
Stop (string) or Array of Stop (strings) (Stop) Stop generation if this token is detected, or if any of these tokens is detected when an array is provided.
Random Seed (integer) or Random Seed (null) (Random Seed) The seed to use for random sampling. If set, different calls will generate deterministic results.
promptrequired string (Prompt) The text/code to complete.
Suffix (string) or Suffix (null) (Suffix) Default: "" Optional text/code that adds more context for the model. When given a prompt and a suffix, the model will fill in what lies between them. When no suffix is provided, the model will simply complete starting from the prompt.
Min Tokens (integer) or Min Tokens (null) (Min Tokens) The minimum number of tokens to generate in the completion.
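A minimal payload sketch for fill-in-the-middle; values are illustrative, and the model completes the code between prompt and suffix:

```python
import json

# Illustrative FIM payload; the model fills in the code between
# "prompt" and "suffix".
payload = {
    "model": "codestral-2405",
    "prompt": "def fibonacci(n: int) -> int:\n",
    "suffix": "\nprint(fibonacci(10))",
    "max_tokens": 64,
    "temperature": 0.2,
}
body = json.dumps(payload).encode("utf-8")  # POST to /v1/fim/completions
```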

Responses


Agents

Agents Completion

Request Body schema: application/json

required

Max Tokens (integer) or Max Tokens (null) (Max Tokens) The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.
stream boolean (Stream) Default: false Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.
Stop (string) or Array of Stop (strings) (Stop) Stop generation if this token is detected, or if any of these tokens is detected when an array is provided.
Random Seed (integer) or Random Seed (null) (Random Seed) The seed to use for random sampling. If set, different calls will generate deterministic results.
required Array of any (Messages) The prompt(s) to generate completions for, encoded as a list of dicts, each with a role and content.
object (ResponseFormat)
Array of Tools (objects) or Tools (null) (Tools)
ToolChoice (object) or ToolChoiceEnum (string) (Tool Choice) Default: "auto"
presence_penalty number (Presence Penalty) [ -2 .. 2 ] Default: 0 presence_penalty determines how much the model penalizes the repetition of words or phrases. A higher presence penalty encourages the model to use a wider variety of words and phrases, making the output more diverse and creative.
frequency_penalty number (Frequency Penalty) [ -2 .. 2 ] Default: 0 frequency_penalty penalizes the repetition of words based on their frequency in the generated text. A higher frequency penalty discourages the model from repeating words that have already appeared frequently in the output, promoting diversity and reducing repetition.
N (integer) or N (null) (N) Number of completions to return for each request; input tokens are only billed once.
object (Prediction) Default: {"type":"content","content":""} Enable users to specify expected results, optimizing response times by leveraging known or predictable content. This approach is especially effective for updating text documents or code files with minimal changes, reducing latency while maintaining high-quality results.
parallel_tool_calls boolean (Parallel Tool Calls) Default: true
agent_idrequired string The ID of the agent to use for this completion.
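A payload sketch; unlike Chat Completion, this request carries an agent_id instead of a model. The agent ID and endpoint path in the comment are illustrative assumptions:

```python
import json

# Illustrative agents-completion payload: "agent_id" replaces "model".
payload = {
    "agent_id": "ag-placeholder",  # placeholder agent ID from your account
    "messages": [{"role": "user", "content": "What can you do?"}],
    "max_tokens": 256,
}
body = json.dumps(payload).encode("utf-8")  # POST to /v1/agents/completions (assumed path)
```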

Responses


Embeddings

Embeddings

Request Body schema: application/json

required

modelrequired
required Input (string) or Array of Input (strings) (Input)
Output Dimension (integer) or Output Dimension (null) (Output Dimension) The dimension of the output embeddings.
output_dtype string (EmbeddingDtype) Default: "float" Enum: "float" "int8" "uint8" "binary" "ubinary" The data type of the output embeddings.
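A payload sketch for the Embeddings endpoint; the model name mistral-embed is an assumption, and input accepts a single string or a list of strings:

```python
import json

# Illustrative embeddings payload.
payload = {
    "model": "mistral-embed",  # assumed embedding model name; verify via /models
    "input": ["first sentence", "second sentence"],
    "output_dtype": "float",   # one of: float, int8, uint8, binary, ubinary
}
body = json.dumps(payload).encode("utf-8")  # POST to /v1/embeddings
```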

Responses


Classifiers

Moderations

Request Body schema: application/json

required

modelrequired
required Input (string) or Array of Input (strings) (Input)
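A payload sketch for raw-text moderation; the moderation model name is an assumption:

```python
import json

# Illustrative moderation payload; "input" is a string or list of strings.
payload = {
    "model": "mistral-moderation-latest",  # assumed moderation model name
    "input": ["text to classify", "another text"],
}
body = json.dumps(payload).encode("utf-8")  # POST to /v1/moderations
```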

Responses


Chat Moderations

Request Body schema: application/json

required

required Array of Input (any) or Array of Input (any) (Input)
modelrequired

Responses


Classifications

Request Body schema: application/json

required

modelrequired
required Input (string) or Array of Input (strings) (Input)

Responses


Chat Classifications

Request Body schema: application/json

required

modelrequired
required InstructRequest (object) or Array of ChatClassificationRequestInputs (objects) (ChatClassificationRequestInputs)

Responses


Files

Upload File

Upload a file that can be used across various endpoints.

The size of individual files can be a maximum of 512 MB. The Fine-tuning API only supports .jsonl files.

Please contact us if you need to increase these storage limits.

Request Body schema: multipart/form-data

required

filerequired string <binary> (File) The File object (not file name) to be uploaded. To upload a file and specify a custom file name, format your request as follows: file=@path/to/your/file.jsonl;filename=custom_name.jsonl Otherwise, you can just keep the original file name: file=@path/to/your/file.jsonl
purpose string (FilePurpose) Enum: "fine-tune" "batch" "ocr"
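Since this endpoint takes multipart/form-data rather than JSON, here is a sketch of building the body by hand with the standard library (an HTTP client library would normally do this for you; the file content is illustrative):

```python
import uuid

# Build a multipart/form-data body by hand (illustrative sketch).
boundary = uuid.uuid4().hex
jsonl_bytes = b'{"messages": [{"role": "user", "content": "hi"}]}\n'
body = (
    (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="purpose"\r\n\r\n'
        "fine-tune\r\n"
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="file"; filename="train.jsonl"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode("utf-8")
    + jsonl_bytes
    + f"\r\n--{boundary}--\r\n".encode("utf-8")
)
content_type = f"multipart/form-data; boundary={boundary}"
# POST body to /v1/files with the header Content-Type: <content_type>
```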

Responses


List Files

Returns a list of files that belong to the user's organization.

query Parameters
page integer (Page) Default: 0
page_size integer (Page Size) Default: 100
Array of Sample Type (strings) or Sample Type (null) (Sample Type)
Array of Source (strings) or Source (null) (Source)
Search (string) or Search (null) (Search)
FilePurpose (string) or null

Responses


Retrieve File

Returns information about a specific file.

path Parameters
file_idrequired

Responses


Delete File

path Parameters
file_idrequired

Responses


Download File

path Parameters
file_idrequired

Responses

Get Signed Url

path Parameters
file_idrequired
query Parameters
expiry integer (Expiry) Default: 24 Number of hours before the URL becomes invalid.

Responses


Fine Tuning

Get Fine Tuning Jobs

Get a list of fine-tuning jobs for your organization and user.

query Parameters
page integer (Page) Default: 0 The page number of the results to be returned.
page_size integer (Page Size) Default: 100 The number of items to return per page.
Model (string) or Model (null) (Model) The model name used for fine-tuning to filter on. When set, the other results are not displayed.
Created After (string) or Created After (null) (Created After) The date/time to filter on. When set, the results for previous creation times are not displayed.
Created Before (string) or Created Before (null) (Created Before)
created_by_me boolean (Created By Me) Default: false When set, only return results for jobs created by the API caller. Other results are not displayed.
Status (string) or Status (null) (Status) The current job state to filter on. When set, the other results are not displayed.
Wandb Project (string) or Wandb Project (null) (Wandb Project) The Weights and Biases project to filter on. When set, the other results are not displayed.
Wandb Name (string) or Wandb Name (null) (Wandb Name) The Weights and Biases run name to filter on. When set, the other results are not displayed.
Suffix (string) or Suffix (null) (Suffix) The model suffix to filter on. When set, the other results are not displayed.

Responses


Create Fine Tuning Job

Create a new fine-tuning job; it will be queued for processing.

query Parameters
Dry Run (boolean) or Dry Run (null) (Dry Run) If true, the job is not spawned; instead, the query returns a handful of useful metadata for the user to perform sanity checks (see the LegacyJobMetadataOut response). Otherwise, the job is started and the query returns the job ID along with some of the input parameters (see the JobOut response).
Request Body schema: application/json

required

modelrequired string (FineTuneableModel) Enum: "open-mistral-7b" "mistral-small-latest" "codestral-latest" "mistral-large-latest" "open-mistral-nemo" "ministral-3b-latest" "ministral-8b-latest" The name of the model to fine-tune.
Array of objects (Training Files) Default: []
Array of Validation Files (strings) or Validation Files (null) (Validation Files) A list containing the IDs of uploaded files that contain validation data. If you provide these files, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in checkpoints when getting the status of a running fine-tuning job. The same data should not be present in both train and validation files.
Suffix (string) or Suffix (null) (Suffix) A string that will be added to your fine-tuning model name. For example, a suffix of "my-great-model" would produce a model name like ft:open-mistral-7b:my-great-model:xxx...
Array of Integrations (any) or Integrations (null) (Integrations) A list of integrations to enable for your fine-tuning job.
auto_start boolean (Auto Start) This field will be required in a future release.
invalid_sample_skip_percentage number (Invalid Sample Skip Percentage) [ 0 .. 0.5 ] Default: 0
FineTuneableModelType (string) or null
required CompletionTrainingParametersIn (object) or ClassifierTrainingParametersIn (object) (Hyperparameters)
Array of Repositories (any) or Repositories (null) (Repositories)
Array of Classifier Targets (objects) or Classifier Targets (null) (Classifier Targets)
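A payload sketch for creating a fine-tuning job. The file ID and hyperparameter values are placeholders, and the hyperparameter field name is an assumption to be checked against the CompletionTrainingParametersIn schema:

```python
import json

# Illustrative fine-tuning job payload; the file ID is a placeholder for
# an ID returned by the Upload File endpoint.
payload = {
    "model": "open-mistral-7b",
    "training_files": [{"file_id": "00000000-0000-0000-0000-000000000000"}],
    "hyperparameters": {"learning_rate": 1e-4},  # assumed field name
    "suffix": "my-great-model",   # produces e.g. ft:open-mistral-7b:my-great-model:xxx
    "auto_start": False,
}
body = json.dumps(payload).encode("utf-8")  # POST to /v1/fine_tuning/jobs (optionally ?dry_run=true)
```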

Responses


Get Fine Tuning Job

Get a fine-tuning job's details by its UUID.

path Parameters
job_idrequired string <uuid> (Job Id) The ID of the job to analyse.

Responses


Cancel Fine Tuning Job

Request the cancellation of a fine tuning job.

path Parameters
job_idrequired string <uuid> (Job Id) The ID of the job to cancel.

Responses


Start Fine Tuning Job

Request the start of a validated fine tuning job.

path Parameters
job_idrequired

Responses


List Models

List all models available to the user.

Responses


Retrieve Model

Retrieve information about a model.

path Parameters
model_idrequired string (Model Id) Example: ft:open-mistral-7b:587a6b29:20240514:7e773925 The ID of the model to retrieve.

Responses


Delete Model

Delete a fine-tuned model.

path Parameters
model_idrequired string (Model Id) Example: ft:open-mistral-7b:587a6b29:20240514:7e773925 The ID of the model to delete.

Responses


Update Fine Tuned Model

Update a model name or description.

path Parameters
model_idrequired string (Model Id) Example: ft:open-mistral-7b:587a6b29:20240514:7e773925 The ID of the model to update.
Request Body schema: application/json

required

Name (string) or Name (null) (Name)
Description (string) or Description (null) (Description)

Responses


Archive Fine Tuned Model

Archive a fine-tuned model.

path Parameters
model_idrequired string (Model Id) Example: ft:open-mistral-7b:587a6b29:20240514:7e773925 The ID of the model to archive.

Responses


Unarchive Fine Tuned Model

Unarchive a fine-tuned model.

path Parameters
model_idrequired string (Model Id) Example: ft:open-mistral-7b:587a6b29:20240514:7e773925 The ID of the model to unarchive.

Responses


Batch

Get Batch Jobs

Get a list of batch jobs for your organization and user.

query Parameters
page integer (Page) Default: 0
page_size integer (Page Size) Default: 100
Model (string) or Model (null) (Model)
Metadata (object) or Metadata (null) (Metadata)
Created After (string) or Created After (null) (Created After)
created_by_me boolean (Created By Me) Default: false
Array of Status (strings) or Status (null) (Status)

Responses


Create Batch Job

Create a new batch job; it will be queued for processing.

Request Body schema: application/json

required

input_filesrequired Array of strings <uuid> (Input Files)
endpointrequired string (ApiEndpoint) Enum: "/v1/chat/completions" "/v1/embeddings" "/v1/fim/completions" "/v1/moderations" "/v1/chat/moderations"
modelrequired
Metadata (object) or Metadata (null) (Metadata)
timeout_hours integer (Timeout Hours) Default: 24
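A payload sketch for creating a batch job; the endpoint value must be one of the ApiEndpoint enum values listed above, and the other values are illustrative:

```python
import json

# Illustrative batch job payload; input file IDs are placeholders for
# IDs returned by the Upload File endpoint.
payload = {
    "input_files": ["00000000-0000-0000-0000-000000000000"],
    "endpoint": "/v1/chat/completions",  # must be an ApiEndpoint enum value
    "model": "mistral-small-latest",
    "metadata": {"job": "nightly-eval"},
    "timeout_hours": 24,
}
body = json.dumps(payload).encode("utf-8")  # POST to /v1/batch/jobs (assumed path)
```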

Responses


Get Batch Job

Get a batch job's details by its UUID.

path Parameters
job_idrequired

Responses


Cancel Batch Job

Request the cancellation of a batch job.

path Parameters
job_idrequired

Responses


OCR API

OCR

Request Body schema: application/json

required

required Model (string) or Model (null) (Model)
id string (Id)
required DocumentURLChunk (object) or ImageURLChunk (object) (Document) Document to run OCR on
Array of Pages (integers) or Pages (null) (Pages) Specific pages the user wants to process, in various formats: a single number, a range, or a list of both. Page numbering starts from 0.
Include Image Base64 (boolean) or Include Image Base64 (null) (Include Image Base64) Include image URLs in response
Image Limit (integer) or Image Limit (null) (Image Limit) Max images to extract
Image Min Size (integer) or Image Min Size (null) (Image Min Size) Minimum height and width of image to extract
ResponseFormat (object) or null Structured output class for extracting useful information from each extracted bounding box/image in the document. Only json_schema is valid for this field.
ResponseFormat (object) or null Structured output class for extracting useful information from the entire document. Only json_schema is valid for this field.
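A payload sketch for an OCR request on a document URL; the model name and the DocumentURLChunk field names are assumptions to be checked against the schema:

```python
import json

# Illustrative OCR payload pointing at a hypothetical document URL.
payload = {
    "model": "mistral-ocr-latest",  # assumed OCR model name
    "document": {                   # assumed DocumentURLChunk shape
        "type": "document_url",
        "document_url": "https://example.com/sample.pdf",
    },
    "pages": [0, 1],                # page numbering starts at 0
    "include_image_base64": False,
}
body = json.dumps(payload).encode("utf-8")  # POST to /v1/ocr (assumed path)
```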

Responses


(beta) Agents API

Create an agent that can be used within a conversation.

Create a new agent, giving it instructions, tools, and a description. The agent is then available to be used as a regular assistant in a conversation, or as part of an agent pool from which it can be used.

Request Body schema: application/json

required

Instructions (string) or Instructions (null) (Instructions) Instruction prompt the model will follow during the conversation.
Array of any (Tools) List of tools which are available to the model during the conversation.
object (CompletionArgs) Completion arguments that will be used to generate assistant responses. Can be overridden at each message request.
modelrequired string (Model)
namerequired string (Name)
Description (string) or Description (null) (Description)
Array of Handoffs (strings) or Handoffs (null) (Handoffs)
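A payload sketch for creating an agent with the beta Agents API; the model name and tool type are assumptions, and only model and name are required:

```python
import json

# Illustrative agent-creation payload for the beta Agents API.
payload = {
    "model": "mistral-medium-latest",   # assumed model name
    "name": "docs-helper",
    "description": "Answers questions about internal docs.",
    "instructions": "Be concise and cite the relevant section.",
    "tools": [{"type": "web_search"}],  # assumed tool type
    "completion_args": {"temperature": 0.3},
}
body = json.dumps(payload).encode("utf-8")  # POST to /v1/agents (assumed path)
```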

Responses


List agent entities.

Retrieve a list of agent entities sorted by creation time.

query Parameters
page integer (Page) Default: 0
page_size integer (Page Size) Default: 20

Responses


Retrieve an agent entity.

Given an agent ID, retrieve the corresponding agent entity with its attributes.

path Parameters
agent_idrequired

Responses


Update an agent entity.

Update an agent's attributes and create a new version.

path Parameters
agent_idrequired
Request Body schema: application/json

required

Instructions (string) or Instructions (null) (Instructions) Instruction prompt the model will follow during the conversation.
Array of any (Tools) List of tools which are available to the model during the conversation.
object (CompletionArgs) Completion arguments that will be used to generate assistant responses. Can be overridden at each message request.
Model (string) or Model (null) (Model)
Name (string) or Name (null) (Name)
Description (string) or Description (null) (Description)
Array of Handoffs (strings) or Handoffs (null) (Handoffs)

Responses


Update an agent version.

Switch the version of an agent.

path Parameters
agent_idrequired
query Parameters
versionrequired

Responses


(beta) Conversations API

Create a conversation and append entries to it.

Create a new conversation, using a base model or an agent, and append entries. Completion and tool executions are run, and the response is appended to the conversation. Use the returned conversation_id to continue the conversation.

Request Body schema: application/json

required

required ConversationInputs (string) or (Array of InputEntries (MessageInputEntry (object) or FunctionResultEntry (object))) (ConversationInputs)
Stream (boolean) or Stream (boolean) (Stream) Default: false Value: false
Store (boolean) or Store (null) (Store) Default: null
Handoff Execution (string) or Handoff Execution (null) (Handoff Execution) Default: null
Instructions (string) or Instructions (null) (Instructions) Default: null
Array of Tools (any) or Tools (null) (Tools) Default: null
CompletionArgs (object) or null Default: null
Name (string) or Name (null) (Name) Default: null
Description (string) or Description (null) (Description) Default: null
Agent Id (string) or Agent Id (null) (Agent Id) Default: null
Model (string) or Model (null) (Model) Default: null
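A payload sketch for starting a conversation from a base model; the "inputs" key name is inferred from the ConversationInputs schema, and accepts a string or a list of input entries:

```python
import json

# Illustrative conversation-start payload; provide either "model" or "agent_id".
payload = {
    "model": "mistral-small-latest",
    "inputs": "Hello, start a conversation.",  # key name inferred from ConversationInputs
    "name": "support-session",
    "store": True,
}
body = json.dumps(payload).encode("utf-8")  # POST to /v1/conversations (assumed path)
# The response includes a conversation_id used to append further entries.
```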

Responses


List all created conversations.

Retrieve a list of conversation entities sorted by creation time.

query Parameters
page integer (Page) Default: 0
page_size integer (Page Size) Default: 100

Responses


Retrieve conversation information.

Given a conversation_id, retrieve the conversation entity with its attributes.

path Parameters
conversation_idrequired

Responses


Append new entries to an existing conversation.

Run completion on the history of the conversation and the user entries. Returns the newly created entries.

path Parameters
conversation_idrequired string (Conversation Id) ID of the conversation to which we append entries.
Request Body schema: application/json

required

required ConversationInputs (string) or (Array of InputEntries (MessageInputEntry (object) or FunctionResultEntry (object))) (ConversationInputs)
stream boolean (Stream) Default: false Value: false Whether to stream back partial progress. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.
store boolean (Store) Default: true Whether to store the results on our servers.
handoff_execution string (Handoff Execution) Default: "server" Enum: "client" "server"
object (CompletionArgs) Completion arguments that will be used to generate assistant responses. Can be overridden at each message request.

Responses


Retrieve all entries in a conversation.

Given a conversation_id, retrieve all the entries belonging to that conversation. The entries are sorted in the order they were appended; they can be messages, connectors, or function_call entries.

path Parameters
conversation_idrequired

Responses


Retrieve all messages in a conversation.

Given a conversation_id, retrieve all the messages belonging to that conversation. This is similar to retrieving all entries, except that only messages are returned.

path Parameters
conversation_idrequired

Responses


Restart a conversation starting from a given entry.

Given a conversation_id and an entry id, recreate the conversation from that point and run completion. A new conversation is returned containing the newly created entries.

path Parameters
conversation_idrequired
Request Body schema: application/json

required

required ConversationInputs (string) or (Array of InputEntries (MessageInputEntry (object) or FunctionResultEntry (object))) (ConversationInputs)
stream boolean (Stream) Default: false Value: false Whether to stream back partial progress. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.
store boolean (Store) Default: true Whether to store the results on our servers.
handoff_execution string (Handoff Execution) Default: "server" Enum: "client" "server"
from_entry_idrequired string (From Entry Id)
object (CompletionArgs) Completion arguments that will be used to generate assistant responses. Can be overridden at each message request.

Responses


Create a conversation and append entries to it.

Create a new conversation, using a base model or an agent, and append entries. Completion and tool executions are run, and the response is appended to the conversation. Use the returned conversation_id to continue the conversation.

Request Body schema: application/json

required

required ConversationInputs (string) or (Array of InputEntries (MessageInputEntry (object) or FunctionResultEntry (object))) (ConversationInputs)
Stream (boolean) or Stream (boolean) (Stream) Default: true Value: true
Store (boolean) or Store (null) (Store) Default: null
Handoff Execution (string) or Handoff Execution (null) (Handoff Execution) Default: null
Instructions (string) or Instructions (null) (Instructions) Default: null
Array of Tools (any) or Tools (null) (Tools) Default: null
CompletionArgs (object) or null Default: null
Name (string) or Name (null) (Name) Default: null
Description (string) or Description (null) (Description) Default: null
Agent Id (string) or Agent Id (null) (Agent Id) Default: null
Model (string) or Model (null) (Model) Default: null

Responses


Append new entries to an existing conversation.

Run completion on the history of the conversation and the user entries. Returns the newly created entries.

path Parameters
conversation_idrequired string (Conversation Id) ID of the conversation to which we append entries.
Request Body schema: application/json

required

required ConversationInputs (string) or (Array of InputEntries (MessageInputEntry (object) or FunctionResultEntry (object))) (ConversationInputs)
stream boolean (Stream) Default: true Value: true Whether to stream back partial progress. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.
store boolean (Store) Default: true Whether to store the results on our servers.
handoff_execution string (Handoff Execution) Default: "server" Enum: "client" "server"
object (CompletionArgs) Completion arguments that will be used to generate assistant responses. Can be overridden at each message request.

Responses


Restart a conversation starting from a given entry.

Given a conversation_id and an entry id, recreate the conversation from that point and run completion. A new conversation is returned containing the newly created entries.

path Parameters
conversation_idrequired
Request Body schema: application/json

required

required ConversationInputs (string) or (Array of InputEntries (MessageInputEntry (object) or FunctionResultEntry (object))) (ConversationInputs)
stream boolean (Stream) Default: true Value: true Whether to stream back partial progress. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON.
store boolean (Store) Default: true Whether to store the results on our servers.
handoff_execution string (Handoff Execution) Default: "server" Enum: "client" "server"
from_entry_idrequired string (From Entry Id)
object (CompletionArgs) Completion arguments that will be used to generate assistant responses. Can be overridden at each message request.

Responses
