
OpenAI APIs - Completions#

SGLang provides OpenAI-compatible APIs to enable a smooth transition from OpenAI services to self-hosted local models. A complete reference for the API is available in the OpenAI API Reference.

This tutorial covers the following popular APIs: chat completions, completions, and batches.

Check out other tutorials to learn about vision APIs for vision-language models and embedding APIs for embedding models.

Launch A Server#

Launch the server in your terminal and wait for it to initialize.

from sglang.test.test_utils import is_in_ci

if is_in_ci():
    from patch import launch_server_cmd
else:
    from sglang.utils import launch_server_cmd

from sglang.utils import wait_for_server, print_highlight, terminate_process

server_process, port = launch_server_cmd(
    "python3 -m sglang.launch_server --model-path qwen/qwen2.5-0.5b-instruct --host 0.0.0.0 --mem-fraction-static 0.8"
)

wait_for_server(f"http://localhost:{port}")
print(f"Server started on http://localhost:{port}")

[2025-06-14 07:23:35] server_args=ServerArgs(model_path='qwen/qwen2.5-0.5b-instruct', tokenizer_path='qwen/qwen2.5-0.5b-instruct', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', trust_remote_code=False, dtype='auto', kv_cache_dtype='auto', quantization=None, quantization_param_path=None, context_length=None, device='cuda', served_model_name='qwen/qwen2.5-0.5b-instruct', chat_template=None, completion_template=None, is_embedding=False, enable_multimodal=None, revision=None, impl='auto', host='0.0.0.0', port=36868, mem_fraction_static=0.8, max_running_requests=200, max_total_tokens=20480, chunked_prefill_size=8192, max_prefill_tokens=16384, schedule_policy='fcfs', schedule_conservativeness=1.0, cpu_offload_gb=0, page_size=1, tp_size=1, pp_size=1, max_micro_batch_size=None, stream_interval=1, stream_output=False, random_seed=153406719, constrained_json_whitespace_pattern=None, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, sleep_on_idle=False, log_level='info', log_level_http=None, log_requests=False, log_requests_level=0, show_time_cost=False, enable_metrics=False, bucket_time_to_first_token=None, bucket_e2e_request_latency=None, bucket_inter_token_latency=None, collect_tokens_histogram=False, decode_log_interval=40, enable_request_time_stats_logging=False, kv_events_config=None, api_key=None, file_storage_path='sglang_storage', enable_cache_report=False, reasoning_parser=None, tool_call_parser=None, dp_size=1, load_balance_method='round_robin', dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', preferred_sampling_params=None, lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend=None, sampling_backend='flashinfer', grammar_backend='xgrammar', mm_attention_backend=None, speculative_algorithm=None, speculative_draft_model_path=None, speculative_num_steps=None, speculative_eagle_topk=None, speculative_num_draft_tokens=None, speculative_accept_threshold_single=1.0, speculative_accept_threshold_acc=1.0, speculative_token_map=None, ep_size=1, enable_ep_moe=False, enable_deepep_moe=False, deepep_mode='auto', ep_num_redundant_experts=0, ep_dispatch_algorithm='static', init_expert_location='trivial', enable_eplb=False, eplb_algorithm='auto', eplb_rebalance_num_iterations=1000, eplb_rebalance_layers_per_chunk=None, expert_distribution_recorder_mode=None, expert_distribution_recorder_buffer_size=1000, enable_expert_distribution_metrics=False, deepep_config=None, moe_dense_tp_size=None, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, cuda_graph_max_bs=None, cuda_graph_bs=None, disable_cuda_graph=True, disable_cuda_graph_padding=False, enable_profile_cuda_graph=False, enable_nccl_nvls=False, enable_tokenizer_batch_encode=False, disable_outlines_disk_cache=False, disable_custom_all_reduce=False, enable_mscclpp=False, disable_overlap_schedule=False, disable_overlap_cg_plan=False, enable_mixed_chunk=False, enable_dp_attention=False, enable_dp_lm_head=False, enable_two_batch_overlap=False, enable_torch_compile=False, torch_compile_max_bs=32, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, 
enable_hierarchical_cache=False, hicache_ratio=2.0, hicache_size=0, hicache_write_policy='write_through_selective', flashinfer_mla_disable_ragged=False, disable_shared_experts_fusion=False, disable_chunked_prefix_cache=False, disable_fast_image_processor=False, enable_return_hidden_states=False, warmups=None, debug_tensor_dump_output_folder=None, debug_tensor_dump_input_file=None, debug_tensor_dump_inject=False, debug_tensor_dump_prefill_only=False, disaggregation_mode='null', disaggregation_transfer_backend='mooncake', disaggregation_bootstrap_port=8998, disaggregation_ib_device=None, num_reserved_decode_tokens=512, pdlb_url=None) [2025-06-14 07:23:46] Attention backend not set. Use fa3 backend by default. [2025-06-14 07:23:46] Init torch distributed begin. [2025-06-14 07:23:46] Init torch distributed ends. mem usage=0.00 GB [2025-06-14 07:23:47] Load weight begin. avail mem=60.49 GB [2025-06-14 07:23:47] Using model weights format ['*.safetensors'] [2025-06-14 07:23:48] No model.safetensors.index.json found in remote. Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s] Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 5.65it/s] Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 5.64it/s]

[2025-06-14 07:23:48] Load weight end. type=Qwen2ForCausalLM, dtype=torch.bfloat16, avail mem=59.52 GB, mem usage=0.98 GB. [2025-06-14 07:23:48] KV Cache is allocated. #tokens: 20480, K size: 0.12 GB, V size: 0.12 GB [2025-06-14 07:23:48] Memory pool end. avail mem=59.11 GB [2025-06-14 07:23:48] max_total_num_tokens=20480, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=200, context_len=32768, available_gpu_mem=59.01 GB [2025-06-14 07:23:49] INFO: Started server process [1165860] [2025-06-14 07:23:49] INFO: Waiting for application startup. [2025-06-14 07:23:49] INFO: Application startup complete. [2025-06-14 07:23:49] INFO: Uvicorn running on http://0.0.0.0:36868 (Press CTRL+C to quit) [2025-06-14 07:23:49] INFO: 127.0.0.1:60362 - "GET /v1/models HTTP/1.1" 200 OK [2025-06-14 07:23:50] INFO: 127.0.0.1:43020 - "GET /get_model_info HTTP/1.1" 200 OK [2025-06-14 07:23:50] Prefill batch. #new-seq: 1, #new-token: 6, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0 [2025-06-14 07:23:51] INFO: 127.0.0.1:43024 - "POST /generate HTTP/1.1" 200 OK [2025-06-14 07:23:51] The server is fired up and ready to roll!

NOTE: Typically, the server runs in a separate terminal. In this notebook, we run the server and notebook code together, so their outputs are combined. To improve clarity, the server logs are displayed in the original black color, while the notebook outputs are highlighted in blue. These notebooks run in a parallel CI environment, so the throughput is not representative of actual performance.

Server started on http://localhost:36868

Chat Completions#

Usage#

The server fully implements the OpenAI API. It will automatically apply the chat template specified in the Hugging Face tokenizer, if one is available. You can also specify a custom chat template with --chat-template when launching the server.
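For example, a launch command with a custom template might look like the following. This is only an illustrative sketch: the template value is a placeholder, and depending on your SGLang version it can be a built-in template name or a path to a template file, so check the server arguments documentation for the accepted formats.

python3 -m sglang.launch_server --model-path qwen/qwen2.5-0.5b-instruct --chat-template ./my_chat_template.jinja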

import openai

client = openai.Client(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")

response = client.chat.completions.create(
    model="qwen/qwen2.5-0.5b-instruct",
    messages=[
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0,
    max_tokens=64,
)

print_highlight(f"Response: {response}")

[2025-06-14 07:23:54] Detected chat template content format: string [2025-06-14 07:23:54] Prefill batch. #new-seq: 1, #new-token: 37, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0 [2025-06-14 07:23:54] Decode batch. #running-req: 1, #token: 70, token usage: 0.00, cuda graph: False, gen throughput (token/s): 6.58, #queue-req: 0 [2025-06-14 07:23:54] INFO: 127.0.0.1:43034 - "POST /v1/chat/completions HTTP/1.1" 200 OK

Response: ChatCompletion(id='ba754e2e55244872a4c4550bb88393ae', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Sure, here are three countries and their respective capitals:\n\n1. **United States** - Washington, D.C.\n2. **Canada** - Ottawa\n3. **Australia** - Canberra', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None, reasoning_content=None), matched_stop=151645)], created=1749885834, model='qwen/qwen2.5-0.5b-instruct', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=39, prompt_tokens=37, total_tokens=76, completion_tokens_details=None, prompt_tokens_details=None))

Parameters#

The chat completions API accepts OpenAI Chat Completions API’s parameters. Refer to OpenAI Chat Completions API for more details.

SGLang extends the standard API with the extra_body parameter, allowing for additional customization. One key option within extra_body is chat_template_kwargs, which can be used to pass arguments to the chat template processor.

Enabling Model Thinking/Reasoning#

You can use chat_template_kwargs to enable or disable the model’s internal thinking or reasoning process output. Set "enable_thinking": True within chat_template_kwargs to include the reasoning steps in the response. This requires launching the server with a compatible reasoning parser (e.g., --reasoning-parser qwen3 for Qwen3 models).

Here’s an example demonstrating how to enable thinking and retrieve the reasoning content separately (using separate_reasoning: True):

Ensure the server is launched with a compatible reasoning parser, e.g.:

python3 -m sglang.launch_server --model-path QwQ/Qwen3-32B-250415 --reasoning-parser qwen3 ...

from openai import OpenAI

# Modify OpenAI's API key and API base to use SGLang's API server.
openai_api_key = "EMPTY"
openai_api_base = f"http://127.0.0.1:{port}/v1"  # Use the correct port

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

model = "QwQ/Qwen3-32B-250415"  # Use the model loaded by the server
messages = [{"role": "user", "content": "9.11 and 9.8, which is greater?"}]

response = client.chat.completions.create(
    model=model,
    messages=messages,
    extra_body={
        "chat_template_kwargs": {"enable_thinking": True},
        "separate_reasoning": True,
    },
)

print("response.choices[0].message.reasoning_content: \n", response.choices[0].message.reasoning_content)
print("response.choices[0].message.content: \n", response.choices[0].message.content)

Example Output:

response.choices[0].message.reasoning_content:
Okay, so I need to figure out which number is greater between 9.11 and 9.8. Hmm, let me think. Both numbers start with 9, right? So the whole number part is the same. That means I need to look at the decimal parts to determine which one is bigger. ... Therefore, after checking multiple methods—aligning decimals, subtracting, converting to fractions, and using a real-world analogy—it's clear that 9.8 is greater than 9.11.

response.choices[0].message.content:
To determine which number is greater between 9.11 and 9.8, follow these steps: ... Answer: 9.8 is greater than 9.11.

Setting "enable_thinking": False (or omitting it) will result in reasoning_content being None.
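For comparison, here is a minimal sketch of the same request with thinking disabled, reusing the client, model, and messages defined above:

response = client.chat.completions.create(
    model=model,
    messages=messages,
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)

# With thinking disabled, no reasoning is separated out.
print(response.choices[0].message.reasoning_content)  # Expected: None
print(response.choices[0].message.content)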

Here is an example of a detailed chat completion request using standard OpenAI parameters:

response = client.chat.completions.create(
    model="qwen/qwen2.5-0.5b-instruct",
    messages=[
        {
            "role": "system",
            "content": "You are a knowledgeable historian who provides concise responses.",
        },
        {"role": "user", "content": "Tell me about ancient Rome"},
        {
            "role": "assistant",
            "content": "Ancient Rome was a civilization centered in Italy.",
        },
        {"role": "user", "content": "What were their major achievements?"},
    ],
    temperature=0.3,  # Lower temperature for more focused responses
    max_tokens=128,  # Reasonable length for a concise response
    top_p=0.95,  # Slightly higher for better fluency
    presence_penalty=0.2,  # Mild penalty to avoid repetition
    frequency_penalty=0.2,  # Mild penalty for more natural language
    n=1,  # Single response is usually more stable
    seed=42,  # Keep for reproducibility
)

print_highlight(response.choices[0].message.content)

[2025-06-14 07:23:54] Prefill batch. #new-seq: 1, #new-token: 49, #cached-token: 5, token usage: 0.00, #running-req: 0, #queue-req: 0 [2025-06-14 07:23:55] Decode batch. #running-req: 1, #token: 88, token usage: 0.00, cuda graph: False, gen throughput (token/s): 110.13, #queue-req: 0 [2025-06-14 07:23:55] Decode batch. #running-req: 1, #token: 128, token usage: 0.01, cuda graph: False, gen throughput (token/s): 129.47, #queue-req: 0 [2025-06-14 07:23:55] Decode batch. #running-req: 1, #token: 168, token usage: 0.01, cuda graph: False, gen throughput (token/s): 129.60, #queue-req: 0 [2025-06-14 07:23:55] INFO: 127.0.0.1:43034 - "POST /v1/chat/completions HTTP/1.1" 200 OK

Ancient Rome was a major civilization that thrived from the 8th century BC to the 5th century AD. It was a vast and diverse society, with a complex political structure, extensive trade networks, and a rich cultural heritage. Some of the major achievements of ancient Rome include:

1. **Founding of Rome**: The city of Rome was founded in 753 BC by Romulus, a tribal leader who became the first king of Rome.

2. **Roman Republic**: The Roman Republic was established in 509 BC under Augustus, who became the first Roman emperor.

3. **Roman Empire**:

Streaming mode is also supported.

stream = client.chat.completions.create(
    model="qwen/qwen2.5-0.5b-instruct",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")

[2025-06-14 07:23:55] INFO: 127.0.0.1:43034 - "POST /v1/chat/completions HTTP/1.1" 200 OK [2025-06-14 07:23:55] Prefill batch. #new-seq: 1, #new-token: 10, #cached-token: 24, token usage: 0.00, #running-req: 0, #queue-req: 0 Yes, that is a test. I'm here to assist you with any information or questions you might have. Do you have[2025-06-14 07:23:56] Decode batch. #running-req: 1, #token: 60, token usage: 0.00, cuda graph: False, gen throughput (token/s): 118.50, #queue-req: 0 any specific questions or topics you'd like to discuss?

Completions#

Usage#

The Completions API is similar to the Chat Completions API, but it takes a plain prompt instead of the messages parameter and does not apply a chat template.

response = client.completions.create(
    model="qwen/qwen2.5-0.5b-instruct",
    prompt="List 3 countries and their capitals.",
    temperature=0,
    max_tokens=64,
    n=1,
    stop=None,
)

print_highlight(f"Response: {response}")

[2025-06-14 07:23:56] Prefill batch. #new-seq: 1, #new-token: 8, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0 [2025-06-14 07:23:56] Decode batch. #running-req: 1, #token: 37, token usage: 0.00, cuda graph: False, gen throughput (token/s): 110.48, #queue-req: 0 [2025-06-14 07:23:56] INFO: 127.0.0.1:43034 - "POST /v1/completions HTTP/1.1" 200 OK

Response: Completion(id='ae69b76bc806431491ca359d140f7407', choices=[CompletionChoice(finish_reason='length', index=0, logprobs=None, text=' 1. United States - Washington D.C.\n2. Canada - Ottawa\n3. France - Paris\n4. Germany - Berlin\n5. Japan - Tokyo\n6. Italy - Rome\n7. Spain - Madrid\n8. United Kingdom - London\n9. Australia - Canberra\n10. New Zealand', matched_stop=None)], created=1749885836, model='qwen/qwen2.5-0.5b-instruct', object='text_completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=64, prompt_tokens=8, total_tokens=72, completion_tokens_details=None, prompt_tokens_details=None))

Parameters#

The completions API accepts OpenAI Completions API’s parameters. Refer to OpenAI Completions API for more details.

Here is an example of a detailed completions request:

response = client.completions.create(
    model="qwen/qwen2.5-0.5b-instruct",
    prompt="Write a short story about a space explorer.",
    temperature=0.7,  # Moderate temperature for creative writing
    max_tokens=150,  # Longer response for a story
    top_p=0.9,  # Balanced diversity in word choice
    stop=["\n\n", "THE END"],  # Multiple stop sequences
    presence_penalty=0.3,  # Encourage novel elements
    frequency_penalty=0.3,  # Reduce repetitive phrases
    n=1,  # Generate one completion
    seed=123,  # For reproducible results
)

print_highlight(f"Response: {response}")

[2025-06-14 07:23:56] Prefill batch. #new-seq: 1, #new-token: 9, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0 [2025-06-14 07:23:56] Decode batch. #running-req: 1, #token: 14, token usage: 0.00, cuda graph: False, gen throughput (token/s): 122.75, #queue-req: 0 [2025-06-14 07:23:57] Decode batch. #running-req: 1, #token: 54, token usage: 0.00, cuda graph: False, gen throughput (token/s): 128.16, #queue-req: 0 [2025-06-14 07:23:57] INFO: 127.0.0.1:43034 - "POST /v1/completions HTTP/1.1" 200 OK

Response: Completion(id='23ea55c598ef40beb0ebdd2d60d88ee1', choices=[CompletionChoice(finish_reason='stop', index=0, logprobs=None, text=' Once upon a time, there was a space explorer named Alex who had always dreamed of exploring the stars. He had studied astronomy and had a passion for the universe, and he knew that it would be his destiny to travel to the farthest reaches of space.', matched_stop='\n\n')], created=1749885836, model='qwen/qwen2.5-0.5b-instruct', object='text_completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=52, prompt_tokens=9, total_tokens=61, completion_tokens_details=None, prompt_tokens_details=None))

Structured Outputs (JSON, Regex, EBNF)#

For the OpenAI-compatible structured outputs API, refer to Structured Outputs for more details.
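As a quick illustration, a JSON-constrained request through the OpenAI-compatible interface can be sketched as follows, reusing the client from the Chat Completions section above. The schema and its name are hypothetical examples; refer to the Structured Outputs guide for the authoritative usage.

json_schema = {
    "type": "object",
    "properties": {
        "country": {"type": "string"},
        "capital": {"type": "string"},
    },
    "required": ["country", "capital"],
}

response = client.chat.completions.create(
    model="qwen/qwen2.5-0.5b-instruct",
    messages=[{"role": "user", "content": "Name a country and its capital as JSON."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "country_capital", "schema": json_schema},
    },
)
print_highlight(response.choices[0].message.content)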

Batches#

The Batches API is also supported for both chat completions and completions. You can upload your requests as jsonl files, create a batch job, and retrieve the results when the batch job completes (which takes longer but costs less).

The batches APIs are:

  1. files: Upload a jsonl file containing the batch requests.
  2. batches: Create a batch job from an uploaded file.

Here is an example of a batch job for chat completions; completions are similar.

import json
import time

from openai import OpenAI

client = OpenAI(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")

requests = [
    {
        "custom_id": "request-1",
        "method": "POST",
        "url": "/chat/completions",
        "body": {
            "model": "qwen/qwen2.5-0.5b-instruct",
            "messages": [
                {"role": "user", "content": "Tell me a joke about programming"}
            ],
            "max_tokens": 50,
        },
    },
    {
        "custom_id": "request-2",
        "method": "POST",
        "url": "/chat/completions",
        "body": {
            "model": "qwen/qwen2.5-0.5b-instruct",
            "messages": [{"role": "user", "content": "What is Python?"}],
            "max_tokens": 50,
        },
    },
]

input_file_path = "batch_requests.jsonl"

with open(input_file_path, "w") as f:
    for req in requests:
        f.write(json.dumps(req) + "\n")

with open(input_file_path, "rb") as f:
    file_response = client.files.create(file=f, purpose="batch")

batch_response = client.batches.create(
    input_file_id=file_response.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

print_highlight(f"Batch job created with ID: {batch_response.id}")

[2025-06-14 07:23:57] INFO: 127.0.0.1:43038 - "POST /v1/files HTTP/1.1" 200 OK [2025-06-14 07:23:57] INFO: 127.0.0.1:43038 - "POST /v1/batches HTTP/1.1" 200 OK

Batch job created with ID: batch_371443ce-cf27-4c93-bb01-581df2e31817

[2025-06-14 07:23:57] Prefill batch. #new-seq: 2, #new-token: 20, #cached-token: 48, token usage: 0.00, #running-req: 0, #queue-req: 0

while batch_response.status not in ["completed", "failed", "cancelled"]:
    time.sleep(3)
    print(f"Batch job status: {batch_response.status}...trying again in 3 seconds...")
    batch_response = client.batches.retrieve(batch_response.id)

if batch_response.status == "completed":
    print("Batch job completed successfully!")
    print(f"Request counts: {batch_response.request_counts}")

    result_file_id = batch_response.output_file_id
    file_response = client.files.content(result_file_id)
    result_content = file_response.read().decode("utf-8")

    results = [
        json.loads(line) for line in result_content.split("\n") if line.strip() != ""
    ]

    for result in results:
        print_highlight(f"Request {result['custom_id']}:")
        print_highlight(f"Response: {result['response']}")

    print_highlight("Cleaning up files...")
    # Only delete the result file ID since file_response is just content
    client.files.delete(result_file_id)

else:
    print_highlight(f"Batch job failed with status: {batch_response.status}")
    if hasattr(batch_response, "errors"):
        print_highlight(f"Errors: {batch_response.errors}")

[2025-06-14 07:23:57] Decode batch. #running-req: 1, #token: 66, token usage: 0.00, cuda graph: False, gen throughput (token/s): 112.25, #queue-req: 0 Batch job status: validating...trying again in 3 seconds... [2025-06-14 07:24:00] INFO: 127.0.0.1:43038 - "GET /v1/batches/batch_371443ce-cf27-4c93-bb01-581df2e31817 HTTP/1.1" 200 OK Batch job completed successfully! Request counts: BatchRequestCounts(completed=2, failed=0, total=2) [2025-06-14 07:24:00] INFO: 127.0.0.1:43038 - "GET /v1/files/backend_result_file-8cfc659b-269d-48de-99f9-502980b835df/content HTTP/1.1" 200 OK

Response: {'status_code': 200, 'request_id': 'batch_371443ce-cf27-4c93-bb01-581df2e31817-req_0', 'body': {'id': 'batch_371443ce-cf27-4c93-bb01-581df2e31817-req_0', 'object': 'chat.completion', 'created': 1749885837, 'model': 'qwen/qwen2.5-0.5b-instruct', 'choices': {'index': 0, 'message': {'role': 'assistant', 'content': "Sure! Here's a programming joke I came up with:\n\nWhy did the programmer break up with the compiler?\n\nBecause it was a terrible language!", 'tool_calls': None, 'reasoning_content': None}, 'logprobs': None, 'finish_reason': 'stop', 'matched_stop': 151645}, 'usage': {'prompt_tokens': 35, 'completion_tokens': 30, 'total_tokens': 65}, 'system_fingerprint': None}}

Response: {'status_code': 200, 'request_id': 'batch_371443ce-cf27-4c93-bb01-581df2e31817-req_1', 'body': {'id': 'batch_371443ce-cf27-4c93-bb01-581df2e31817-req_1', 'object': 'chat.completion', 'created': 1749885837, 'model': 'qwen/qwen2.5-0.5b-instruct', 'choices': {'index': 0, 'message': {'role': 'assistant', 'content': 'Python is a high-level, interpreted programming language developed by Guido van Rossum. It is designed to be easy to read and write, with a syntax that is similar to other popular high-level languages like C++ and Java. Python is not limited', 'tool_calls': None, 'reasoning_content': None}, 'logprobs': None, 'finish_reason': 'length', 'matched_stop': None}, 'usage': {'prompt_tokens': 33, 'completion_tokens': 50, 'total_tokens': 83}, 'system_fingerprint': None}}

[2025-06-14 07:24:00] INFO: 127.0.0.1:43038 - "DELETE /v1/files/backend_result_file-8cfc659b-269d-48de-99f9-502980b835df HTTP/1.1" 200 OK

Completing a batch job can take a while. You can use the following two APIs to retrieve the batch job status or cancel it:

  1. batches/{batch_id}: Retrieve the batch job status.
  2. batches/{batch_id}/cancel: Cancel the batch job.

Here is an example to check the batch job status.

import json
import time

from openai import OpenAI

client = OpenAI(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")

requests = []
for i in range(20):
    requests.append(
        {
            "custom_id": f"request-{i}",
            "method": "POST",
            "url": "/chat/completions",
            "body": {
                "model": "qwen/qwen2.5-0.5b-instruct",
                "messages": [
                    {
                        "role": "system",
                        "content": f"{i}: You are a helpful AI assistant",
                    },
                    {
                        "role": "user",
                        "content": "Write a detailed story about topic. Make it very long.",
                    },
                ],
                "max_tokens": 64,
            },
        }
    )

input_file_path = "batch_requests.jsonl"
with open(input_file_path, "w") as f:
    for req in requests:
        f.write(json.dumps(req) + "\n")

with open(input_file_path, "rb") as f:
    uploaded_file = client.files.create(file=f, purpose="batch")

batch_job = client.batches.create(
    input_file_id=uploaded_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

print_highlight(f"Created batch job with ID: {batch_job.id}")
print_highlight(f"Initial status: {batch_job.status}")

time.sleep(10)

max_checks = 5
for i in range(max_checks):
    batch_details = client.batches.retrieve(batch_id=batch_job.id)

    print_highlight(
        f"Batch job details (check {i+1} / {max_checks}) // ID: {batch_details.id} // Status: {batch_details.status} // Created at: {batch_details.created_at} // Input file ID: {batch_details.input_file_id} // Output file ID: {batch_details.output_file_id}"
    )
    print_highlight(
        f"<strong>Request counts: Total: {batch_details.request_counts.total} // Completed: {batch_details.request_counts.completed} // Failed: {batch_details.request_counts.failed}</strong>"
    )

    time.sleep(3)

[2025-06-14 07:24:00] INFO: 127.0.0.1:60482 - "POST /v1/files HTTP/1.1" 200 OK [2025-06-14 07:24:00] INFO: 127.0.0.1:60482 - "POST /v1/batches HTTP/1.1" 200 OK

Created batch job with ID: batch_6e8f3351-1098-4e7b-a632-ac5640749d76

Initial status: validating

[2025-06-14 07:24:00] Prefill batch. #new-seq: 4, #new-token: 120, #cached-token: 12, token usage: 0.00, #running-req: 0, #queue-req: 0 [2025-06-14 07:24:00] Prefill batch. #new-seq: 16, #new-token: 490, #cached-token: 48, token usage: 0.01, #running-req: 4, #queue-req: 0 [2025-06-14 07:24:00] Decode batch. #running-req: 20, #token: 1063, token usage: 0.05, cuda graph: False, gen throughput (token/s): 156.59, #queue-req: 0 [2025-06-14 07:24:01] Decode batch. #running-req: 20, #token: 1863, token usage: 0.09, cuda graph: False, gen throughput (token/s): 2205.97, #queue-req: 0 [2025-06-14 07:24:10] INFO: 127.0.0.1:34494 - "GET /v1/batches/batch_6e8f3351-1098-4e7b-a632-ac5640749d76 HTTP/1.1" 200 OK

Batch job details (check 1 / 5) // ID: batch_6e8f3351-1098-4e7b-a632-ac5640749d76 // Status: completed // Created at: 1749885840 // Input file ID: backend_input_file-732e16c1-b7d1-4cd9-88de-109c12e3a0b3 // Output file ID: backend_result_file-0efded51-0244-4278-bd7f-981d4976c66e

**Request counts: Total: 20 // Completed: 20 // Failed: 0**

[2025-06-14 07:24:13] INFO: 127.0.0.1:34494 - "GET /v1/batches/batch_6e8f3351-1098-4e7b-a632-ac5640749d76 HTTP/1.1" 200 OK

Batch job details (check 2 / 5) // ID: batch_6e8f3351-1098-4e7b-a632-ac5640749d76 // Status: completed // Created at: 1749885840 // Input file ID: backend_input_file-732e16c1-b7d1-4cd9-88de-109c12e3a0b3 // Output file ID: backend_result_file-0efded51-0244-4278-bd7f-981d4976c66e

**Request counts: Total: 20 // Completed: 20 // Failed: 0**

[2025-06-14 07:24:16] INFO: 127.0.0.1:34494 - "GET /v1/batches/batch_6e8f3351-1098-4e7b-a632-ac5640749d76 HTTP/1.1" 200 OK

Batch job details (check 3 / 5) // ID: batch_6e8f3351-1098-4e7b-a632-ac5640749d76 // Status: completed // Created at: 1749885840 // Input file ID: backend_input_file-732e16c1-b7d1-4cd9-88de-109c12e3a0b3 // Output file ID: backend_result_file-0efded51-0244-4278-bd7f-981d4976c66e

**Request counts: Total: 20 // Completed: 20 // Failed: 0**

[2025-06-14 07:24:19] INFO: 127.0.0.1:34494 - "GET /v1/batches/batch_6e8f3351-1098-4e7b-a632-ac5640749d76 HTTP/1.1" 200 OK

Batch job details (check 4 / 5) // ID: batch_6e8f3351-1098-4e7b-a632-ac5640749d76 // Status: completed // Created at: 1749885840 // Input file ID: backend_input_file-732e16c1-b7d1-4cd9-88de-109c12e3a0b3 // Output file ID: backend_result_file-0efded51-0244-4278-bd7f-981d4976c66e

**Request counts: Total: 20 // Completed: 20 // Failed: 0**

[2025-06-14 07:24:22] INFO: 127.0.0.1:34494 - "GET /v1/batches/batch_6e8f3351-1098-4e7b-a632-ac5640749d76 HTTP/1.1" 200 OK

Batch job details (check 5 / 5) // ID: batch_6e8f3351-1098-4e7b-a632-ac5640749d76 // Status: completed // Created at: 1749885840 // Input file ID: backend_input_file-732e16c1-b7d1-4cd9-88de-109c12e3a0b3 // Output file ID: backend_result_file-0efded51-0244-4278-bd7f-981d4976c66e

**Request counts: Total: 20 // Completed: 20 // Failed: 0**

Here is an example to cancel a batch job.

import json
import os
import time

from openai import OpenAI

client = OpenAI(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")

requests = []
for i in range(5000):
    requests.append(
        {
            "custom_id": f"request-{i}",
            "method": "POST",
            "url": "/chat/completions",
            "body": {
                "model": "qwen/qwen2.5-0.5b-instruct",
                "messages": [
                    {
                        "role": "system",
                        "content": f"{i}: You are a helpful AI assistant",
                    },
                    {
                        "role": "user",
                        "content": "Write a detailed story about topic. Make it very long.",
                    },
                ],
                "max_tokens": 128,
            },
        }
    )

input_file_path = "batch_requests.jsonl"
with open(input_file_path, "w") as f:
    for req in requests:
        f.write(json.dumps(req) + "\n")

with open(input_file_path, "rb") as f:
    uploaded_file = client.files.create(file=f, purpose="batch")

batch_job = client.batches.create(
    input_file_id=uploaded_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

print_highlight(f"Created batch job with ID: {batch_job.id}")
print_highlight(f"Initial status: {batch_job.status}")

time.sleep(10)

try:
    cancelled_job = client.batches.cancel(batch_id=batch_job.id)
    print_highlight(f"Cancellation initiated. Status: {cancelled_job.status}")
    assert cancelled_job.status == "cancelling"

    # Monitor the cancellation process
    while cancelled_job.status not in ["failed", "cancelled"]:
        time.sleep(3)
        cancelled_job = client.batches.retrieve(batch_job.id)
        print_highlight(f"Current status: {cancelled_job.status}")

    # Verify final status
    assert cancelled_job.status == "cancelled"
    print_highlight("Batch job successfully cancelled")

except Exception as e:
    print_highlight(f"Error during cancellation: {e}")
    raise e

finally:
    try:
        del_response = client.files.delete(uploaded_file.id)
        if del_response.deleted:
            print_highlight("Successfully cleaned up input file")
        if os.path.exists(input_file_path):
            os.remove(input_file_path)
            print_highlight("Successfully deleted local batch_requests.jsonl file")
    except Exception as e:
        print_highlight(f"Error cleaning up: {e}")
        raise e

[2025-06-14 07:24:25] INFO: 127.0.0.1:39698 - "POST /v1/files HTTP/1.1" 200 OK [2025-06-14 07:24:25] INFO: 127.0.0.1:39698 - "POST /v1/batches HTTP/1.1" 200 OK

Created batch job with ID: batch_f9e32291-40f8-47c7-8e4d-e21b0051c2f5

Initial status: validating

[2025-06-14 07:24:26] Prefill batch. #new-seq: 32, #new-token: 380, #cached-token: 698, token usage: 0.03, #running-req: 0, #queue-req: 0 [2025-06-14 07:24:26] Prefill batch. #new-seq: 105, #new-token: 3150, #cached-token: 457, token usage: 0.05, #running-req: 32, #queue-req: 422 [2025-06-14 07:24:27] Prefill batch. #new-seq: 25, #new-token: 750, #cached-token: 125, token usage: 0.28, #running-req: 135, #queue-req: 4838 [2025-06-14 07:24:27] Prefill batch. #new-seq: 4, #new-token: 120, #cached-token: 20, token usage: 0.42, #running-req: 159, #queue-req: 4834 [2025-06-14 07:24:27] Prefill batch. #new-seq: 2, #new-token: 60, #cached-token: 10, token usage: 0.43, #running-req: 162, #queue-req: 4832 [2025-06-14 07:24:27] Decode batch. #running-req: 163, #token: 10780, token usage: 0.53, cuda graph: False, gen throughput (token/s): 222.91, #queue-req: 4832 [2025-06-14 07:24:27] Decode batch. #running-req: 161, #token: 17095, token usage: 0.83, cuda graph: False, gen throughput (token/s): 18088.19, #queue-req: 4832 [2025-06-14 07:24:27] KV cache pool is full. Retract requests. #retracted_reqs: 25, #new_token_ratio: 0.5997 -> 0.9345 [2025-06-14 07:24:28] Decode batch. #running-req: 136, #token: 20215, token usage: 0.99, cuda graph: False, gen throughput (token/s): 16484.93, #queue-req: 4857 [2025-06-14 07:24:28] KV cache pool is full. Retract requests. #retracted_reqs: 16, #new_token_ratio: 0.9154 -> 1.0000 [2025-06-14 07:24:28] Prefill batch. #new-seq: 9, #new-token: 270, #cached-token: 45, token usage: 0.88, #running-req: 120, #queue-req: 4864 [2025-06-14 07:24:28] Prefill batch. #new-seq: 120, #new-token: 3610, #cached-token: 590, token usage: 0.02, #running-req: 9, #queue-req: 4744 [2025-06-14 07:24:28] Prefill batch. #new-seq: 4, #new-token: 120, #cached-token: 20, token usage: 0.27, #running-req: 127, #queue-req: 4740 [2025-06-14 07:24:28] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.28, #running-req: 130, #queue-req: 4739 [2025-06-14 07:24:28] Prefill batch. #new-seq: 2, #new-token: 60, #cached-token: 10, token usage: 0.36, #running-req: 130, #queue-req: 4737 [2025-06-14 07:24:28] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.36, #running-req: 131, #queue-req: 4736 [2025-06-14 07:24:28] Decode batch. #running-req: 132, #token: 7868, token usage: 0.38, cuda graph: False, gen throughput (token/s): 9866.70, #queue-req: 4736 [2025-06-14 07:24:28] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.43, #running-req: 131, #queue-req: 4735 [2025-06-14 07:24:28] Prefill batch. #new-seq: 2, #new-token: 60, #cached-token: 10, token usage: 0.45, #running-req: 131, #queue-req: 4733 [2025-06-14 07:24:28] Decode batch. #running-req: 133, #token: 13121, token usage: 0.64, cuda graph: False, gen throughput (token/s): 14035.73, #queue-req: 4733 [2025-06-14 07:24:29] Decode batch. #running-req: 133, #token: 18441, token usage: 0.90, cuda graph: False, gen throughput (token/s): 15139.98, #queue-req: 4733 [2025-06-14 07:24:29] Prefill batch. #new-seq: 5, #new-token: 153, #cached-token: 22, token usage: 0.91, #running-req: 125, #queue-req: 4728 [2025-06-14 07:24:29] Prefill batch. #new-seq: 113, #new-token: 3522, #cached-token: 433, token usage: 0.08, #running-req: 16, #queue-req: 4615 [2025-06-14 07:24:29] Prefill batch. #new-seq: 21, #new-token: 647, #cached-token: 88, token usage: 0.29, #running-req: 122, #queue-req: 4594 [2025-06-14 07:24:29] Prefill batch. 
#new-seq: 2, #new-token: 60, #cached-token: 10, token usage: 0.32, #running-req: 142, #queue-req: 4592 [2025-06-14 07:24:29] Decode batch. #running-req: 144, #token: 7679, token usage: 0.37, cuda graph: False, gen throughput (token/s): 10480.31, #queue-req: 4592 [2025-06-14 07:24:29] Prefill batch. #new-seq: 3, #new-token: 92, #cached-token: 13, token usage: 0.39, #running-req: 142, #queue-req: 4589 [2025-06-14 07:24:29] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.44, #running-req: 143, #queue-req: 4588 [2025-06-14 07:24:30] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.46, #running-req: 143, #queue-req: 4587 [2025-06-14 07:24:30] Prefill batch. #new-seq: 2, #new-token: 60, #cached-token: 10, token usage: 0.47, #running-req: 141, #queue-req: 4585 [2025-06-14 07:24:30] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.49, #running-req: 142, #queue-req: 4584 [2025-06-14 07:24:30] Decode batch. #running-req: 143, #token: 12498, token usage: 0.61, cuda graph: False, gen throughput (token/s): 13208.90, #queue-req: 4584 [2025-06-14 07:24:30] Decode batch. #running-req: 142, #token: 18000, token usage: 0.88, cuda graph: False, gen throughput (token/s): 16120.25, #queue-req: 4584 [2025-06-14 07:24:30] KV cache pool is full. Retract requests. #retracted_reqs: 19, #new_token_ratio: 0.7441 -> 1.0000 [2025-06-14 07:24:30] Prefill batch. #new-seq: 7, #new-token: 217, #cached-token: 28, token usage: 0.88, #running-req: 122, #queue-req: 4596 [2025-06-14 07:24:30] Prefill batch. #new-seq: 5, #new-token: 150, #cached-token: 25, token usage: 0.86, #running-req: 124, #queue-req: 4591 [2025-06-14 07:24:30] Prefill batch. #new-seq: 105, #new-token: 3328, #cached-token: 347, token usage: 0.10, #running-req: 23, #queue-req: 4486 [2025-06-14 07:24:31] Decode batch. #running-req: 128, #token: 5017, token usage: 0.24, cuda graph: False, gen throughput (token/s): 11221.23, #queue-req: 4486 [2025-06-14 07:24:31] Prefill batch. #new-seq: 15, #new-token: 459, #cached-token: 66, token usage: 0.25, #running-req: 116, #queue-req: 4471 [2025-06-14 07:24:31] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.28, #running-req: 130, #queue-req: 4470 [2025-06-14 07:24:31] Prefill batch. #new-seq: 2, #new-token: 62, #cached-token: 8, token usage: 0.35, #running-req: 130, #queue-req: 4468 [2025-06-14 07:24:31] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.40, #running-req: 131, #queue-req: 4467 [2025-06-14 07:24:31] Prefill batch. #new-seq: 2, #new-token: 60, #cached-token: 10, token usage: 0.42, #running-req: 131, #queue-req: 4465 [2025-06-14 07:24:31] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.42, #running-req: 132, #queue-req: 4464 [2025-06-14 07:24:31] Decode batch. #running-req: 133, #token: 10651, token usage: 0.52, cuda graph: False, gen throughput (token/s): 11758.79, #queue-req: 4464 [2025-06-14 07:24:31] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.58, #running-req: 132, #queue-req: 4463 [2025-06-14 07:24:31] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.74, #running-req: 132, #queue-req: 4462 [2025-06-14 07:24:31] Decode batch. #running-req: 133, #token: 15835, token usage: 0.77, cuda graph: False, gen throughput (token/s): 14392.67, #queue-req: 4462 [2025-06-14 07:24:32] Prefill batch. 
#new-seq: 9, #new-token: 284, #cached-token: 31, token usage: 0.86, #running-req: 126, #queue-req: 4453 [2025-06-14 07:24:32] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.88, #running-req: 130, #queue-req: 4452 [2025-06-14 07:24:32] Prefill batch. #new-seq: 98, #new-token: 3042, #cached-token: 388, token usage: 0.17, #running-req: 32, #queue-req: 4354 [2025-06-14 07:24:32] Decode batch. #running-req: 130, #token: 6721, token usage: 0.33, cuda graph: False, gen throughput (token/s): 11788.77, #queue-req: 4354 [2025-06-14 07:24:32] Prefill batch. #new-seq: 25, #new-token: 771, #cached-token: 104, token usage: 0.28, #running-req: 129, #queue-req: 4329 [2025-06-14 07:24:32] Prefill batch. #new-seq: 5, #new-token: 150, #cached-token: 25, token usage: 0.32, #running-req: 139, #queue-req: 4324 [2025-06-14 07:24:32] Prefill batch. #new-seq: 2, #new-token: 60, #cached-token: 10, token usage: 0.40, #running-req: 142, #queue-req: 4322 [2025-06-14 07:24:32] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.44, #running-req: 143, #queue-req: 4321 [2025-06-14 07:24:32] Prefill batch. #new-seq: 3, #new-token: 92, #cached-token: 13, token usage: 0.44, #running-req: 142, #queue-req: 4318 [2025-06-14 07:24:32] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.45, #running-req: 143, #queue-req: 4317 [2025-06-14 07:24:32] Decode batch. #running-req: 144, #token: 10147, token usage: 0.50, cuda graph: False, gen throughput (token/s): 12905.27, #queue-req: 4317 [2025-06-14 07:24:33] Decode batch. #running-req: 141, #token: 15550, token usage: 0.76, cuda graph: False, gen throughput (token/s): 15947.05, #queue-req: 4317 [2025-06-14 07:24:33] Prefill batch. #new-seq: 2, #new-token: 60, #cached-token: 10, token usage: 0.92, #running-req: 131, #queue-req: 4315 [2025-06-14 07:24:33] Decode batch. #running-req: 132, #token: 19432, token usage: 0.95, cuda graph: False, gen throughput (token/s): 14908.48, #queue-req: 4315 [2025-06-14 07:24:33] Prefill batch. #new-seq: 93, #new-token: 2916, #cached-token: 339, token usage: 0.25, #running-req: 37, #queue-req: 4222 [2025-06-14 07:24:33] Prefill batch. #new-seq: 47, #new-token: 1455, #cached-token: 190, token usage: 0.24, #running-req: 105, #queue-req: 4175 [2025-06-14 07:24:33] Prefill batch. #new-seq: 11, #new-token: 336, #cached-token: 49, token usage: 0.31, #running-req: 148, #queue-req: 4164 [2025-06-14 07:24:33] Prefill batch. #new-seq: 3, #new-token: 90, #cached-token: 15, token usage: 0.41, #running-req: 157, #queue-req: 4161 [2025-06-14 07:24:33] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 5, token usage: 0.43, #running-req: 159, #queue-req: 4160 [2025-06-14 07:24:34] Decode batch. #running-req: 159, #token: 9695, token usage: 0.47, cuda graph: False, gen throughput (token/s): 11025.26, #queue-req: 4160 [2025-06-14 07:24:34] Prefill batch. #new-seq: 2, #new-token: 64, #cached-token: 6, token usage: 0.47, #running-req: 157, #queue-req: 4158 [2025-06-14 07:24:34] Decode batch. #running-req: 156, #token: 15419, token usage: 0.75, cuda graph: False, gen throughput (token/s): 16050.41, #queue-req: 4158 [2025-06-14 07:24:34] KV cache pool is full. Retract requests. #retracted_reqs: 22, #new_token_ratio: 0.6248 -> 0.9583 [2025-06-14 07:24:34] Decode batch. #running-req: 134, #token: 18867, token usage: 0.92, cuda graph: False, gen throughput (token/s): 16357.89, #queue-req: 4180 [2025-06-14 07:24:34] Prefill batch. 
#new-seq: 89, #new-token: 2848, #cached-token: 267, token usage: 0.28, #running-req: 39, #queue-req: 4091 [2025-06-14 07:24:35] Prefill batch. #new-seq: 46, #new-token: 1491, #cached-token: 156, token usage: 0.18, #running-req: 88, #queue-req: 4045 [2025-06-14 07:24:35] Prefill batch. #new-seq: 7, #new-token: 214, #cached-token: 38, token usage: 0.25, #running-req: 130, #queue-req: 4038 [2025-06-14 07:24:35] Decode batch. #running-req: 137, #token: 6913, token usage: 0.34, cuda graph: False, gen throughput (token/s): 11255.95, #queue-req: 4038 [2025-06-14 07:24:35] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 6, token usage: 0.40, #running-req: 136, #queue-req: 4037 [2025-06-14 07:24:35] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 6, token usage: 0.42, #running-req: 136, #queue-req: 4036 [2025-06-14 07:24:35] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 6, token usage: 0.42, #running-req: 136, #queue-req: 4035 [2025-06-14 07:24:35] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 6, token usage: 0.43, #running-req: 136, #queue-req: 4034 [2025-06-14 07:24:35] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 6, token usage: 0.52, #running-req: 136, #queue-req: 4033 [2025-06-14 07:24:35] Decode batch. #running-req: 137, #token: 12105, token usage: 0.59, cuda graph: False, gen throughput (token/s): 13749.11, #queue-req: 4033 [2025-06-14 07:24:35] Prefill batch. #new-seq: 1, #new-token: 30, #cached-token: 6, token usage: 0.59, #running-req: 136, #queue-req: 4032 [2025-06-14 07:24:35] Prefill batch. #new-seq: 1, #new-token: 31, #cached-token: 5, token usage: 0.60, #running-req: 136, #queue-req: 4031 [2025-06-14 07:24:35] INFO: 127.0.0.1:57706 - "POST /v1/batches/batch_f9e32291-40f8-47c7-8e4d-e21b0051c2f5/cancel HTTP/1.1" 200 OK

Cancellation initiated. Status: cancelling

[2025-06-14 07:24:38] INFO: 127.0.0.1:57706 - "GET /v1/batches/batch_f9e32291-40f8-47c7-8e4d-e21b0051c2f5 HTTP/1.1" 200 OK

Current status: cancelled

Batch job successfully cancelled

[2025-06-14 07:24:38] INFO: 127.0.0.1:57706 - "DELETE /v1/files/backend_input_file-188cb45d-4292-4cdc-988d-c9ba5a05db6d HTTP/1.1" 200 OK

Successfully cleaned up input file

Successfully deleted local batch_requests.jsonl file

terminate_process(server_process)

[2025-06-14 07:24:38] Child process unexpectedly failed with exitcode=9. pid=1166241