NVIDIA Dynamo
| Roadmap | Support Matrix | Guides | Architecture and Features | APIs | SDK |
NVIDIA Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and captures LLM-specific capabilities such as:
- Disaggregated prefill & decode inference – Maximizes GPU throughput and lets you trade off throughput against latency
- Dynamic GPU scheduling – Optimizes performance based on fluctuating demand
- LLM-aware request routing – Eliminates unnecessary KV cache re-computation
- Accelerated data transfer – Reduces inference response time using NIXL
- KV cache offloading – Leverages multiple memory hierarchies for higher system throughput
Built in Rust for performance and in Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first (Open Source Software) development approach.
Installation
The following examples require a few system-level packages. We recommend Ubuntu 24.04 with an x86_64 CPU. See support_matrix.md.
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev python3-pip python3-venv libucx0
python3 -m venv venv
source venv/bin/activate
pip install ai-dynamo[all]
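To confirm the wheel installed correctly, you can run a quick check (a minimal sketch; it assumes the package exposes the dynamo CLI entry point used throughout the examples below, with the standard --help flag):
# Verify the install and the dynamo CLI entry point (standard --help flag assumed)
pip show ai-dynamo
dynamo --help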
Note
To ensure compatibility, please refer to the examples in the release branch or tag that matches the version you installed.
Building the Dynamo Base Image
Although it is not needed for local development, deploying your Dynamo pipelines to Kubernetes requires building and pushing a Dynamo base image to a container registry. You can use any container registry of your choice, such as:
- Docker Hub (docker.io)
- NVIDIA NGC Container Registry (nvcr.io)
- Any private registry
Here's how to build it:
./container/build.sh
docker tag dynamo:latest-vllm <your-registry>/dynamo-base:latest-vllm
docker login <your-registry>
docker push <your-registry>/dynamo-base:latest-vllm
Notes about builds for specific frameworks:
- For specific details on the --framework vllm build, see here.
- For specific details on the --framework tensorrtllm build, see here.
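For example, a framework-specific build is a single invocation of the build script with the flag referenced in the notes above (a sketch; the resulting image tag is assumed to follow the latest-<framework> pattern shown earlier):
# Build a TensorRT-LLM-flavored base image; tag assumed to mirror the latest-vllm pattern
./container/build.sh --framework tensorrtllm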
After building, you can use this image by setting the DYNAMO_IMAGE environment variable to point to your built image:
export DYNAMO_IMAGE=<your-registry>/dynamo-base:latest-vllm
Note
We are working on leaner base images that can be built using the targets in the top-level Earthfile.
Running and Interacting with an LLM Locally
To run a model and interact with it locally, call dynamo run with a Hugging Face model. dynamo run supports several backends, including mistralrs, sglang, vllm, and tensorrtllm.
Example Command
dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
? User › Hello, how are you?
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
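The out= selector picks the backend. For example, to serve the same model through sglang instead of vllm (a sketch, assuming sglang support is installed, e.g. via the [all] extra):
# Same model, served through the sglang backend instead of vllm
dynamo run out=sglang deepseek-ai/DeepSeek-R1-Distill-Llama-8B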
LLM Serving
Dynamo provides a simple way to spin up a local set of inference components including:
- OpenAI-Compatible Frontend – High-performance, OpenAI-compatible HTTP API server written in Rust
- Basic and KV-Aware Router – Routes and load-balances traffic across a set of workers
- Workers – A set of pre-configured LLM serving engines
To run a minimal configuration, you can use a pre-configured example.
Start Dynamo Distributed Runtime Services
First start the Dynamo Distributed Runtime services:
docker compose -f deploy/docker-compose.yml up -d
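Before moving on, you can confirm the runtime services came up cleanly using standard docker compose commands:
# Check that the distributed runtime services are up and inspect recent logs
docker compose -f deploy/docker-compose.yml ps
docker compose -f deploy/docker-compose.yml logs --tail=20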
Start Dynamo LLM Serving Components
Next, serve a minimal configuration with an HTTP server, a basic round-robin router, and a single worker:
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
Send a Request
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
      { "role": "user", "content": "Hello, how are you?" }
    ],
    "stream": false,
    "max_tokens": 300
  }' | jq
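The same endpoint also accepts streaming requests. Here is a sketch that flips "stream" to true (assuming the frontend follows the standard OpenAI streaming behavior of returning incremental chunks as server-sent events):
# Streaming variant of the request above; -N disables curl's output buffering
curl -N localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{ "role": "user", "content": "Hello, how are you?" }],
    "stream": true,
    "max_tokens": 300
  }'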
Local Development
If you use VS Code or Cursor, we provide a .devcontainer folder built on Microsoft's Dev Containers extension. See the README for more details.
Otherwise, to develop locally, we recommend working inside the container:
./container/build.sh
./container/run.sh -it --mount-workspace
cargo build --release
mkdir -p /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/http /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/llmctl /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/dynamo-run /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
uv pip install -e .
export PYTHONPATH=$PYTHONPATH:/workspace/deploy/dynamo/sdk/src:/workspace/components/planner/src
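As a quick sanity check that the editable install and PYTHONPATH export took effect (a hypothetical verification step; dynamo.sdk is the package path the binaries were copied into above):
# The dynamo.sdk package should now be importable from the workspace
python -c "import dynamo.sdk; print('dynamo.sdk OK')"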
Conda Environment
Alternatively, you can use a conda environment:
conda activate
pip install nixl # Or install https://github.com/ai-dynamo/nixl from source
cargo build --release
To install ai-dynamo-runtime from source:
cd lib/bindings/python
pip install .
Then install ai-dynamo from the repository root:
cd ../../../
pip install .[all]
To test:
docker compose -f deploy/docker-compose.yml up -d
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
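Once the graph is up, you can re-use the request from Send a Request as a smoke test (assuming the frontend listens on the default port 8000):
# Minimal smoke test against the freshly served graph
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "messages": [{"role": "user", "content": "Hello"}], "stream": false, "max_tokens": 32}' | jq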