vllm-project/vllm-openvino

Installation

vLLM powered by OpenVINO supports all LLM models from the vLLM supported models list and can perform optimal model serving on all x86-64 CPUs with at least AVX2 support, as well as on both integrated and discrete Intel® GPUs (see the list of supported GPUs).
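As a quick check that a host CPU meets the AVX2 requirement, you can inspect the CPU flags. This is a generic Linux check and is not specific to this project:

# Prints "avx2" if the CPU supports AVX2
grep -o -m1 avx2 /proc/cpuinfo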

Note

There are no pre-built wheels or images for this device, so you must build vLLM from source.

Requirements

Set up using Python

Pre-built wheels

Currently, there are no pre-built OpenVINO wheels.

Build wheel from source

First, install Python and ensure you have the latest pip. For example, on Ubuntu 22.04, you can run:

sudo apt-get update -y
sudo apt-get install python3-pip
pip install --upgrade pip

Second, clone vllm-openvino and install prerequisites for the vLLM OpenVINO backend installation:

git clone https://github.com/vllm-project/vllm-openvino.git
cd vllm-openvino

Finally, install vLLM with OpenVINO backend:

VLLM_TARGET_DEVICE="empty" PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu" python -m pip install -v .

Note

On x86, Triton is installed as a dependency of vLLM, but it does not work correctly with the OpenVINO backend, so it needs to be uninstalled via python3 -m pip uninstall -y triton.
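For example, after installing the wheel you can remove Triton and confirm that vLLM still imports; the version print is only an illustrative sanity check:

# Remove Triton, which does not work with the OpenVINO backend
python3 -m pip uninstall -y triton
# Optional: verify that the vLLM installation still imports
python3 -c "import vllm; print(vllm.__version__)"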

Set up using Docker

Pre-built images

Currently, there are no pre-built OpenVINO images.

Build image from source

docker build . -t vllm-openvino-env
docker run -it --rm vllm-openvino-env
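Once built, the image can also be used to start an OpenAI-compatible server. The port mapping, entrypoint, and model below are illustrative assumptions rather than settings documented by this repository:

# Expose the default vLLM API port and serve a model (illustrative)
docker run -it --rm -p 8000:8000 vllm-openvino-env \
    python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-chat-hf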

Extra information

Supported features

OpenVINO vLLM backend supports the following advanced vLLM features:

- Prefix caching (--enable-prefix-caching)
- Chunked prefill (--enable-chunked-prefill)

Note

Simultaneous usage of both --enable-prefix-caching and --enable-chunked-prefill is not yet implemented.

Note

--enable-chunked-prefill is broken on openvino==2025.2. To use this feature, update OpenVINO to a nightly 2025.3 build or use openvino==2025.1.
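Until the two options can be combined, enable only one of them per server. The command below is a generic vLLM server invocation with an illustrative model choice:

# Prefix caching only; do not add --enable-chunked-prefill to the same command yet
python3 -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-chat-hf --enable-prefix-caching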

Performance tips

vLLM OpenVINO backend environment variables

CPU performance tips

CPU uses the following environment variables to control behavior (see the example configuration below):

- VLLM_OPENVINO_KVCACHE_SPACE to specify the amount of memory (in GB) reserved for the KV cache (e.g. VLLM_OPENVINO_KVCACHE_SPACE=100)
- VLLM_OPENVINO_KV_CACHE_PRECISION to control KV cache precision (e.g. u8)
- VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS to enable quantized weights (e.g. ON)

To improve TPOT / TTFT latency, you can use vLLM's chunked prefill feature (--enable-chunked-prefill). Based on experiments, the recommended batch size is 256 (--max-num-batched-tokens 256).

OpenVINO best known configuration for CPU is:

$ VLLM_OPENVINO_KVCACHE_SPACE=100 VLLM_OPENVINO_KV_CACHE_PRECISION=u8 VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
    python3 vllm/benchmarks/benchmark_throughput.py --model meta-llama/Llama-2-7b-chat-hf --dataset vllm/benchmarks/ShareGPT_V3_unfiltered_cleaned_split.json --enable-chunked-prefill --max-num-batched-tokens 256

GPU performance tips

The GPU device automatically detects the available GPU memory and, by default, tries to reserve as much memory as possible for the KV cache (taking the gpu_memory_utilization option into account). However, this behavior can be overridden by explicitly specifying the desired amount of KV cache memory via the VLLM_OPENVINO_KVCACHE_SPACE environment variable (e.g. VLLM_OPENVINO_KVCACHE_SPACE=8 means 8 GB of space for the KV cache).
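For example, to cap the KV cache at 8 GB for a GPU run, the variable can be combined with the device selection used in the configuration below; the benchmark script and model are taken from the examples in this document:

# Limit the GPU KV cache to 8 GB (illustrative combination of variables from this document)
VLLM_OPENVINO_DEVICE=GPU VLLM_OPENVINO_KVCACHE_SPACE=8 \
    python3 vllm/benchmarks/benchmark_throughput.py --model meta-llama/Llama-2-7b-chat-hf \
    --dataset vllm/benchmarks/ShareGPT_V3_unfiltered_cleaned_split.json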

Additionally, the GPU device supports VLLM_OPENVINO_KV_CACHE_PRECISION (e.g. i8 or fp16) to control KV cache precision (the default value is device-specific).

Currently, the best GPU performance is achieved with the default vLLM execution parameters for models with quantized weights (8-bit and 4-bit integer data types are supported) and preemption-mode=swap.

OpenVINO best known configuration for GPU is:

$ VLLM_OPENVINO_DEVICE=GPU VLLM_OPENVINO_KV_CACHE_PRECISION=i8 VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
    python3 vllm/benchmarks/benchmark_throughput.py --model meta-llama/Llama-2-7b-chat-hf --dataset vllm/benchmarks/ShareGPT_V3_unfiltered_cleaned_split.json
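Since the tips above recommend preemption-mode=swap for quantized models, the preemption mode can also be set explicitly when serving; the flag name follows the standard vLLM CLI, and the model choice is illustrative:

# Illustrative GPU serving command with swap preemption (flag from the standard vLLM CLI)
VLLM_OPENVINO_DEVICE=GPU VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
    python3 -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-chat-hf --preemption-mode swap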

Limitations