
Contributing to vLLM

Thank you for your interest in contributing to vLLM! Our community is open to everyone and welcomes all kinds of contributions, no matter how small or large. There are several ways you can contribute to the project.

We also believe in the power of community support; thus, answering queries, offering PR reviews, and assisting others are also highly regarded and beneficial contributions.

Finally, one of the most impactful ways to support us is by raising awareness about vLLM. Talk about it in your blog posts and highlight how it's driving your incredible projects. Express your support on social media if you're using vLLM, or simply offer your appreciation by starring our repository!

Job Board

Unsure where to start? Check out the project's issue tracker for tasks to work on.

License

See LICENSE.

Developing

Depending on the kind of development you'd like to do (e.g. Python, CUDA), you can choose to build vLLM with or without compilation. Check out the building from source documentation for details.

Building the docs with MkDocs

Introduction to MkDocs

MkDocs is a fast, simple and downright gorgeous static site generator that's geared towards building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.
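The single YAML file mentioned above lives at the project root. As an illustrative sketch only (the values below are made up and are not vLLM's actual configuration), it looks like:

```yaml
site_name: Example Project        # shown in the site header and page titles
site_url: https://example.com/docs/
nav:                              # explicit navigation tree; one entry per page
  - Home: index.md
  - Contributing: contributing.md
theme:
  name: material                  # a popular third-party MkDocs theme
```

Pages referenced under `nav` are Markdown files in the `docs/` directory by default.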

Install MkDocs and Plugins

Install MkDocs along with the plugins used in the vLLM documentation, as well as required dependencies:

```shell
pip install -r requirements/docs.txt
```

Note

Ensure that your Python version is compatible with the plugins (e.g., mkdocs-awesome-nav requires Python 3.10+).

Verify Installation

Confirm that MkDocs is correctly installed:

```shell
mkdocs --version
```

Example output:

```text
mkdocs, version 1.6.1 from /opt/miniconda3/envs/mkdoc/lib/python3.10/site-packages/mkdocs (Python 3.10)
```

Clone the vLLM repository

```shell
git clone https://github.com/vllm-project/vllm.git
cd vllm
```

Start the Development Server

MkDocs comes with a built-in dev-server that lets you preview your documentation as you work on it. Make sure you're in the same directory as the mkdocs.yml configuration file, and then start the server by running the mkdocs serve command:

```shell
mkdocs serve
```

Example output:

```text
INFO    -  Documentation built in 106.83 seconds
INFO    -  [22:02:02] Watching paths for changes: 'docs', 'mkdocs.yaml'
INFO    -  [22:02:02] Serving on http://127.0.0.1:8000/
```

View in Your Browser

Open http://127.0.0.1:8000/ in your browser to see a live preview.

Learn More

For additional features and advanced configurations, refer to the official MkDocs Documentation.

Testing

```shell
pip install -r requirements/dev.txt

# Linting, formatting and static type checking
pre-commit install --hook-type pre-commit --hook-type commit-msg

# You can manually run pre-commit with
pre-commit run --all-files

# To manually run something from CI that does not run
# locally by default, you can run:
pre-commit run mypy-3.9 --hook-stage manual --all-files

# Unit tests
pytest tests/

# Run tests for a single test file with detailed output
pytest -s -v tests/test_logger.py
```

Tip

Since the docker/Dockerfile ships with Python 3.12, all tests in CI (except mypy) are run with Python 3.12.

Therefore, we recommend developing with Python 3.12 to minimise the chance of your local environment clashing with our CI environment.

Note

Currently, the repository is not fully checked by mypy.

Note

Currently, not all unit tests pass when run on CPU platforms. If you don't have access to a GPU platform to run unit tests locally, rely on the continuous integration system to run the tests for now.

Issues

If you encounter a bug or have a feature request, please search existing issues first to see if it has already been reported. If not, please file a new issue, providing as much relevant information as possible.

Warning

If you discover a security vulnerability, please follow the instructions here.

Pull Requests & Code Reviews

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.
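As a quick sketch in a throwaway repository (the repository name, author name, and email below are made up for illustration), the `-s` flag appends the required trailer:

```shell
# Create a demo repository and make an empty signed-off commit
git init -q dco-demo && cd dco-demo
git -c user.name="Dev Example" -c user.email="dev@example.com" \
    commit -s --allow-empty -m "[Doc] Demonstrate the DCO sign-off"

# Print the full commit message; it ends with a trailer like:
#   Signed-off-by: Dev Example <dev@example.com>
git log -1 --format=%B
```

If you forget the flag, `git commit --amend -s` adds the trailer to the most recent commit.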

PR Title and Classification

Only specific types of PRs will be reviewed. Please prefix the PR title appropriately to indicate the type of change.

Note

If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the project's code quality standards.

Adding or Changing Kernels

Each custom kernel needs a schema and one or more implementations to be registered with PyTorch.
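As an illustrative sketch only (not vLLM's actual registration code), the `torch.library` API can declare a schema and attach a backend implementation; the `vllm_demo` namespace and `scaled_add` op below are made up for this example:

```python
import torch

# Declare a new op schema in a demo namespace (hypothetical; vLLM
# registers its kernels under its own namespace).
lib = torch.library.Library("vllm_demo", "DEF")
lib.define("scaled_add(Tensor a, Tensor b, float alpha) -> Tensor")

# A plain-Python reference implementation, registered for the CPU backend.
def scaled_add_cpu(a: torch.Tensor, b: torch.Tensor, alpha: float) -> torch.Tensor:
    return a + alpha * b

lib.impl("scaled_add", scaled_add_cpu, "CPU")

# The op is now callable through the dispatcher like any built-in op.
out = torch.ops.vllm_demo.scaled_add(torch.ones(2), torch.ones(2), 2.0)
```

A real custom kernel would typically register a CUDA implementation under the same schema, plus a meta/fake implementation so the op can be traced by `torch.compile`.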

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (more than 500 lines of code, excluding kernel, data, config, and test changes), we expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag the PR with rfc-required and may not review it.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. All of your contributions help make vLLM a great tool and community for everyone!