Contributing

Welcome! We're happy to have you here. Thank you in advance for your contribution to Ruff.

The Basics

Ruff welcomes contributions in the form of pull requests.

For small changes (e.g., bug fixes), feel free to submit a PR.

For larger changes (e.g., new lint rules, new functionality, new configuration options), consider creating an issue outlining your proposed change. You can also join us on Discord to discuss your idea with the community. We've labeled beginner-friendly tasks in the issue tracker, along with bugs and improvements that are ready for contributions.

If you have suggestions on how we might improve the contributing documentation, let us know!

Prerequisites

Ruff is written in Rust. You'll need to install the Rust toolchain for development.

You'll also need Insta to update snapshot tests:

```shell
cargo install cargo-insta
```

You'll need uv (or pipx and pip) to run Python utility commands.

You can optionally install pre-commit hooks to automatically run the validation checks when making a commit:

```shell
uv tool install pre-commit
pre-commit install
```

We recommend nextest to run Ruff's test suite (via cargo nextest run), though it's not strictly necessary:

```shell
cargo install cargo-nextest --locked
```

Throughout this guide, any usages of cargo test can be replaced with cargo nextest run, if you choose to install nextest.

Development

After cloning the repository, run Ruff locally from the repository root with:

```shell
cargo run -p ruff -- check /path/to/file.py --no-cache
```

Prior to opening a pull request, ensure that your code has been auto-formatted, and that it passes both the lint and test validation checks:

```shell
cargo clippy --workspace --all-targets --all-features -- -D warnings  # Rust linting
RUFF_UPDATE_SCHEMA=1 cargo test  # Rust testing and updating ruff.schema.json
uvx pre-commit run --all-files --show-diff-on-failure  # Rust and Python formatting, Markdown and Python linting, etc.
```

These checks will run on GitHub Actions when you open your pull request, but running them locally will save you time and expedite the merge process.

If you're using VS Code, you can also install the recommended rust-analyzer extension to get these checks while editing.

Note that many code changes also require updating the snapshot tests, which is done interactively by running cargo insta review after cargo test.

If your pull request relates to a specific lint rule, include the category and rule code in the title, as in the following example: [pycodestyle] Implement redundant-backslash (E502).

Your pull request will be reviewed by a maintainer, which may involve a few rounds of iteration prior to merging.

Project Structure

Ruff is structured as a monorepo with a flat crate structure, such that all crates are contained in a flat crates directory.

The vast majority of the code, including all lint rules, lives in the ruff_linter crate (located at crates/ruff_linter). As a contributor, that's the crate that'll be most relevant to you.

At the time of writing, the repository includes crates such as ruff (the command-line interface), ruff_linter, ruff_workspace, ruff_benchmark, and ruff_dev; see the crates directory for the full list.

Example: Adding a new lint rule

At a high level, the steps involved in adding a new lint rule are as follows:

  1. Determine a name for the new rule as per our rule naming convention (e.g., AssertFalse, as in, "allow assert False").
  2. Create a file for your rule (e.g., crates/ruff_linter/src/rules/flake8_bugbear/rules/assert_false.rs).
  3. In that file, define a violation struct (e.g., pub struct AssertFalse). You can grep for #[derive(ViolationMetadata)] to see examples.
  4. In that file, define a function that adds the violation to the diagnostic list as appropriate (e.g., pub(crate) fn assert_false) based on whatever inputs are required for the rule (e.g., an ast::StmtAssert node). A rough sketch of steps 3 through 5 follows this list.
  5. Define the logic for invoking the diagnostic in crates/ruff_linter/src/checkers/ast/analyze (for AST-based rules), crates/ruff_linter/src/checkers/tokens.rs (for token-based rules), crates/ruff_linter/src/checkers/physical_lines.rs (for text-based rules), crates/ruff_linter/src/checkers/filesystem.rs (for filesystem-based rules), etc. For AST-based rules, you'll likely want to modify analyze/statement.rs (if your rule is based on analyzing statements, like imports) or analyze/expression.rs (if your rule is based on analyzing expressions, like function calls).
  6. Map the violation struct to a rule code in crates/ruff_linter/src/codes.rs (e.g., B011). New rules should be added in RuleGroup::Preview.
  7. Add proper testing for your rule.
  8. Update the generated files (documentation and generated code).
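
To make steps 3 through 5 concrete, here's a rough sketch for a hypothetical AssertFalse rule. It's illustrative only: the exact imports, traits, and Checker APIs evolve over time, so pattern-match against an existing rule rather than copying this verbatim.

```rust
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, ViolationMetadata};
use ruff_python_ast as ast;
use ruff_text_size::Ranged;

use crate::checkers::ast::Checker;

/// Step 3: the violation struct, which defines the user-facing message.
#[derive(ViolationMetadata)]
pub(crate) struct AssertFalse;

impl Violation for AssertFalse {
    #[derive_message_formats]
    fn message(&self) -> String {
        format!("Do not `assert False`, raise `AssertionError` instead")
    }
}

/// Step 4: the analyzer function, invoked with whatever inputs the rule needs.
pub(crate) fn assert_false(checker: &mut Checker, stmt: &ast::StmtAssert) {
    // A real rule would first inspect the node (e.g., check that the test
    // expression is the literal `False`); this sketch flags unconditionally.
    checker.diagnostics.push(Diagnostic::new(AssertFalse, stmt.range()));
}

// Step 5: the wiring, roughly as it would appear in `analyze/statement.rs`:
//
//     Stmt::Assert(assert_stmt) => {
//         if checker.enabled(Rule::AssertFalse) {
//             flake8_bugbear::rules::assert_false(checker, assert_stmt);
//         }
//     }
```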

To trigger the violation, you'll likely want to augment the logic in crates/ruff_linter/src/checkers/ast.rs to call your new function at the appropriate time and with the appropriate inputs. The Checker defined therein is a Python AST visitor, which iterates over the AST, building up a semantic model, and calling out to lint rule analyzer functions as it goes.

If you need to inspect the AST, you can run cargo dev print-ast with a Python file. Grep for the Diagnostic::new invocations to understand how other, similar rules are implemented.

Once you're satisfied with your code, add tests for your rule (see: rule testing), and regenerate the documentation and associated assets (like our JSON Schema) with cargo dev generate-all.

Finally, submit a pull request, and include the category, rule name, and rule code in the title, as in:

[pycodestyle] Implement redundant-backslash (E502)

Rule naming convention

Like Clippy, Ruff's rule names should make grammatical and logical sense when read as "allow {rule}" or "allow {rule} items", as in the context of suppression comments.

For example, AssertFalse fits this convention: it flags assert False statements, and so a suppression comment would be framed as "allow assert False".

As such, rule names should highlight the pattern that is being linted against, rather than the preferred alternative.

When re-implementing rules from other linters, we prioritize adhering to this convention over preserving the original rule name.

Rule testing: fixtures and snapshots

To test rules, Ruff uses snapshots of Ruff's output for a given file (fixture). Generally, there will be one file per rule (e.g., E402.py), and each file will contain all necessary examples of both violations and non-violations. cargo insta review will generate a snapshot file containing Ruff's output for each fixture, which you can then commit alongside your changes.

Once you've completed the code for the rule itself, you can define tests with the following steps:

  1. Add a Python file to crates/ruff_linter/resources/test/fixtures/[linter] that contains the code you want to test. The file name should match the rule name (e.g., E402.py), and it should include examples of both violations and non-violations.
  2. Run Ruff locally against your file and verify the output is as expected. Once you're satisfied with the output (you see the violations you expect, and no others), proceed to the next step. For example, if you're adding a new rule named E402, you would run:
    ```shell
    cargo run -p ruff -- check crates/ruff_linter/resources/test/fixtures/pycodestyle/E402.py --no-cache --preview --select E402
    ```
    Note: Only a subset of rules are enabled by default. When testing a new rule, ensure that you activate it by adding --select ${rule_code} to the command.
  3. Add the test to the relevant crates/ruff_linter/src/rules/[linter]/mod.rs file. If you're contributing a rule to a pre-existing set, you should be able to find a similar example to pattern-match against. If you're adding a new linter, you'll need to create a new mod.rs file (see, e.g., crates/ruff_linter/src/rules/flake8_bugbear/mod.rs). A sketch of a typical test follows this list.
  4. Run cargo test. Your test will fail, but you'll be prompted to follow up with cargo insta review. Run cargo insta review, review and accept the generated snapshot, then commit the snapshot file alongside the rest of your changes.
  5. Run cargo test again to ensure that your test passes.
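
For reference, the test registration in mod.rs typically looks something like the following sketch (helper names like test_path and assert_messages! follow the pattern used throughout the crate, but check a neighboring mod.rs for the current signatures):

```rust
use std::path::Path;

use anyhow::Result;
use test_case::test_case;

use crate::registry::Rule;
use crate::test::test_path;
use crate::{assert_messages, settings};

// One `#[test_case]` attribute per fixture file; the test runs the linter
// over the fixture with only the given rule enabled, then snapshots the
// resulting diagnostics for `cargo insta review`.
#[test_case(Rule::AssertFalse, Path::new("B011.py"))]
fn rules(rule_code: Rule, path: &Path) -> Result<()> {
    let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
    let diagnostics = test_path(
        Path::new("flake8_bugbear").join(path).as_path(),
        &settings::LinterSettings::for_rule(rule_code),
    )?;
    assert_messages!(snapshot, diagnostics);
    Ok(())
}
```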

Example: Adding a new configuration option

Ruff's user-facing settings live in a few different places.

First, the command-line options are defined via the Args struct in crates/ruff/src/args.rs.

Second, the pyproject.toml options are defined in crates/ruff_workspace/src/options.rs (via the Options struct), crates/ruff_workspace/src/configuration.rs (via the Configuration struct), and crates/ruff_workspace/src/settings.rs (via the Settings struct), which then includes the LinterSettings struct as a field.

These represent, respectively: the schema used to parse the pyproject.toml file; an internal, intermediate representation; and the final, internal representation used to power Ruff.

To add a new configuration option, you'll likely want to modify these latter few files (along with args.rs, if appropriate). If you want to pattern-match against an existing example, grep for dummy_variable_rgx, which defines a regular expression to match against acceptable unused variables (e.g., _).
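
For illustration, the Options entry for such a setting looks roughly like this (a hedged sketch: the #[option] attribute and its fields come from Ruff's macros, and the default shown here is abbreviated, so copy from the real dummy_variable_rgx entry rather than from this snippet):

```rust
// Sketch of a field on the `Options` struct in crates/ruff_workspace/src/options.rs.
// The `#[option]` attribute feeds the documentation and JSON Schema generators.
#[option(
    default = r#""^(_+)$""#,  // abbreviated; not the real default
    value_type = "str",
    example = r#"dummy-variable-rgx = "^_$""#
)]
pub dummy_variable_rgx: Option<String>,
```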

Note that plugin-specific configuration options are defined in their own modules (e.g., Settings in crates/ruff_linter/src/flake8_unused_arguments/settings.rs coupled with Flake8UnusedArgumentsOptions in crates/ruff_workspace/src/options.rs).

Finally, regenerate the documentation and generated code with cargo dev generate-all.

MkDocs

To preview any changes to the documentation locally:

  1. Install the Rust toolchain.
  2. Generate the MkDocs site with:
    ```shell
    uv run --no-project --isolated --with-requirements docs/requirements.txt scripts/generate_mkdocs.py
    ```
  3. Run the development server with:
    ```shell
    # For contributors.
    uvx --with-requirements docs/requirements.txt -- mkdocs serve -f mkdocs.public.yml

    # For members of the Astral org, which has access to MkDocs Insiders via sponsorship.
    uvx --with-requirements docs/requirements-insiders.txt -- mkdocs serve -f mkdocs.insiders.yml
    ```

The documentation should then be available locally at http://127.0.0.1:8000/ruff/.

Release Process

As of now, Ruff has an ad hoc release process: releases are cut with high frequency via GitHub Actions, which automatically generates the appropriate wheels across architectures and publishes them to PyPI.

Ruff follows the semver versioning standard. However, as pre-1.0 software, even patch releases may contain non-backwards-compatible changes.

Creating a new release

  1. Install uv: curl -LsSf https://astral.sh/uv/install.sh | sh
  2. Run ./scripts/release.sh; this command will:
    • Generate a temporary virtual environment with rooster
    • Generate a changelog entry in CHANGELOG.md
    • Update versions in pyproject.toml and Cargo.toml
    • Update references to versions in the README.md and documentation
    • Display contributors for the release
  3. The changelog should then be editorialized for consistency
    • Often, labels will be missing from pull requests; they will need to be manually organized into the proper section
    • Changes should be edited to be user-facing descriptions, avoiding internal details
  4. Highlight any breaking changes in BREAKING_CHANGES.md
  5. Run cargo check. This should update the lock file with new versions.
  6. Create a pull request with the changelog and version updates
  7. Merge the PR
  8. Run the release workflow with:
    • The new version number (without starting v)
  9. The release workflow will do the following:
    1. Build all the assets. If this fails (even though we tested in step 4), we haven't tagged or uploaded anything, so you can restart after pushing a fix. If you just need to rerun the build, make sure you're re-running all the failed jobs and not just a single failed job.
    2. Upload to PyPI.
    3. Create and push the Git tag (as extracted from pyproject.toml). We create the Git tag only after building the wheels and uploading to PyPI, since we can't delete or modify the tag (#4468).
    4. Attach artifacts to draft GitHub release
    5. Trigger downstream repositories. This can fail non-catastrophically, as we can run any downstream jobs manually if needed.
  10. Verify the GitHub release:
    • The Changelog should match the content of CHANGELOG.md
    • Append the contributors from the scripts/release.sh script
  11. If needed, update the schemastore.
    • One can determine if an update is needed when git diff old-version-tag new-version-tag -- ruff.schema.json returns a non-empty diff.
    • Once run successfully, you should follow the link in the output to create a PR.
  12. If needed, update the ruff-lsp and ruff-vscode repositories and follow the release instructions in those repositories. ruff-lsp should always be updated before ruff-vscode.
    This step is generally not required for a patch release, but should always be done for a minor release.

Ecosystem CI

GitHub Actions will run your changes against a number of real-world projects from GitHub and report on any linter or formatter differences. You can also run those checks locally via:

```shell
uvx --from ./python/ruff-ecosystem ruff-ecosystem check ruff "./target/debug/ruff"
uvx --from ./python/ruff-ecosystem ruff-ecosystem format ruff "./target/debug/ruff"
```

See the ruff-ecosystem package for more details.

Benchmarking and Profiling

We have several ways of benchmarking and profiling Ruff, described in the sections below.

Note: When running benchmarks, ensure that your CPU is otherwise idle (e.g., close any background applications, like web browsers). You may also want to switch your CPU to a "performance" mode, if it exists, especially when benchmarking short-lived processes.

CPython Benchmark

First, clone CPython. It's a large and diverse Python codebase, which makes it a good target for benchmarking.

```shell
git clone --branch 3.10 https://github.com/python/cpython.git crates/ruff_linter/resources/test/cpython
```

Install hyperfine, e.g., with cargo install hyperfine.

To benchmark the release build:

```shell
cargo build --release && hyperfine --warmup 10 \
  "./target/release/ruff check ./crates/ruff_linter/resources/test/cpython/ --no-cache -e" \
  "./target/release/ruff check ./crates/ruff_linter/resources/test/cpython/ -e"

Benchmark 1: ./target/release/ruff ./crates/ruff_linter/resources/test/cpython/ --no-cache
  Time (mean ± σ):     293.8 ms ±   3.2 ms    [User: 2384.6 ms, System: 90.3 ms]
  Range (min … max):   289.9 ms … 301.6 ms    10 runs

Benchmark 2: ./target/release/ruff ./crates/ruff_linter/resources/test/cpython/
  Time (mean ± σ):      48.0 ms ±   3.1 ms    [User: 65.2 ms, System: 124.7 ms]
  Range (min … max):    45.0 ms …  66.7 ms    62 runs

Summary
  './target/release/ruff ./crates/ruff_linter/resources/test/cpython/' ran
    6.12 ± 0.41 times faster than './target/release/ruff ./crates/ruff_linter/resources/test/cpython/ --no-cache'
```

To benchmark against the ecosystem's existing tools:

```shell
hyperfine --ignore-failure --warmup 5 \
  "./target/release/ruff check ./crates/ruff_linter/resources/test/cpython/ --no-cache" \
  "pyflakes crates/ruff_linter/resources/test/cpython" \
  "autoflake --recursive --expand-star-imports --remove-all-unused-imports --remove-unused-variables --remove-duplicate-keys resources/test/cpython" \
  "pycodestyle crates/ruff_linter/resources/test/cpython" \
  "flake8 crates/ruff_linter/resources/test/cpython"

Benchmark 1: ./target/release/ruff ./crates/ruff_linter/resources/test/cpython/ --no-cache
  Time (mean ± σ):     294.3 ms ±   3.3 ms    [User: 2467.5 ms, System: 89.6 ms]
  Range (min … max):   291.1 ms … 302.8 ms    10 runs

  Warning: Ignoring non-zero exit code.

Benchmark 2: pyflakes crates/ruff_linter/resources/test/cpython
  Time (mean ± σ):     15.786 s ±  0.143 s    [User: 15.560 s, System: 0.214 s]
  Range (min … max):   15.640 s … 16.157 s    10 runs

  Warning: Ignoring non-zero exit code.

Benchmark 3: autoflake --recursive --expand-star-imports --remove-all-unused-imports --remove-unused-variables --remove-duplicate-keys resources/test/cpython
  Time (mean ± σ):      6.175 s ±  0.169 s    [User: 54.102 s, System: 1.057 s]
  Range (min … max):    5.950 s …  6.391 s    10 runs

Benchmark 4: pycodestyle crates/ruff_linter/resources/test/cpython
  Time (mean ± σ):     46.921 s ±  0.508 s    [User: 46.699 s, System: 0.202 s]
  Range (min … max):   46.171 s … 47.863 s    10 runs

  Warning: Ignoring non-zero exit code.

Benchmark 5: flake8 crates/ruff_linter/resources/test/cpython
  Time (mean ± σ):     12.260 s ±  0.321 s    [User: 102.934 s, System: 1.230 s]
  Range (min … max):   11.848 s … 12.933 s    10 runs

  Warning: Ignoring non-zero exit code.

Summary
  './target/release/ruff ./crates/ruff_linter/resources/test/cpython/ --no-cache' ran
   20.98 ± 0.62 times faster than 'autoflake --recursive --expand-star-imports --remove-all-unused-imports --remove-unused-variables --remove-duplicate-keys resources/test/cpython'
   41.66 ± 1.18 times faster than 'flake8 crates/ruff_linter/resources/test/cpython'
   53.64 ± 0.77 times faster than 'pyflakes crates/ruff_linter/resources/test/cpython'
  159.43 ± 2.48 times faster than 'pycodestyle crates/ruff_linter/resources/test/cpython'
```

To benchmark a subset of rules, e.g., LineTooLong (E501) and DocLineTooLong (W505):

```shell
cargo build --release && hyperfine --warmup 10 \
  "./target/release/ruff check ./crates/ruff_linter/resources/test/cpython/ --no-cache -e --select W505,E501"
```

You can run uv venv --project ./scripts/benchmarks, activate the venv, and then run uv sync --project ./scripts/benchmarks to create a working environment for the above. All reported benchmarks were computed using the versions specified by ./scripts/benchmarks/pyproject.toml on Python 3.11.

To benchmark Pylint, remove the following files from the CPython repository:

```shell
rm Lib/test/bad_coding.py \
   Lib/test/bad_coding2.py \
   Lib/test/bad_getattr.py \
   Lib/test/bad_getattr2.py \
   Lib/test/bad_getattr3.py \
   Lib/test/badcert.pem \
   Lib/test/badkey.pem \
   Lib/test/badsyntax_3131.py \
   Lib/test/badsyntax_future10.py \
   Lib/test/badsyntax_future3.py \
   Lib/test/badsyntax_future4.py \
   Lib/test/badsyntax_future5.py \
   Lib/test/badsyntax_future6.py \
   Lib/test/badsyntax_future7.py \
   Lib/test/badsyntax_future8.py \
   Lib/test/badsyntax_future9.py \
   Lib/test/badsyntax_pep3120.py \
   Lib/test/test_asyncio/test_runners.py \
   Lib/test/test_copy.py \
   Lib/test/test_inspect.py \
   Lib/test/test_typing.py
```

Then, from crates/ruff_linter/resources/test/cpython, run: time pylint -j 0 -E $(git ls-files '*.py'). This will execute Pylint with maximum parallelism and only report errors.

To benchmark Pyupgrade, run the following from crates/ruff_linter/resources/test/cpython:

```shell
hyperfine --ignore-failure --warmup 5 --prepare "git reset --hard HEAD" \
  "find . -type f -name \"*.py\" | xargs -P 0 pyupgrade --py311-plus"

Benchmark 1: find . -type f -name "*.py" | xargs -P 0 pyupgrade --py311-plus
  Time (mean ± σ):     30.119 s ±  0.195 s    [User: 28.638 s, System: 0.390 s]
  Range (min … max):   29.813 s … 30.356 s    10 runs
```

Microbenchmarks

The ruff_benchmark crate benchmarks the linter and the formatter on individual files.

You can run the benchmarks with cargo benchmark.

cargo benchmark is an alias for cargo bench -p ruff_benchmark --bench linter --bench formatter --

Benchmark-driven Development

Ruff uses Criterion.rs for benchmarks. You can use --save-baseline=<name> to store an initial baseline benchmark (e.g., on main) and then use --baseline=<name> to compare against that benchmark. Criterion will print a message telling you if the benchmark improved or regressed compared to that baseline.

```shell
# Run once on your "baseline" code
cargo bench -p ruff_benchmark -- --save-baseline=main

# Then iterate with
cargo bench -p ruff_benchmark -- --baseline=main
```

PR Summary

You can use --save-baseline and critcmp to get a pretty comparison between two recordings. This is useful to illustrate the improvements of a PR.

```shell
# On main
cargo bench -p ruff_benchmark -- --save-baseline=main

# After applying your changes
cargo bench -p ruff_benchmark -- --save-baseline=pr

critcmp main pr
```

You must install critcmp (e.g., via cargo install critcmp) for the comparison.

Tips

Profiling Projects

You can either use the microbenchmarks from above or a project directory for benchmarking. There are a lot of profiling tools out there; The Rust Performance Book lists some examples.

Linux

Install perf, build ruff_benchmark with the profiling profile, and then run it with perf:

```shell
cargo bench -p ruff_benchmark --no-run --profile=profiling && perf record --call-graph dwarf -F 9999 cargo bench -p ruff_benchmark --profile=profiling -- --profile-time=1
```

You can also use the ruff_dev launcher to run ruff check multiple times on a repository to gather enough samples for a good flamegraph (change the 999, the sample rate, and the 30, the number of checks, to your liking):

```shell
cargo build --bin ruff_dev --profile=profiling
perf record -g -F 999 target/profiling/ruff_dev repeat --repeat 30 --exit-zero --no-cache path/to/cpython > /dev/null
```

Then convert the recorded profile:

```shell
perf script -F +pid > /tmp/test.perf
```

You can now view the converted file with the Firefox Profiler; see its documentation for a more in-depth guide.

An alternative is to convert the perf data to flamegraph.svg using flamegraph (cargo install flamegraph):

```shell
flamegraph --perfdata perf.data --no-inline
```

Mac

Install cargo-instruments:

```shell
cargo install cargo-instruments
```

Then run the profiler with:

```shell
cargo instruments -t time --bench linter --profile profiling -p ruff_benchmark -- --profile-time=1
```

Otherwise, follow the instructions from the Linux section.

cargo dev

cargo dev is a shortcut for cargo run --package ruff_dev --bin ruff_dev. You can run some useful utilities with it. For example, given a file containing if True: pass # comment, cargo dev print-ast prints the AST:

```text
[
    If(
        StmtIf {
            range: 0..13,
            test: Constant(
                ExprConstant {
                    range: 3..7,
                    value: Bool(
                        true,
                    ),
                    kind: None,
                },
            ),
            body: [
                Pass(
                    StmtPass {
                        range: 9..13,
                    },
                ),
            ],
            orelse: [],
        },
    ),
]
```

cargo dev print-tokens prints the token stream for the same file:

```text
0 If 2
3 True 7
7 Colon 8
9 Pass 13
14 Comment(
    "# comment",
) 23
23 Newline 24
```

And cargo dev print-cst prints the LibCST-style concrete syntax tree:

```text
Module {
    body: [
        Compound(
            If(
                If {
                    test: Name(
                        Name {
                            value: "True",
                            lpar: [],
                            rpar: [],
                        },
                    ),
                    body: SimpleStatementSuite(
                        SimpleStatementSuite {
                            body: [
                                Pass(
                                    Pass {
                                        semicolon: None,
                                    },
                                ),
                            ],
                            leading_whitespace: SimpleWhitespace(
                                " ",
                            ),
                            trailing_whitespace: TrailingWhitespace {
                                whitespace: SimpleWhitespace(
                                    " ",
                                ),
                                comment: Some(
                                    Comment(
                                        "# comment",
                                    ),
                                ),
                                newline: Newline(
                                    None,
                                    Real,
                                ),
                            },
                        },
                    ),
                    orelse: None,
                    leading_lines: [],
                    whitespace_before_test: SimpleWhitespace(
                        " ",
                    ),
                    whitespace_after_test: SimpleWhitespace(
                        "",
                    ),
                    is_elif: false,
                },
            ),
        ),
    ],
    header: [],
    footer: [],
    default_indent: " ",
    default_newline: "\n",
    has_trailing_newline: true,
    encoding: "utf-8",
}
```

Subsystems

Compilation Pipeline

If we view Ruff as a compiler, in which the inputs are paths to Python files and the outputs are diagnostics, then our current compilation pipeline proceeds as follows:

  1. File discovery: Given paths like foo/, locate all Python files in any specified subdirectories, taking into account our hierarchical settings system and any exclude options.
  2. Package resolution: Determine the "package root" for every file by traversing over its parent directories and looking for __init__.py files (a sketch of this step follows this list).
  3. Cache initialization: For every "package root", initialize an empty cache.
  4. Analysis: For every file, in parallel:
    1. Cache read: If the file is cached (i.e., its modification timestamp hasn't changed since it was last analyzed), short-circuit, and return the cached diagnostics.
    2. Tokenization: Run the lexer over the file to generate a token stream.
    3. Indexing: Extract metadata from the token stream, such as: comment ranges, # noqa locations, # isort: off locations, "doc lines", etc.
    4. Token-based rule evaluation: Run any lint rules that are based on the contents of the token stream (e.g., commented-out code).
    5. Filesystem-based rule evaluation: Run any lint rules that are based on the contents of the filesystem (e.g., lack of __init__.py file in a package).
    6. Logical line-based rule evaluation: Run any lint rules that are based on logical lines (e.g., stylistic rules).
    7. Parsing: Run the parser over the token stream to produce an AST. (This consumes the token stream, so anything that relies on the token stream needs to happen before parsing.)
    8. AST-based rule evaluation: Run any lint rules that are based on the AST. This includes the vast majority of lint rules. As part of this step, we also build the semantic model for the current file as we traverse over the AST. Some lint rules are evaluated eagerly, as we iterate over the AST, while others are evaluated in a deferred manner (e.g., unused imports, since we can't determine whether an import is unused until we've finished analyzing the entire file), after we've finished the initial traversal.
    9. Import-based rule evaluation: Run any lint rules that are based on the module's imports (e.g., import sorting). These could, in theory, be included in the AST-based rule evaluation phase — they're just separated for simplicity.
    10. Physical line-based rule evaluation: Run any lint rules that are based on physical lines (e.g., line-length).
    11. Suppression enforcement: Remove any violations that are suppressed via # noqa directives or per-file-ignores.
    12. Cache write: Write the generated diagnostics to the package cache using the file as a key.
  5. Reporting: Print diagnostics in the specified format (text, JSON, etc.), to the specified output channel (stdout, a file, etc.).
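
To illustrate step 2 (package resolution), here's a minimal self-contained sketch, assuming the package root is the topmost ancestor directory that still contains an __init__.py file (the function name and details are illustrative, not Ruff's actual implementation):

```rust
use std::path::{Path, PathBuf};

// Walk upward from a file; the last ancestor directory seen that contains
// an `__init__.py` is treated as the package root.
fn package_root(file: &Path) -> Option<PathBuf> {
    let mut dir = file.parent()?;
    let mut root = None;
    while dir.join("__init__.py").is_file() {
        root = Some(dir.to_path_buf());
        match dir.parent() {
            Some(parent) => dir = parent,
            None => break,
        }
    }
    root
}
```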

Import Categorization

To understand Ruff's import categorization system, we first need to define two concepts: the "project root" and the "package root", both described in the sections below.

For example, given:

```text
my_project
├── pyproject.toml
└── src
    └── foo
        ├── __init__.py
        └── bar
            ├── __init__.py
            └── baz.py
```

Then when analyzing baz.py, the project root would be the top-level directory (./my_project), and the package root would be ./my_project/src/foo.

Project root

The project root has little significance beyond the fact that all relative paths within the loaded configuration file are resolved relative to it.

For example, to indicate that bar above is a namespace package (it isn't, but let's run with it), the pyproject.toml would list namespace-packages = ["./src/bar"], which would resolve to my_project/src/bar.

The same logic applies when providing a configuration file via --config. In that case, the current working directory is used as the project root, and so all paths in that configuration file are resolved relative to the current working directory. (As a general rule, we want to avoid relying on the current working directory as much as possible, to ensure that Ruff exhibits the same behavior regardless of where and how you invoke it, though that's hard to avoid in this case.)

Additionally, if a pyproject.toml file extends another configuration file, Ruff will still use the directory containing that pyproject.toml file as the project root. For example, if ./my_project/pyproject.toml contains:

```toml
[tool.ruff]
extend = "/path/to/pyproject.toml"
```

Then Ruff will use ./my_project as the project root, even though the configuration file extends /path/to/pyproject.toml. As such, if the configuration file at /path/to/pyproject.toml contains any relative paths, they will be resolved relative to ./my_project.

If a project uses nested configuration files, then Ruff would detect multiple project roots, one for each configuration file.

Package root

The package root is used to determine a file's "module path". Consider, again, baz.py. In that case, ./my_project/src/foo was identified as the package root, so the module path for baz.py would resolve to foo.bar.baz, as computed by taking the relative path from the package root (inclusive of the root itself). The module path can be thought of as "the path you would use to import the module" (e.g., import foo.bar.baz).

The package root and module path are used to, e.g., convert relative to absolute imports, and for import categorization, as described below.
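
As a concrete illustration, the module path could be derived from the package root roughly as follows (a hypothetical sketch that ignores edge cases like __init__.py files):

```rust
use std::path::Path;

// Compute a module path like ["foo", "bar", "baz"] (i.e., `foo.bar.baz`)
// for a file, given its package root. The relative path is taken from the
// package root's *parent*, so the root directory itself is included.
fn module_path(package_root: &Path, file: &Path) -> Option<Vec<String>> {
    let relative = file.strip_prefix(package_root.parent()?).ok()?;
    let mut segments: Vec<String> = relative
        .iter()
        .map(|segment| segment.to_string_lossy().into_owned())
        .collect();
    // Drop the `.py` suffix from the final segment (`baz.py` -> `baz`).
    let stem = segments.pop()?;
    segments.push(stem.strip_suffix(".py").unwrap_or(&stem).to_string());
    Some(segments)
}
```

For baz.py above, this yields ["foo", "bar", "baz"], i.e., foo.bar.baz.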

Import categorization

When sorting and formatting import blocks, Ruff categorizes every import into one of five categories:

  1. "Future": the import is a __future__ import. That's easy: just look at the name of the imported module!
  2. "Standard library": the import comes from the Python standard library (e.g., import os). This is easy too: we include a list of all known standard library modules in Ruff itself, so it's a simple lookup.
  3. "Local folder": the import is a relative import (e.g., from .foo import bar). This is easy too: just check if the import includes a level (i.e., a dot-prefix).
  4. "First party": the import is part of the current project. (More on this below.)
  5. "Third party": everything else.

The real challenge lies in determining whether an import is first-party — everything else is either trivial, or (as in the case of third-party) merely defined as "not first-party".

There are three ways in which an import can be categorized as "first-party" (a sketch of the overall decision procedure follows this list):

  1. Explicit settings: the import is marked as such via the known-first-party setting. (This should generally be seen as an escape hatch.)
  2. Same-package: the imported module is in the same package as the current file. This gets back to the importance of the "package root" and the file's "module path". Imagine that we're analyzing baz.py above. If baz.py contains any imports that appear to come from the foo package (e.g., from foo import bar or import foo.bar), they'll be classified as first-party automatically. This check is as simple as comparing the first segment of the current file's module path to the first segment of the import.
  3. Source roots: Ruff supports a src setting, which sets the directories to scan when identifying first-party imports. The algorithm is straightforward: given an import, like import foo, iterate over the directories enumerated in the src setting and, for each directory, check for the existence of a subdirectory foo or a file foo.py.
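
The sketch below ties the five categories and the three first-party checks together. It's hedged and simplified: names like is_std_lib and src_roots are hypothetical stand-ins, and Ruff's real implementation differs in detail.

```rust
use std::path::PathBuf;

// The five buckets described above.
enum ImportSection {
    Future,
    StandardLibrary,
    LocalFolder,
    FirstParty,
    ThirdParty,
}

// `level` is the number of leading dots on the import; `module_path` is the
// current file's module path (e.g., ["foo", "bar", "baz"]); `src_roots`
// mirrors the `src` setting; `is_std_lib` stands in for the bundled list of
// standard-library modules.
fn categorize(
    name: &str,
    level: u32,
    module_path: &[String],
    known_first_party: &[String],
    src_roots: &[PathBuf],
    is_std_lib: impl Fn(&str) -> bool,
) -> ImportSection {
    let first_segment = name.split('.').next().unwrap_or(name);
    if name == "__future__" {
        ImportSection::Future
    } else if level > 0 {
        // Relative imports (e.g., `from .foo import bar`) are "local folder".
        ImportSection::LocalFolder
    } else if is_std_lib(first_segment) {
        ImportSection::StandardLibrary
    } else if known_first_party.iter().any(|m| m == first_segment)
        // Same package: compare first segments of module path and import.
        || module_path.first().map(String::as_str) == Some(first_segment)
        // Source roots: look for `foo/` or `foo.py` under each `src` entry.
        || src_roots.iter().any(|root| {
            root.join(first_segment).is_dir()
                || root.join(format!("{first_segment}.py")).is_file()
        })
    {
        ImportSection::FirstParty
    } else {
        ImportSection::ThirdParty
    }
}
```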

By default, src is set to the project root, along with the "src" subdirectory in the project root. This ensures that Ruff supports both flat and "src" layouts out of the box.