Contributing to Ruff

Welcome! We're happy to have you here. Thank you in advance for your contribution to Ruff.

The Basics

Ruff welcomes contributions in the form of Pull Requests.

For small changes (e.g., bug fixes), feel free to submit a PR.

For larger changes (e.g., new lint rules, new functionality, new configuration options), consider creating an issue outlining your proposed change. You can also join us on Discord to discuss your idea with the community.

If you're looking for a place to start, we recommend implementing a new lint rule (see: Adding a new lint rule), which will allow you to learn from and pattern-match against the examples in the existing codebase. Many lint rules are inspired by existing Python plugins, which can serve as reference implementations.

As a concrete example: consider taking on one of the rules from the flake8-pyi plugin, and looking to the originating Python source for guidance.

Prerequisites

Ruff is written in Rust. You'll need to install the Rust toolchain for development.

You'll also need Insta to update snapshot tests:

cargo install cargo-insta

and pre-commit to run some validation checks:

pipx install pre-commit  # or `pip install pre-commit` if you have a virtualenv

Development

After cloning the repository, run Ruff locally with:

cargo run -p ruff_cli -- check /path/to/file.py --no-cache

Prior to opening a pull request, ensure that your code has been auto-formatted, and that it passes both the lint and test validation checks:

cargo clippy --workspace --all-targets --all-features -- -D warnings  # Rust linting
RUFF_UPDATE_SCHEMA=1 cargo test  # Rust testing and updating ruff.schema.json
pre-commit run --all-files --show-diff-on-failure  # Rust and Python formatting, Markdown and Python linting, etc.

These checks will run on GitHub Actions when you open your Pull Request, but running them locally will save you time and expedite the merge process.

Note that many code changes also require updating the snapshot tests, which is done interactively after running cargo test like so:

cargo insta review

Your Pull Request will be reviewed by a maintainer, which may involve a few rounds of iteration prior to merging.

Project Structure

Ruff is structured as a monorepo with a flat crate layout: all crates live in a top-level crates directory.

The vast majority of the code, including all lint rules, lives in the ruff crate (located at crates/ruff). As a contributor, that's the crate that'll be most relevant to you.

At time of writing, the repository includes the following crates:

  • crates/ruff: library crate containing all lint rules and the core logic for running them.
  • crates/ruff_benchmark: binary crate for running micro-benchmarks.
  • crates/ruff_cache: library crate for caching lint results.
  • crates/ruff_cli: binary crate containing Ruff's command-line interface.
  • crates/ruff_dev: binary crate containing utilities used in the development of Ruff itself (e.g., cargo dev generate-all).
  • crates/ruff_diagnostics: library crate for the lint diagnostics APIs.
  • crates/ruff_formatter: library crate for generic code formatting logic based on an intermediate representation.
  • crates/ruff_index: library crate inspired by rustc_index.
  • crates/ruff_macros: library crate containing macros used by Ruff.
  • crates/ruff_python_ast: library crate containing Python-specific AST types and utilities.
  • crates/ruff_python_formatter: library crate containing Python-specific code formatting logic.
  • crates/ruff_python_semantic: library crate containing Python-specific semantic analysis logic, including Ruff's semantic model.
  • crates/ruff_python_stdlib: library crate containing Python-specific standard library data.
  • crates/ruff_python_whitespace: library crate containing Python-specific whitespace analysis logic.
  • crates/ruff_rustpython: library crate containing RustPython-specific utilities.
  • crates/ruff_testing_macros: library crate containing macros used for testing Ruff.
  • crates/ruff_textwrap: library crate to indent and dedent Python source code.
  • crates/ruff_wasm: library crate for exposing Ruff as a WebAssembly module.

Example: Adding a new lint rule

At a high level, the steps involved in adding a new lint rule are as follows:

  1. Determine a name for the new rule as per our rule naming convention (e.g., AssertFalse, as in, "allow assert False").

  2. Create a file for your rule (e.g., crates/ruff/src/rules/flake8_bugbear/rules/assert_false.rs).

  3. In that file, define a violation struct (e.g., pub struct AssertFalse). You can grep for #[violation] to see examples.

  4. In that file, define a function that adds the violation to the diagnostic list as appropriate (e.g., pub(crate) fn assert_false) based on whatever inputs are required for the rule (e.g., an ast::StmtAssert node).

  5. Define the logic for triggering the violation in crates/ruff/src/checkers/ast/mod.rs (for AST-based checks), crates/ruff/src/checkers/tokens.rs (for token-based checks), crates/ruff/src/checkers/lines.rs (for text-based checks), or crates/ruff/src/checkers/filesystem.rs (for filesystem-based checks).

  6. Map the violation struct to a rule code in crates/ruff/src/codes.rs (e.g., B011).

  7. Add proper testing for your rule.

  8. Update the generated files (documentation and generated code).

To trigger the violation, you'll likely want to augment the logic in crates/ruff/src/checkers/ast/mod.rs to call your new function at the appropriate time and with the appropriate inputs. The Checker defined therein is a Python AST visitor that iterates over the AST, builds up a semantic model, and calls out to lint rule analyzer functions as it goes.

If you need to inspect the AST, you can run cargo dev print-ast with a Python file. Grep for the Diagnostic::new invocations to understand how other, similar rules are implemented.
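
As a rough sketch of steps 3 and 4 above, a rule file might look something like the following. This is an illustration, not code copied from the repository: the exact trait names, imports, and function signatures vary, and is_const_false is a hypothetical helper invented for this sketch. Grep for #[violation] and Diagnostic::new to see the real patterns before writing your own.

// crates/ruff/src/rules/flake8_bugbear/rules/assert_false.rs
// Sketch only: trait impls, imports, and signatures are illustrative.

#[violation]
pub struct AssertFalse;

impl Violation for AssertFalse {
    #[derive_message_formats]
    fn message(&self) -> String {
        format!("Do not `assert False`, raise `AssertionError` instead")
    }
}

/// Analyzer function, called from the AST checker with the inputs the rule needs.
pub(crate) fn assert_false(checker: &mut Checker, stmt: &ast::StmtAssert) {
    // `is_const_false` is a hypothetical helper for this sketch.
    if is_const_false(&stmt.test) {
        checker
            .diagnostics
            .push(Diagnostic::new(AssertFalse, stmt.range()));
    }
}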

Once you're satisfied with your code, add tests for your rule. See rule testing for more details.

Finally, regenerate the documentation and other generated assets (like our JSON Schema) with: cargo dev generate-all.

Rule naming convention

Like Clippy, Ruff's rule names should make grammatical and logical sense when read as "allow ${rule}" or "allow ${rule} items", as in the context of suppression comments.

For example, AssertFalse fits this convention: it flags assert False statements, and so a suppression comment would be framed as "allow assert False".

As such, rule names should...

  • Highlight the pattern that is being linted against, rather than the preferred alternative. For example, AssertFalse guards against assert False statements.

  • Not contain instructions on how to fix the violation, which instead belong in the rule documentation and the autofix_title.

  • Not contain a redundant prefix, like Disallow or Banned, which are already implied by the convention.

When re-implementing rules from other linters, we prioritize adhering to this convention over preserving the original rule name.

Rule testing: fixtures and snapshots

To test rules, Ruff uses snapshots of Ruff's output for a given file (fixture). Generally, there will be one file per rule (e.g., E402.py), and each file will contain all necessary examples of both violations and non-violations. cargo insta review will generate a snapshot file containing Ruff's output for each fixture, which you can then commit alongside your changes.

Once you've completed the code for the rule itself, you can define tests with the following steps:

  1. Add a Python file to crates/ruff/resources/test/fixtures/[linter] that contains the code you want to test. The file name should match the rule name (e.g., E402.py), and it should include examples of both violations and non-violations.

  2. Run Ruff locally against your file and verify the output is as expected. Once you're satisfied with the output (you see the violations you expect, and no others), proceed to the next step. For example, if you're adding a new rule named E402, you would run:

    cargo run -p ruff_cli -- check crates/ruff/resources/test/fixtures/pycodestyle/E402.py --no-cache
  3. Add the test to the relevant crates/ruff/src/rules/[linter]/mod.rs file. If you're contributing a rule to a pre-existing set, you should be able to find a similar example to pattern-match against. If you're adding a new linter, you'll need to create a new mod.rs file (see, e.g., crates/ruff/src/rules/flake8_bugbear/mod.rs). A sketch of a typical test registration appears after this list.

  4. Run cargo test. Your test will fail, but you'll be prompted to follow up with cargo insta review. Run cargo insta review, review and accept the generated snapshot, then commit the snapshot file alongside the rest of your changes.

  5. Run cargo test again to ensure that your test passes.
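
For reference, the test registration in mod.rs typically follows a pattern along these lines. This is a sketch based on the existing tests, not verbatim code; check a neighboring mod.rs for the exact helper, macro, and import names used in the codebase.

// crates/ruff/src/rules/flake8_bugbear/mod.rs (sketch; exact helpers may differ)
#[cfg(test)]
mod tests {
    use std::path::Path;

    use anyhow::Result;
    use test_case::test_case;

    use crate::registry::Rule;
    use crate::test::test_path;
    use crate::{assert_messages, settings};

    // One #[test_case] per rule/fixture pair.
    #[test_case(Rule::AssertFalse, Path::new("B011.py"))]
    fn rules(rule_code: Rule, path: &Path) -> Result<()> {
        let snapshot = format!("{}_{}", rule_code.noqa_code(), path.to_string_lossy());
        let diagnostics = test_path(
            Path::new("flake8_bugbear").join(path).as_path(),
            &settings::Settings::for_rule(rule_code),
        )?;
        assert_messages!(snapshot, diagnostics);
        Ok(())
    }
}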

Example: Adding a new configuration option

Ruff's user-facing settings live in a few different places.

First, the command-line options are defined via the Cli struct in crates/ruff/src/cli.rs.

Second, the pyproject.toml options are defined in crates/ruff/src/settings/options.rs (via the Options struct), crates/ruff/src/settings/configuration.rs (via the Configuration struct), and crates/ruff/src/settings/mod.rs (via the Settings struct). These represent, respectively: the schema used to parse the pyproject.toml file; an internal, intermediate representation; and the final, internal representation used to power Ruff.

To add a new configuration option, you'll likely want to modify these latter few files (along with cli.rs, if appropriate). If you want to pattern-match against an existing example, grep for dummy_variable_rgx, which defines a regular expression to match against acceptable unused variables (e.g., _).
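
For illustration, a single option flows through those three layers roughly as follows. This is a simplified sketch, not the actual definitions: the real structs carry additional derives and attributes (e.g., for documentation and schema generation), and the field shapes shown here are assumptions.

// Sketch only -- field types and shapes are illustrative.

// crates/ruff/src/settings/options.rs: the raw schema parsed from pyproject.toml,
// so every field is optional.
pub struct Options {
    pub dummy_variable_rgx: Option<String>,
    // ...
}

// crates/ruff/src/settings/configuration.rs: the intermediate representation,
// after parsing and merging the configuration sources.
pub struct Configuration {
    pub dummy_variable_rgx: Option<Regex>,
    // ...
}

// crates/ruff/src/settings/mod.rs: the final representation, with defaults applied,
// that powers Ruff at runtime.
pub struct Settings {
    pub dummy_variable_rgx: Regex,
    // ...
}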

Note that plugin-specific configuration options are defined in their own modules (e.g., crates/ruff/src/rules/flake8_unused_arguments/settings.rs).

You may also want to add the new configuration option to the flake8-to-ruff tool, which is responsible for converting flake8 configuration files to Ruff's TOML format. This logic lives in crates/ruff/src/flake8_to_ruff/converter.rs.

Finally, regenerate the documentation and generated code with cargo dev generate-all.

MkDocs

To preview any changes to the documentation locally:

  1. Install the Rust toolchain.

  2. Install MkDocs and Material for MkDocs with:

    pip install -r docs/requirements.txt
  3. Generate the MkDocs site with:

    python scripts/generate_mkdocs.py
  4. Run the development server with:

    mkdocs serve

The documentation should then be available locally at http://127.0.0.1:8000/docs/.

Release Process

As of now, Ruff has an ad hoc release process: releases are cut with high frequency via GitHub Actions, which automatically generates the appropriate wheels across architectures and publishes them to PyPI.

Ruff follows the semver versioning standard. However, as pre-1.0 software, even patch releases may contain non-backwards-compatible changes.

Creating a new release

  1. Update the version with rg 0.0.269 --files-with-matches | xargs sed -i 's/0.0.269/0.0.270/g'
  2. Update BREAKING_CHANGES.md
  3. Create a PR with the version and BREAKING_CHANGES.md updated
  4. Merge the PR
  5. Run the release workflow with the version number (without a leading v) as input. Make sure main has your merged PR as its last commit
  6. The release workflow will do the following:
    1. Build all the assets. If this fails (even though we tested in step 4), we haven't tagged or uploaded anything, so you can restart after pushing a fix
    2. Upload to PyPI
    3. Create and push the git tag (from pyproject.toml). We create the git tag only here because we can't change it (#4468), so we want to make sure everything up to and including publishing to PyPI worked.
    4. Attach artifacts to draft GitHub release
    5. Trigger downstream repositories. This can fail without causing fallout; it is possible (if inconvenient) to trigger the downstream jobs manually
  7. Create release notes in the GitHub UI and promote the release from draft to published (https://github.com/charliermarsh/ruff/releases/new)
  8. If needed, update the schemastore
  9. If needed, update ruff-lsp and ruff-vscode

Ecosystem CI

GitHub Actions will run your changes against a number of real-world projects from GitHub and report on any diagnostic differences. You can also run those checks locally via:

python scripts/check_ecosystem.py path/to/your/ruff path/to/older/ruff

You can also run the Ecosystem CI check in a Docker container across a larger set of projects by downloading the known-github-tomls.json as github_search.jsonl and following the instructions in scripts/Dockerfile.ecosystem. Note that this check will take a while to run.

Benchmarking and Profiling

We have several ways of benchmarking and profiling Ruff:

  • Our main performance benchmark comparing Ruff with other tools on the CPython codebase
  • Microbenchmarks that run the linter or the formatter on individual files. These run on pull requests.
  • Profiling the linter on either the microbenchmarks or entire projects

CPython Benchmark

First, clone CPython. It's a large and diverse Python codebase, which makes it a good target for benchmarking.

git clone --branch 3.10 https://github.com/python/cpython.git crates/ruff/resources/test/cpython

To benchmark the release build:

cargo build --release && hyperfine --warmup 10 \
  "./target/release/ruff ./crates/ruff/resources/test/cpython/ --no-cache -e" \
  "./target/release/ruff ./crates/ruff/resources/test/cpython/ -e"

Benchmark 1: ./target/release/ruff ./crates/ruff/resources/test/cpython/ --no-cache
  Time (mean ± σ):     293.8 ms ±   3.2 ms    [User: 2384.6 ms, System: 90.3 ms]
  Range (min … max):   289.9 ms … 301.6 ms    10 runs

Benchmark 2: ./target/release/ruff ./crates/ruff/resources/test/cpython/
  Time (mean ± σ):      48.0 ms ±   3.1 ms    [User: 65.2 ms, System: 124.7 ms]
  Range (min … max):    45.0 ms …  66.7 ms    62 runs

Summary
  './target/release/ruff ./crates/ruff/resources/test/cpython/' ran
    6.12 ± 0.41 times faster than './target/release/ruff ./crates/ruff/resources/test/cpython/ --no-cache'

To benchmark against the ecosystem's existing tools:

hyperfine --ignore-failure --warmup 5 \
  "./target/release/ruff ./crates/ruff/resources/test/cpython/ --no-cache" \
  "pyflakes crates/ruff/resources/test/cpython" \
  "autoflake --recursive --expand-star-imports --remove-all-unused-imports --remove-unused-variables --remove-duplicate-keys resources/test/cpython" \
  "pycodestyle crates/ruff/resources/test/cpython" \
  "flake8 crates/ruff/resources/test/cpython"

Benchmark 1: ./target/release/ruff ./crates/ruff/resources/test/cpython/ --no-cache
  Time (mean ± σ):     294.3 ms ±   3.3 ms    [User: 2467.5 ms, System: 89.6 ms]
  Range (min … max):   291.1 ms … 302.8 ms    10 runs

  Warning: Ignoring non-zero exit code.

Benchmark 2: pyflakes crates/ruff/resources/test/cpython
  Time (mean ± σ):     15.786 s ±  0.143 s    [User: 15.560 s, System: 0.214 s]
  Range (min … max):   15.640 s … 16.157 s    10 runs

  Warning: Ignoring non-zero exit code.

Benchmark 3: autoflake --recursive --expand-star-imports --remove-all-unused-imports --remove-unused-variables --remove-duplicate-keys resources/test/cpython
  Time (mean ± σ):      6.175 s ±  0.169 s    [User: 54.102 s, System: 1.057 s]
  Range (min … max):    5.950 s …  6.391 s    10 runs

Benchmark 4: pycodestyle crates/ruff/resources/test/cpython
  Time (mean ± σ):     46.921 s ±  0.508 s    [User: 46.699 s, System: 0.202 s]
  Range (min … max):   46.171 s … 47.863 s    10 runs

  Warning: Ignoring non-zero exit code.

Benchmark 5: flake8 crates/ruff/resources/test/cpython
  Time (mean ± σ):     12.260 s ±  0.321 s    [User: 102.934 s, System: 1.230 s]
  Range (min … max):   11.848 s … 12.933 s    10 runs

  Warning: Ignoring non-zero exit code.

Summary
  './target/release/ruff ./crates/ruff/resources/test/cpython/ --no-cache' ran
   20.98 ± 0.62 times faster than 'autoflake --recursive --expand-star-imports --remove-all-unused-imports --remove-unused-variables --remove-duplicate-keys resources/test/cpython'
   41.66 ± 1.18 times faster than 'flake8 crates/ruff/resources/test/cpython'
   53.64 ± 0.77 times faster than 'pyflakes crates/ruff/resources/test/cpython'
  159.43 ± 2.48 times faster than 'pycodestyle crates/ruff/resources/test/cpython'

You can run poetry install from ./scripts/benchmarks to create a working environment for the above. All reported benchmarks were computed using the versions specified by ./scripts/benchmarks/pyproject.toml on Python 3.11.

To benchmark Pylint, remove the following files from the CPython repository:

rm Lib/test/bad_coding.py \
  Lib/test/bad_coding2.py \
  Lib/test/bad_getattr.py \
  Lib/test/bad_getattr2.py \
  Lib/test/bad_getattr3.py \
  Lib/test/badcert.pem \
  Lib/test/badkey.pem \
  Lib/test/badsyntax_3131.py \
  Lib/test/badsyntax_future10.py \
  Lib/test/badsyntax_future3.py \
  Lib/test/badsyntax_future4.py \
  Lib/test/badsyntax_future5.py \
  Lib/test/badsyntax_future6.py \
  Lib/test/badsyntax_future7.py \
  Lib/test/badsyntax_future8.py \
  Lib/test/badsyntax_future9.py \
  Lib/test/badsyntax_pep3120.py \
  Lib/test/test_asyncio/test_runners.py \
  Lib/test/test_copy.py \
  Lib/test/test_inspect.py \
  Lib/test/test_typing.py

Then, from crates/ruff/resources/test/cpython, run: time pylint -j 0 -E $(git ls-files '*.py'). This will execute Pylint with maximum parallelism and only report errors.

To benchmark Pyupgrade, run the following from crates/ruff/resources/test/cpython:

hyperfine --ignore-failure --warmup 5 --prepare "git reset --hard HEAD" \
  "find . -type f -name \"*.py\" | xargs -P 0 pyupgrade --py311-plus"

Benchmark 1: find . -type f -name "*.py" | xargs -P 0 pyupgrade --py311-plus
  Time (mean ± σ):     30.119 s ±  0.195 s    [User: 28.638 s, System: 0.390 s]
  Range (min … max):   29.813 s … 30.356 s    10 runs

Microbenchmarks

The ruff_benchmark crate benchmarks the linter and the formatter on individual files.

You can run the benchmarks with

cargo benchmark

Benchmark-driven Development

Ruff uses Criterion.rs for benchmarks. You can use --save-baseline=<name> to store an initial baseline benchmark (e.g., on main) and then use --baseline=<name> to compare against that baseline. Criterion will print a message telling you whether the benchmark improved or regressed compared to that baseline.

# Run once on your "baseline" code
cargo benchmark --save-baseline=main

# Then iterate with
cargo benchmark --baseline=main

PR Summary

You can use --save-baseline and critcmp to get a pretty comparison between two recordings. This is useful to illustrate the improvements of a PR.

# On main
cargo benchmark --save-baseline=main

# After applying your changes
cargo benchmark --save-baseline=pr

critcmp main pr

You must install critcmp for the comparison.

cargo install critcmp

Tips

  • Use cargo benchmark <filter> to only run specific benchmarks. For example: cargo benchmark linter/pydantic to only run the pydantic tests.
  • Use cargo benchmark --quiet for more concise output (without the statistical detail)
  • Use cargo benchmark --quick to get faster results (more prone to noise)

Profiling Projects

You can profile either the microbenchmarks from above or an entire project directory. There are a lot of profiling tools out there; The Rust Performance Book lists some examples.

Linux

Install perf, build ruff_benchmark with the release-debug profile, and then run it under perf:

cargo bench -p ruff_benchmark --no-run --profile=release-debug && perf record --call-graph dwarf -F 9999 cargo bench -p ruff_benchmark --profile=release-debug -- --profile-time=1

You can also use the ruff_dev launcher to run ruff check multiple times on a repository to gather enough samples for a good flamegraph (adjust the 999, which is the sample rate, and the 30, which is the number of checks, to your liking):

cargo build --bin ruff_dev --profile=release-debug
perf record -g -F 999 target/release-debug/ruff_dev repeat --repeat 30 --exit-zero --no-cache path/to/cpython > /dev/null

Then, convert the recorded profile:

perf script -F +pid > /tmp/test.perf

You can now view the converted file with the Firefox Profiler; a more in-depth guide is available here.

An alternative is to convert the perf data to flamegraph.svg using flamegraph (cargo install flamegraph):

flamegraph --perfdata perf.data

Mac

Install cargo-instruments:

cargo install cargo-instruments

Then, run the profiler with:

cargo instruments -t time --bench linter --profile release-debug -p ruff_benchmark -- --profile-time=1
  • -t: Specifies what to profile. Useful options are time to profile the wall time and alloc for profiling the allocations.
  • You may want to pass an additional filter to run a single test file.

Otherwise, follow the instructions from the Linux section.