Benchmarking tools #27

Closed
mpizenberg opened this issue Oct 9, 2020 · 1 comment
@mpizenberg (Member)

Performance improvements will need some benchmarking. Things are being set up for that, but for the time being we have one big registry, generated for tests, that has been committed to the repository. So I'll use this one as support for the commands listed below.

Usually, performance improvement work follows a cycle like this one:

  1. Measure
  2. Profile
  3. Optimize
  4. Back to 1.

To measure timing, we can set up benchmarks. Criterion is good for micro-benchmarks and is supported on stable. The default cargo bench harness is good enough for bigger benchmarks but requires nightly. We'll use the latter for our big registry.
Profiling gives a more granular view of which functions take the most time. We usually profile both CPU and memory. Once the culprits have been identified, it is time to do some optimization, and then back to measuring!

Measure

First, we need to add debug symbols to our compilation profiles so that the binaries we generate for benchmarks can be profiled later. So we add the following to Cargo.toml.

# in Cargo.toml
[profile.release]
debug = true

[profile.bench]
debug = true

Then we will run the large_case benchmark available in the repo. It needs serde to load the registry configuration, which is stored as data in a RON file.

cargo +nightly bench large_case --features=serde

It will print to stdout the name of the generated benchmark executable, something like:

     Running target/release/deps/large_case-96ac47771292fd35

It will also print the timing results; we should note them down to see if we can make improvements later.
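Apart from the bench harness, timings can also be spot-checked by hand with `std::time::Instant` for quick ad-hoc measurements. Below is a minimal sketch; the `workload` function is a hypothetical stand-in for a real solver run, not code from this repo:

```rust
use std::time::{Duration, Instant};

// Hypothetical workload standing in for a real solver run.
fn workload() -> u64 {
    (0..1_000_000u64).sum()
}

fn main() {
    // Step 1 of the cycle: measure. Time several runs and keep the best,
    // since the minimum is usually the most stable summary of a hot loop.
    let mut best: Option<Duration> = None;
    for _ in 0..5 {
        let start = Instant::now();
        let result = workload();
        let elapsed = start.elapsed();
        best = Some(best.map_or(elapsed, |b| b.min(elapsed)));
        // Use the result so the compiler cannot optimize the call away.
        assert_eq!(result, 499_999_500_000);
    }
    println!("best of 5: {:?}", best.unwrap());
}
```

This is only useful for rough comparisons; Criterion or cargo bench handle warm-up and statistical noise properly.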

Profiling

Using flamegraph

flamegraph -o flamegraph.svg target/release/deps/large_case-96ac47771292fd35

[screenshot: pubgrub-flamegraph]

Using perf

sudo perf record target/release/deps/large_case-96ac47771292fd35
sudo perf report

[screenshot: pubgrub-perf]

Using valgrind + callgrind + KCachegrind

valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes \
    --simulate-cache=yes target/release/deps/large_case-96ac47771292fd35

That generates a callgrind.out.<pid> file that we can then open with KCachegrind.

[screenshot: pubgrub-kcachegrind]

@Eh2406 (Member) commented Nov 17, 2020

This link was just announced and I am very excited to read it: https://github.com/nnethercote/perf-book

konstin added a commit that referenced this issue Oct 23, 2024
This wrapper avoids accessing the `incompatibility_store` directly in uv
code.

Before:

```rust
let dep_incompats = self.pubgrub.add_version(
    package.clone(),
    version.clone(),
    dependencies,
);
self.pubgrub.partial_solution.add_version(
    package.clone(),
    version.clone(),
    dep_incompats,
    &self.pubgrub.incompatibility_store,
);
```

After:

```rust
self.pubgrub.add_incompatibility_from_dependencies(package.clone(), version.clone(), dependencies);
```

`add_incompatibility_from_dependencies` is one of the main methods for
the custom interface to pubgrub.
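The commit's idea is a common encapsulation pattern: callers get one method instead of two coupled calls on internal fields. The toy sketch below shows that pattern with simplified stand-in types; it is not pubgrub's or uv's actual API:

```rust
// Toy sketch of the wrapper pattern from the commit: instead of caller
// code touching `incompatibility_store` and `partial_solution` directly,
// one method on the solver state performs both steps. All types here are
// simplified stand-ins, not pubgrub's real data structures.
#[derive(Default)]
struct State {
    incompatibility_store: Vec<String>,
    partial_solution: Vec<(String, u32)>,
}

impl State {
    // One public entry point replaces the two-step dance in caller code.
    fn add_incompatibility_from_dependencies(
        &mut self,
        package: String,
        version: u32,
        dependencies: Vec<String>,
    ) {
        // Step 1: record incompatibilities derived from the dependencies.
        for dep in &dependencies {
            self.incompatibility_store
                .push(format!("{package} {version} requires {dep}"));
        }
        // Step 2: register the version with the partial solution.
        self.partial_solution.push((package, version));
    }
}

fn main() {
    let mut state = State::default();
    state.add_incompatibility_from_dependencies(
        "root".to_string(),
        1,
        vec!["a".to_string(), "b".to_string()],
    );
    assert_eq!(state.incompatibility_store.len(), 2);
    assert_eq!(state.partial_solution, vec![("root".to_string(), 1)]);
}
```

The benefit is the same as in the commit: internal fields stay private to the solver state, so downstream code cannot get the two steps out of sync.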