Performance improvements will need some benchmarking. Things are being set up for that, but for the time being we have one big registry generated for tests, which has been committed to the repository. So I'll use this one as support for the commands listed below.
Usually, performance improvement work follows a cycle like this one:
1. Measure
2. Profile
3. Optimize
4. Back to 1.
To measure timings, we can set up benchmarks. Criterion is good for micro-benchmarks and is supported on stable. The default cargo bench is good enough for bigger benchmarks but requires nightly. We'll use the latter for our big registry.
Profiling gives a more granular view of which functions take the most time. We usually profile both CPU and memory. Once the culprits have been identified, it is time to do some optimization, and then back to measuring!
Measure
First we need to add debug symbols to our compilation profiles, so that the binary we generate for benchmarks can be profiled later. So we add the following to Cargo.toml.
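A minimal sketch of what that looks like, using Cargo's standard `debug` profile setting so that optimized builds keep their symbols:

```toml
# Keep debug symbols in optimized builds so profilers can resolve function names.
[profile.release]
debug = true

[profile.bench]
debug = true
```

Note that `[profile.bench]` inherits from `[profile.release]` by default, so setting `debug = true` on the release profile alone is often enough.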
Then, we will be running the large_case benchmark available in the repo. It needs serde to load the registry configuration, which is stored as data in a RON file.
```sh
cargo +nightly bench large_case --features=serde
```
It will print to stdout the name of the generated executable for the benchmark, something like:
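The exact name varies per build; the hash suffix below is just a placeholder:

```
     Running target/release/deps/large_case-<hash>
```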
This wrapper avoids accessing the `incompatibility_store` directly in uv code.
Before:
```rust
let dep_incompats = self.pubgrub.add_version(
package.clone(),
version.clone(),
dependencies,
);
self.pubgrub.partial_solution.add_version(
package.clone(),
version.clone(),
dep_incompats,
&self.pubgrub.incompatibility_store,
);
```
After:
```rust
self.pubgrub.add_incompatibility_from_dependencies(
    package.clone(),
    version.clone(),
    dependencies,
);
```
`add_incompatibility_from_dependencies` is one of the main methods for
the custom interface to pubgrub.
It will also print the timing results; we should note them down so we can tell later whether our changes actually improve things.
Profiling
Using flamegraph
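A sketch of how this could be invoked, assuming the cargo-flamegraph tool is installed (`cargo install flamegraph`):

```sh
# Run the benchmark under flamegraph; produces a flamegraph.svg in the current directory.
cargo flamegraph --bench large_case --features=serde
```

The resulting SVG is interactive: opening it in a browser lets us zoom into the call stacks that dominate the run time.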
Using perf
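On Linux, perf can record the benchmark executable directly. A sketch, where the executable path is the one printed by cargo bench above (the hash suffix is a placeholder):

```sh
# Record with DWARF-based call graphs (this is why we needed debug symbols),
# then browse the results interactively.
perf record --call-graph dwarf ./target/release/deps/large_case-<hash> --bench
perf report
```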
Using valgrind + callgrind + KCachegrind
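A sketch of the callgrind invocation, again using the benchmark executable printed by cargo bench (hash suffix is a placeholder); expect the run to be much slower than normal:

```sh
valgrind --tool=callgrind ./target/release/deps/large_case-<hash> --bench
```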
That generates a `callgrind.out.<pid>` file that we can then open with KCachegrind.