
Add: Benchmarking with Criterion #53

Merged: 2 commits into bluerobotics:master on Oct 23, 2023

Conversation

@RaulTrombin (Member) commented:

A fast way to benchmark our functions; it seems to work pretty well both locally and on the hosted CI runner.
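
For context, a minimal sketch of what a Criterion bench file along these lines might look like. This is an illustration, not the PR's actual file: the `Navigator::new()`, `init()`, `read_adc_all()`, and `read_accel()` names are assumed from the benchmark labels below, and it presumes `criterion` in `[dev-dependencies]` plus a `[[bench]]` entry named `bench` with `harness = false` in Cargo.toml:

```rust
// benches/bench.rs — illustrative sketch only; method names are assumptions.
use criterion::{criterion_group, criterion_main, Criterion};
use navigator_rs::Navigator;

fn criterion_benchmark(c: &mut Criterion) {
    // Construction and initialization are measured as separate benchmarks
    // ("new" and "init" in the output below).
    c.bench_function("new", |b| b.iter(Navigator::new));

    let mut nav = Navigator::new();
    c.bench_function("init", |b| b.iter(|| nav.init()));

    // Sensor reads; Criterion black-boxes each closure's return value,
    // so the reads are not optimized away.
    c.bench_function("read_adc_all", |b| b.iter(|| nav.read_adc_all()));
    c.bench_function("read_accel", |b| b.iter(|| nav.read_accel()));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```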

```
cargo bench --jobs 1 --bench bench | tee output.txt

new                     time:   [190.92 ms 190.98 ms 191.04 ms]
Found 12 outliers among 100 measurements (12.00%)
  8 (8.00%) low severe
  2 (2.00%) high mild
  2 (2.00%) high severe

init                    time:   [328.32 ms 328.46 ms 328.60 ms]
Found 16 outliers among 100 measurements (16.00%)
  1 (1.00%) low severe
  1 (1.00%) low mild
  1 (1.00%) high mild
  13 (13.00%) high severe

read_adc                time:   [9.9898 ms 10.011 ms 10.030 ms]
Found 19 outliers among 100 measurements (19.00%)
  13 (13.00%) low severe
  4 (4.00%) low mild
  2 (2.00%) high severe

read_adc_all            time:   [39.829 ms 39.941 ms 40.044 ms]
Found 21 outliers among 100 measurements (21.00%)
  19 (19.00%) low severe
  1 (1.00%) low mild
  1 (1.00%) high severe

read_accel              time:   [219.62 µs 221.09 µs 222.62 µs]
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe

read_gyro               time:   [220.10 µs 221.02 µs 221.97 µs]
Found 13 outliers among 100 measurements (13.00%)
  8 (8.00%) low mild
  1 (1.00%) high mild
  4 (4.00%) high severe

read_mag                time:   [80.236 ms 87.279 ms 94.314 ms]
Found 22 outliers among 100 measurements (22.00%)
  15 (15.00%) low severe
  6 (6.00%) low mild
  1 (1.00%) high mild

read_pressure           time:   [1.3235 ms 1.3275 ms 1.3307 ms]
Found 17 outliers among 100 measurements (17.00%)
  12 (12.00%) low severe
  2 (2.00%) low mild
  2 (2.00%) high mild
  1 (1.00%) high severe

read_temperature        time:   [655.14 µs 658.95 µs 662.38 µs]
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high severe

read_all                time:   [44.203 ms 44.341 ms 44.467 ms]
Found 16 outliers among 100 measurements (16.00%)
  16 (16.00%) low severe

pwm_enable              time:   [39.916 µs 40.393 µs 40.864 µs]
Found 4 outliers among 100 measurements (4.00%)
  3 (3.00%) high mild
  1 (1.00%) high severe

set_pwm_channel_value   time:   [124.33 µs 125.26 µs 125.98 µs]

set_pwm_freq_hz         time:   [140.49 µs 142.74 µs 145.37 µs]

set_pwm_freq_prescale   time:   [157.34 µs 160.03 µs 162.46 µs]
Found 17 outliers among 100 measurements (17.00%)
  10 (10.00%) low severe
  5 (5.00%) high mild
  2 (2.00%) high severe

set_neopixel            time:   [259.98 µs 260.94 µs 262.19 µs]
Found 10 outliers among 100 measurements (10.00%)
  1 (1.00%) low mild
  1 (1.00%) high mild
  8 (8.00%) high severe

set_led                 time:   [39.801 µs 40.025 µs 40.246 µs]
Found 15 outliers among 100 measurements (15.00%)
  8 (8.00%) low mild
  3 (3.00%) high mild
  4 (4.00%) high severe

set_led_toggle          time:   [86.171 µs 86.801 µs 87.462 µs]
Found 6 outliers among 100 measurements (6.00%)
  4 (4.00%) high mild
  2 (2.00%) high severe

get_led                 time:   [45.210 µs 45.947 µs 46.941 µs]
Found 15 outliers among 100 measurements (15.00%)
  9 (9.00%) low mild
  3 (3.00%) high mild
  3 (3.00%) high severe
```

@patrickelectric (Member) left a comment:

The tests do not check whether the function calls are failing; we may be failing fast without knowing it, so the benchmark numbers do not make much sense. We need to check that there are no errors.

@RaulTrombin (Member, Author) replied:

> The tests do not check whether the function calls are failing; we may be failing fast without knowing it, so the benchmark numbers do not make much sense. We need to check that there are no errors.

Our functions return the types directly, not a `Result`, but internally they `unwrap()` the inner results, and the run seems to stop if any call hits an error.

For example:

```rust
pub fn read_test(&mut self) -> f32 {
    self.test_error().unwrap()
}

pub fn test_error(&mut self) -> Result<f32, std::io::Error> {
    Err(std::io::Error::new(std::io::ErrorKind::Other, "deu ruim"))
}
```

Running the bench:

```
blueos@raspberrypi-armv7:~/github/navigator-rs $ cargo bench
   Compiling navigator-rs v0.2.1 (/home/blueos/github/navigator-rs)
warning: unused imports: `AdcChannel`, `PwmChannel`, `UserLed`
 --> benches/bench.rs:2:20
  |
2 | use navigator_rs::{AdcChannel, Navigator, PwmChannel, UserLed};
  |                    ^^^^^^^^^^             ^^^^^^^^^^  ^^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: `navigator-rs` (bench "bench") generated 1 warning (run `cargo fix --bench "bench"` to apply 1 suggestion)
    Finished bench [optimized] target(s) in 4.10s
     Running unittests src/lib.rs (target/release/deps/navigator_rs-22e31adf3b248ca7)
     Running benches/bench.rs (target/release/deps/bench-bffbedc8b74e0a61)
Gnuplot not found, using plotters backend
Benchmarking read_test: Warming up for 3.0000 s
thread 'main' panicked at src/lib.rs:806:27:
called `Result::unwrap()` on an `Err` value: Custom { kind: Other, error: "deu ruim" }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

error: bench failed, to rerun pass `--bench bench`
```

@patrickelectric (Member) commented Oct 23, 2023:

> Our functions return the types directly, not a `Result`, but internally they `unwrap()` the inner results […]

The library functions should return a `Result` instead of unwrapping internally; error handling should be passed on to the users.
I believe we have already discussed that.
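
A minimal sketch of that direction, reusing the hypothetical `read_test`/`test_error` pair from the example above (these names are illustrative, not the crate's real API):

```rust
// Hypothetical refactor: propagate the inner error instead of unwrapping,
// so the caller decides what a failure means.
pub fn read_test(&mut self) -> Result<f32, std::io::Error> {
    self.test_error()
}
```

The bench (inside the bench function sketched earlier) can then surface failures at the call site with a clear message instead of panicking deep inside the library:

```rust
c.bench_function("read_test", |b| {
    // `expect` still aborts the run, but the failure is now explicit,
    // labeled, and chosen by the bench rather than hidden in the library.
    b.iter(|| nav.read_test().expect("read_test returned an error"))
});
```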

@patrickelectric (Member) commented:

We can merge it as it is, but we should fix that.

@patrickelectric merged commit 47c08c6 into bluerobotics:master on Oct 23, 2023
3 checks passed