Setup automated benchmarks #1653

Closed
wants to merge 7 commits

Conversation

chrisduerr (Member)

To make sure that Alacritty's performance stays consistent, it was
requested in #221 to add automated benchmarks which can catch
performance regressions introduced by pull requests.

This makes use of Travis' webhooks to automatically start the
benchmarking process on a webserver, which is hosted on my personal
VPS for now. The benchmarking code can be found here:
https://github.com/chrisduerr/alacritty-perf
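
For illustration only, a heavily simplified sketch of the server side could look like the following. This is not the actual alacritty-perf code, and `run-benchmarks.sh` is a hypothetical placeholder for the real benchmark runner:

```rust
use std::io::Read;
use std::net::TcpListener;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Accept Travis' webhook notifications on a fixed port.
    let listener = TcpListener::bind("0.0.0.0:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;

        // Read the start of the request. A real server would parse the full
        // HTTP request and verify Travis' payload signature before acting.
        let mut buf = [0u8; 4096];
        let n = stream.read(&mut buf).unwrap_or(0);
        let payload = String::from_utf8_lossy(&buf[..n]);

        // Only benchmark builds that passed CI.
        if payload.contains("passed") {
            // Hypothetical script standing in for the benchmark runner.
            Command::new("./run-benchmarks.sh").spawn()?;
        }
    }
    Ok(())
}
```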

All of Alacritty's benchmarks have been moved to criterion, which is
also what the server uses to measure Alacritty's performance. Any
benchmark added to Alacritty is automatically picked up by the
automated benchmarks, without any additional setup required.
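
For reference, criterion benchmarks are ordinary bench targets, so running `cargo bench` picks up every registered group. The snippet below is a self-contained sketch with a made-up `process` function, not one of Alacritty's actual benchmarks:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Placeholder workload; stands in for whatever Alacritty code path is measured.
fn process(bytes: &[u8]) -> usize {
    bytes.iter().map(|&b| b as usize).sum()
}

fn bench_process(c: &mut Criterion) {
    let input = vec![1u8; 1024];
    c.bench_function("process 1kb", |b| {
        // black_box keeps the compiler from optimizing the input away.
        b.iter(|| process(black_box(&input)))
    });
}

criterion_group!(benches, bench_process);
criterion_main!(benches);
```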

I've gone through a lot of testing, trying out different approaches
to automating Alacritty's benchmarks. That includes xvfb, Rust's
native benchmarks, and criterion.rs, all of which I've tested both
directly on Travis and on my VPS.

The conclusion of my tests was that xvfb with software rendering using
the vtebench tool is infeasible: the rendering is so slow that it
becomes the sole performance bottleneck, and other code changes are
completely drowned out. Using Rust's built-in benchmarks was successful
on my VPS, but they lack dedicated setup methods. While criterion.rs
was far more inconsistent in the beginning, with some slight tweaks it
was possible to get close to the consistency of Rust's built-in
benchmarks while still providing a setup method.

As an example, the results of my testing can be found here:
https://perf.christianduerr.com

Note that the big performance spike was an intentional regression
introduced to test the process. The code change can be found
here:
https://github.com/jwilm/alacritty/pull/1369/files#diff-1130fcb67aac96a3d3e47407ca7a49f1R200

Currently this still uses a patched version of criterion, but I'll try
to send a PR to upstream these changes.

This fixes #221.

The criterion.rs crate allows setup methods for each iteration. This
makes it possible to do more than the traditional Rust bench harness
is capable of.
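
As a sketch of what that enables, criterion's `iter_batched` runs a setup closure before the measured routine and excludes it from the timing; the buffer workload below is just a placeholder:

```rust
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

fn bench_with_setup(c: &mut Criterion) {
    c.bench_function("mutate buffer", |b| {
        b.iter_batched(
            // Setup: allocates a fresh buffer for each batch, not timed.
            || vec![0u8; 4096],
            // Routine: only this closure is measured. Returning the buffer
            // moves its drop out of the measured section as well.
            |mut buf| {
                for byte in &mut buf {
                    *byte = byte.wrapping_add(1);
                }
                buf
            },
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench_with_setup);
criterion_main!(benches);
```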

All existing benches have been ported over and one additional test has
been added.
chrisduerr (Member, Author)

I'm currently working on a different approach for this. Hopefully it will turn out a bit more reliable.

chrisduerr closed this on Apr 14, 2019
Successfully merging this pull request may close these issues: Automate Benchmarks & Run in CI