To keep Alacritty's performance consistent, #221 requested
automated benchmarks that can catch performance regressions in
pull requests.
This makes use of Travis' webhooks to automatically start the
benchmarking process on a webserver I've hosted on my personal
VPS for now. The benchmarking code can be found here:
https://github.com/chrisduerr/alacritty-perf
All Alacritty benchmarks have been moved to criterion, which is also what
the server uses to test Alacritty's performance. Any benchmark
added to Alacritty will automatically be included in the automated
benchmarks, with no additional setup required.
I've done a fair amount of testing to find the best way to
automate Alacritty's benchmarks. This included xvfb,
Rust's built-in benchmarks, and criterion.rs, each tested
both directly on Travis and on my VPS.
The conclusion of my tests was that xvfb with software rendering using
the vtebench tool is infeasible, because the rendering is so slow that
it becomes the sole performance bottleneck and other code changes are
completely ignored. Using Rust's built-in benchmarks was successful on
my VPS; however, they lack dedicated setup methods. While
criterion.rs was far more inconsistent in the beginning, with some slight
tweaks it was possible to get close to Rust's built-in benchmarks while
still providing a setup method.
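The setup method matters because it keeps per-iteration preparation out of the measured time. Criterion exposes this through its batched-iteration API; as a rough illustration of the pattern (not Alacritty's actual benchmark code, and with illustrative names), a stdlib-only sketch looks like this:

```rust
use std::time::{Duration, Instant};

/// Run `routine` for `iters` iterations, rebuilding its input with
/// `setup` before each run; only the routine itself is timed.
fn bench_with_setup<T>(
    iters: u32,
    mut setup: impl FnMut() -> T,
    mut routine: impl FnMut(T),
) -> Duration {
    let mut total = Duration::ZERO;
    for _ in 0..iters {
        let input = setup(); // excluded from the measurement
        let start = Instant::now();
        routine(input);
        total += start.elapsed();
    }
    total
}

fn main() {
    // Example: time summing a vector without counting its construction.
    let elapsed = bench_with_setup(
        100,
        || (0u64..10_000).collect::<Vec<_>>(),
        |v| {
            let sum: u64 = v.iter().sum();
            assert_eq!(sum, 49_995_000);
        },
    );
    println!("measured: {:?}", elapsed);
}
```

With Rust's built-in `#[bench]` there is no equivalent hook, so expensive input construction either pollutes the measurement or has to be hoisted out and shared across iterations.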
As an example, the results of my testing can be found here:
https://perf.christianduerr.com
Note that the big performance spike was an intentional regression
introduced to test the process. The code change can be found
here:
https://github.com/jwilm/alacritty/pull/1369/files#diff-1130fcb67aac96a3d3e47407ca7a49f1R200
Currently this is still using a patched version of criterion, but I'll try
to send a PR to upstream these changes.
This fixes #221.