Automate Benchmarks & Run in CI #221
Comments
Yep, would love to have something like that.
OpenGL testing on Travis et al. is a bit problematic. You might be able to set something up with Xvfb / OSMesa.
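As a rough illustration of the Xvfb route, the benchmark command could be wrapped in a virtual X server so OpenGL clients can start on a headless CI machine. This is only a sketch: the `bench.sh` workload and the exact Xvfb flags are placeholders, not anything from this repository.

```shell
#!/bin/sh
# Hypothetical sketch: build the command CI would run to execute a GL
# benchmark under a virtual framebuffer. "alacritty -e bench.sh" is a
# placeholder workload, not a real script from this project.
headless_cmd() {
    # xvfb-run's -a flag picks a free display number; the screen
    # geometry is arbitrary and only needs to be large enough.
    echo "xvfb-run -a -s '-screen 0 1024x768x24' $*"
}

# Print the command that a CI job would execute:
headless_cmd alacritty -e bench.sh
```

Printing the command rather than executing it keeps the sketch runnable anywhere; a real CI job would execute the string directly with `xvfb-run` installed.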
Just as a note, I've tested out automated benchmarking. I believe the best bet for automatically comparing Alacritty against itself would be to run headless benchmarks which do not exercise any hardware acceleration, which would make it easy to run these on a server. This wouldn't allow comparison against other terminal emulators, but it seems like the best choice to me.
Have you tried using xdummy? I did a bit of research and it seems to be a more recent attempt.
Looking at the performance difference on that page, I don't think this would improve anything. I don't think anything that doesn't just ignore all render calls will be able to process Alacritty's output without GPU acceleration.
Is there any reason why just ignoring all render calls wouldn't work? The benchmarks shouldn't need the actual rendering to be verified. |
In theory there should not be any problem with that. However, I'm not certain that there wouldn't be any complications.
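The "ignore all render calls" idea can be approximated from the host side without touching Alacritty's code: generate a render-heavy escape-sequence stream and discard it, so the measurement covers only producing and transporting the bytes, never drawing them. Everything here is illustrative; the line count and escape sequences are arbitrary stand-ins for a real workload such as vtebench.

```shell
#!/bin/sh
# Hypothetical sketch: time byte generation with output discarded,
# approximating a terminal whose renderer is a no-op.
gen_payload() {
    i=0
    while [ "$i" -lt 1000 ]; do
        # SGR color codes force a receiving terminal to do real parsing.
        printf '\033[31mred\033[0m line %s\n' "$i"
        i=$((i + 1))
    done
}

start=$(date +%s)
gen_payload > /dev/null    # "ignore render calls": bytes go nowhere
end=$(date +%s)
echo "generated 1000 lines in $((end - start))s"
```

In a real headless setup the discard would happen inside the terminal's renderer rather than via `/dev/null`, but the principle is the same: parsing and state updates are exercised while drawing is skipped.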
See https://github.com/alacritty/termbenchbot. A follow-up for more features can be found here: alacritty/termbenchbot#1.
Is it possible to include a shell script to run benchmarks against other terminals that could be automated at CI time to catch speed regressions?
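A minimal version of such a script could loop over several emulators and time the same workload in each, skipping any that aren't installed so it degrades gracefully in CI. The emulator names and the `seq` workload below are placeholders; a real setup would run something like vtebench inside each terminal.

```shell
#!/bin/sh
# Hypothetical sketch of a cross-terminal benchmark loop for CI.
bench_terminal() {
    term=$1; shift
    if ! command -v "$term" >/dev/null 2>&1; then
        echo "$term: skipped (not installed)"
        return 0
    fi
    start=$(date +%s)
    "$term" -e "$@"            # run the workload inside the emulator
    end=$(date +%s)
    echo "$term: $((end - start))s"
}

for term in alacritty xterm urxvt; do
    bench_terminal "$term" sh -c 'seq 1 100000 >/dev/null'
done
```

Wall-clock seconds are a coarse metric; catching regressions reliably would also need repeated runs and a comparison against a stored baseline, which is roughly what termbenchbot automates.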