
Automate Benchmarks & Run in CI #221

Closed
tbillington opened this issue Jan 8, 2017 · 8 comments
Comments

@tbillington

Would it be possible to include a shell script that runs benchmarks against other terminals, which could be automated in CI to catch speed regressions?

@jwilm
Contributor

jwilm commented Jan 8, 2017

Yep, would love to have something like that.

@theduke

theduke commented Jan 8, 2017

OpenGL testing on Travis et al is a bit problematic.

You might be able to set something up with Xvfb or OSMesa.
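As a rough illustration of the Xvfb route, a CI step could look something like the sketch below. This is only a guess at the setup: it assumes xvfb-run (shipped with the Xvfb package), Alacritty, and vtebench are all on PATH, and the vtebench invocation itself is illustrative rather than the exact CLI.

```shell
#!/bin/sh
# Hypothetical CI step: drive a vtebench workload inside Alacritty under a
# virtual framebuffer, so no real display is required. Tool availability
# below is an assumption about the CI environment, hence the guard.
run_headless_bench() {
  if command -v xvfb-run >/dev/null 2>&1 && command -v alacritty >/dev/null 2>&1; then
    # xvfb-run starts a throwaway X server; --auto-servernum picks a free
    # display number so parallel CI jobs don't clash.
    xvfb-run --auto-servernum \
      alacritty -e sh -c 'vtebench alt-screen-random-write'
  else
    echo "headless benchmark skipped: xvfb-run or alacritty not installed"
  fi
}

run_headless_bench
```

As the later comments in this thread note, Xvfb does all rendering in software, so results from a setup like this may be bottlenecked by the X server rather than by the terminal under test.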

@jwilm jwilm changed the title Is it possible to add benchmarks Automate Benchmarks Sep 18, 2018
@chrisduerr
Member

Just as a note, I've tested automated benchmarking using Xvfb, and it was not possible to get any useful results from vtebench's alt-screen-random-write benchmark because Xvfb was bottlenecking Alacritty.

I believe the best bet for automatically comparing Alacritty against itself would be to run headless benchmarks which do not exercise any hardware acceleration, which would make it easy to run these on a server. This wouldn't allow comparison against other terminal emulators, but it seems like the best choice to me.

@jwilm jwilm changed the title Automate Benchmarks Automate Benchmarks & Run in CI Oct 5, 2018
@zacps
Contributor

zacps commented Oct 25, 2018

Have you tried using xdummy? I did a bit of research and it seems to be a more recent attempt.

@chrisduerr
Member

Looking at the performance difference on that page, I don't think this would improve anything.

I don't think anything that doesn't just ignore all render calls will be able to process Alacritty's output without GPU acceleration.

@zacps
Contributor

zacps commented Oct 25, 2018

Is there any reason why just ignoring all render calls wouldn't work? The benchmarks shouldn't need the rendered output to be verified.

@chrisduerr
Member

In theory there should not be any problem with that. However, I'm not certain there wouldn't be any complications.

@chrisduerr
Member

chrisduerr commented Sep 24, 2020

See https://github.com/alacritty/termbenchbot.

A follow-up for more features can be found here: alacritty/termbenchbot#1.
