# vtebench

A tool for generating terminal benchmarks.

## Usage

The general usage pattern is:

```sh
vtebench -w $(tput cols) -h $(tput lines) [-c|-b=BYTES|-t=TERM] <benchmark>
```

The terminal protocol is written to stdout. Redirect the output into a file rather than benchmarking with vtebench directly: vtebench is written for ease of understanding, not performance.

To generate the most basic benchmarks, the `generate-benchmarks.sh` script can be used. It should be run from the project's root directory and will write the benchmark files to `target/benchmarks`.

After the files have been generated, performance can be measured with `perf`, or with `hyperfine` on macOS or Windows:

```sh
perf stat -r 10 cat target/benchmarks/alt-screen-random-write.vte
hyperfine --show-output "cat target/benchmarks/scrolling.vte"
```
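A generated `.vte` file is nothing more than raw terminal protocol bytes, so "running" a benchmark is simply replaying the file with `cat`. As a toy illustration (hand-crafted bytes under `/tmp`, not actual vtebench output):

```shell
# Toy stand-in for a generated benchmark file: raw escape codes plus payload.
printf '\033[2J\033[H' > /tmp/toy.vte       # clear screen, move cursor home
printf 'hello vtebench\n' >> /tmp/toy.vte   # some printable payload
cat /tmp/toy.vte                            # replaying the file drives the terminal
```

Real benchmark files work the same way, just with far more data.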

Great instructions on how to reliably produce consistent benchmark results can be found in the LLVM documentation. It is usually not necessary to pin execution to specific cores, but the other recommendations will greatly improve consistency.

### The `-b`|`--bytes` flag

It's important to generate enough output to exercise the terminal. If a test takes only 1 ms to complete, the result has no statistical significance. As a guideline, `time cat <script>` should take at least one second; how much data is needed to get there varies greatly between terminals.
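As a back-of-the-envelope sizing sketch (the throughput figure is an assumption, not a measured value): if the terminal drains roughly 50 MB/s and the benchmark should run for about two seconds, the byte count to pass to `-b` follows directly:

```shell
# Hypothetical numbers: substitute the throughput your terminal actually sustains.
throughput=$((50 * 1024 * 1024))             # assumed bytes per second
target_secs=2                                # desired minimum runtime
echo "suggested -b value: $((throughput * target_secs))"
```

which prints `suggested -b value: 104857600`, i.e. about 100 MB of output.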

## Contributing

If you wish to add a new test, do the following:

1. Add a new function in `bench.rs` following the same pattern as an existing function.
2. Add a subcommand to run it in the `Benchmark` enum within `cli.rs`.
3. Handle the subcommand in `main.rs`.

If there are escape codes that are not yet supported on `Context`, it is quite helpful to consult the terminfo man page and cross-reference it with the `terminfo` crate's `capability` submodule documentation; each capability name has a corresponding type in that submodule.
