To run the benchmarks, first clone and install QuTiP (this plugin is meant to run with QuTiP v5, which is not released yet):
```
pip install git+https://email@example.com
```
You will then want to install this repository by cloning it from GitHub.
The benchmarks are currently of two types: basic operations, such as matrix multiplication or addition, and solvers.
The operations are benchmarked using 3 parameters:
- Density: either sparse or dense. Matrices and vectors are created using
`qutip.rand_herm(size, density=1/size)` for sparse operators and
`density=1` for dense ones. Sparse kets are created using
- Size: 2**N with N going from 1 to 9 (2 to 512).
- dtype (or coeftype for QobjEvo operations).
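As a rough sketch, the parameter grid above can be enumerated in plain Python. The variable names here are illustrative only; in the repository these parameters are defined as pytest fixtures:

```python
# Illustrative sweep of the operation-benchmark parameters.
# "densities" mirrors the sparse/dense fixture; "sizes" spans 2**1 .. 2**9.
densities = ["sparse", "dense"]
sizes = [2**n for n in range(1, 10)]  # 2, 4, ..., 512

# Every operation benchmark is run once per (density, size) pair.
grid = [(d, s) for d in densities for s in sizes]
print(len(grid))  # 18 combinations
```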
Solvers are benchmarked using 2 fixtures:
- Size: 2**N with N going from 2 to 7 (4 to 128).
- The models used are common ones, such as a cavity, the Jaynes-Cummings model, and a qubit spin chain.
Running the benchmarks
To run the benchmarks, use `python -m qutip_benchmarks.cli.run_benchmarks` from the root of the repository.
This will store the resulting data and figures in the folder
The benchmarks consist of a set of operations, such as matrix multiplication,
and solvers, such as `mesolve`.
The benchmarks run the same operations for different Hermitian matrix sizes,
which can either be dense or sparse (tridiagonal). The script also includes
a few other options.
You can get a description of the arguments with
`python benchmarks/benchmarks.py --help`, or
see the Pytest documentation
and pytest-benchmark documentation
for all command line flags.
- `python -m qutip_benchmarks.cli.run_benchmarks -k "test_linear_algebra" --collect-only`: shows all the available benchmarks. Useful for filtering them with the `-k` flag.
- `python -m qutip_benchmarks.cli.run_benchmarks -k "matmul"`: runs only the benchmarks whose names contain `matmul`.
Viewing the benchmarks
The default method to view the benchmarks is:

`python -m qutip_benchmarks.cli.view_benchmarks`

This will plot the benchmarks in the same manner as on the QuTiP benchmark website.
This script accepts 4 flags:
| Flag | Description |
|------|-------------|
| | By default separates nightly and scaling into |
| | Path to the folder in which the benchmarks are stored |
| | Only plot scaling (time vs matrix size) from the latest benchmark file |
| | Plot the performance over time using results from all the benchmark files |
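A scaling plot of the kind described above (time vs matrix size) can also be reproduced by hand with matplotlib. This sketch uses fabricated timing data purely for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

sizes = [2**n for n in range(1, 10)]   # 2 .. 512, as in the benchmarks
times = [1e-7 * s**2 for s in sizes]   # fake quadratic timings, for illustration

fig, ax = plt.subplots()
ax.loglog(sizes, times, marker="o")
ax.set_xlabel("matrix size")
ax.set_ylabel("time (s)")
ax.set_title("scaling (illustrative data)")
fig.savefig("scaling.png")
```

A log-log plot is the natural choice here because both the sizes and the timings span several orders of magnitude.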
If you wish to have more control over what to plot, you can import view_utilities.py into a Python script and use the available functions.
These functions all contain an extensive description of their accepted parameters and outputs in
benchmarks/view_utilities.py, so only a brief description is given here; you can also view a use case example in this tutorial.
| Function | Description |
|----------|-------------|
| | Accepts a path to one folder and creates a dataframe with only the information required to produce the plots. |
| | Accepts the path to the folder containing the results of the benchmarks you have run and returns a list of paths to each file contained within, ordered by date. |
| | Accepts a list of paths (produced by |
| | Accepts the dataframe, sorts the information by operation, and allows you to filter out certain operations if you do not wish to plot them. It outputs a dictionary with the operation as the key and a dataframe of the corresponding information as the value. |
| | Accepts the dictionary produced by |
| | Accepts the dictionary produced by |
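The sort-and-filter step in the table above can be sketched with pandas. The column names, operations, and the helper's name are invented here for illustration; the real implementation lives in benchmarks/view_utilities.py:

```python
import pandas as pd

# Toy benchmark results: one row per (operation, size) measurement.
df = pd.DataFrame({
    "operation": ["matmul", "matmul", "add", "expm"],
    "size": [2, 4, 2, 2],
    "time": [1e-5, 3e-5, 8e-6, 2e-4],
})

def split_by_operation(df, exclude=()):
    """Group rows by operation, dropping any excluded operations."""
    return {op: g for op, g in df.groupby("operation") if op not in exclude}

groups = split_by_operation(df, exclude=("expm",))
print(sorted(groups))  # ['add', 'matmul']
```

Each value in the resulting dictionary is a dataframe ready to be passed to a plotting function, one operation per plot.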
We are proud to be affiliated with Unitary Fund and NumFOCUS. QuTiP development is supported by Nori's lab at RIKEN, by the University of Sherbrooke, and by Aberystwyth University, among other supporting organizations. The update of this project was sponsored by Google Summer of Code 2022.