

Quantum Toolbox in Python

QuTiP benchmarks

QuTiP benchmarks are run nightly on GitHub Actions. Two kinds of plots are produced: historical and scaling.

Historical plots show the changes in benchmark performance over the last month.

Scaling plots show the latest benchmark performance as a function of the size of the data. The data might be a Qobj, a QobjEvo, or a system Hamiltonian.

Where possible, we compare QuTiP performance to the equivalent raw NumPy and SciPy operations.
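As an illustrative sketch of what such a comparison measures (this is not the actual qutip-benchmark code), the snippet below times a matrix-vector multiplication for a dense NumPy array and a sparse SciPy matrix as the dimension grows, which is the kind of data a scaling plot is built from:

```python
# Hypothetical scaling measurement: time op @ vec for dense NumPy and
# sparse SciPy operators at increasing dimensions. The dimensions,
# density, and repeat counts are illustrative choices, not the values
# used by qutip-benchmark.
import timeit

import numpy as np
import scipy.sparse as sp


def time_matvec(op, vec, repeats=5, number=10):
    """Return the best wall-clock time (seconds) for `number` calls of op @ vec."""
    return min(timeit.repeat(lambda: op @ vec, number=number, repeat=repeats))


results = {}
for n in (64, 128, 256):
    rng = np.random.default_rng(seed=n)
    dense = rng.standard_normal((n, n))
    sparse = sp.random(n, n, density=0.05, random_state=rng, format="csr")
    vec = rng.standard_normal(n)
    results[n] = {
        "numpy_dense": time_matvec(dense, vec),
        "scipy_sparse": time_matvec(sparse, vec),
    }

for n, timings in results.items():
    print(n, timings)
```

Plotting the timings against `n` for each backend gives a scaling curve like the ones on this site; the real benchmarks do the equivalent with QuTiP's `Qobj` operations alongside the raw NumPy and SciPy ones.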

You can view the plots by clicking on the links below or using the navigation bar above.

Historical plots

Scaling plots

Benchmarking process

Benchmarking on GitHub Actions comes with some challenges. GitHub Actions runners are randomly assigned, so the underlying hardware is not consistent. We account for this by showing a separate line on each plot for each of the most commonly provided CPUs.

The downside is that the plots are more cluttered, but the upside is that they give a clearer picture of performance across a range of typical CPUs used in cloud data centres.
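A minimal sketch of how results from randomly assigned runners can be grouped into one line per CPU model (the record fields and CPU names here are assumptions for illustration, not the actual qutip-benchmark schema):

```python
# Hypothetical benchmark records from different GitHub Actions runners.
# Grouping by the reported CPU model yields one time series per model,
# which becomes one line on a historical plot.
from collections import defaultdict

records = [
    {"cpu": "Intel Xeon Platinum 8272CL", "date": "2023-05-01", "time": 1.9e-4},
    {"cpu": "Intel Xeon E5-2673 v4", "date": "2023-05-01", "time": 2.4e-4},
    {"cpu": "Intel Xeon Platinum 8272CL", "date": "2023-05-02", "time": 1.8e-4},
]

by_cpu = defaultdict(list)
for rec in records:
    by_cpu[rec["cpu"]].append((rec["date"], rec["time"]))

for cpu, series in sorted(by_cpu.items()):
    print(cpu, series)  # one series (plot line) per CPU model
```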


The benchmarks are maintained in the qutip-benchmark repository. Suggestions for new benchmarks and bug reports are welcome (please open an issue or pull request).

If you or your company would like to support the benchmarking effort, QuTiP would welcome contributions towards running benchmarks. You could offer to pay for benchmark runs on additional cloud instances (e.g. machines with better CPUs, more RAM, or GPUs), or offer to run benchmarks on dedicated hardware you have available. If this interests you, please email the qutip-admin mailing list.