Results of the Python performance benchmark suite computed on the speed-python server: compressed perf JSON files.
Files are distributed under the MIT license.
2017-04-13-pypy: PyPy result, pypy2_571_warmups.json.gz file
- PyPy2 5.7.1 (revision 1aa2d8e03cdf): 64-bit, static binary
- perf 1.2 (dev), performance 0.5.5 (dev)
- performance hacked to run exactly 10 worker processes, each computing 250 values with 0 warmups
- Date: 2017-04-13 22:19 - 2017-04-14 09:01 (10h45)
- 67 benchmarks
2017-04-12-cpython: CPython results computed since 2017-04-12. CPython 2.7 is now configured with
--enable-unicode=ucs4; uses performance 0.5.5 (dev).
2017-03-31-cpython: CPython results computed since 2017-03-31. CPython compiled with LTO and PGO. CPython now uses a Git repository.
2016-10-01_01-17-master-78a111c7d867.json.gz: oldest file (start: 2017-03-31), uses performance 0.5.4 (dev) and perf 1.1 (dev)
2016-12-28-cpython: CPython results computed over the period 2016-12-28..2017-02-10 (2 months). CPython compiled with LTO and PGO. CPython used a Mercurial repository.
2016-12-27_17-59-default-fa9933bf4ea0.json.gz: oldest file (start: 2016-12-28), uses performance 0.5.0 and perf 0.9.0
2017-02-10_00-20-default-e91ec62da088.json.gz: newest file (start: 2017-02-10), uses performance 0.5.1 and perf 0.9.3
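The compressed result files above are plain JSON compressed with gzip, so they can be opened with the standard library alone. A minimal sketch (the perf module also provides its own richer loader, e.g. perf.BenchmarkSuite.load; the "benchmarks" key shown in the comment is an assumption about the perf JSON layout):

```python
import gzip
import json

def load_perf_json(path):
    """Load one gzip-compressed perf JSON result file.

    Minimal standard-library sketch: decompress, then parse the JSON.
    The resulting dict is expected to contain the benchmark runs and
    their metadata (e.g. a "benchmarks" list in the perf JSON format).
    """
    with gzip.open(path, "rt", encoding="utf-8") as fp:
        return json.load(fp)
```

For anything beyond a quick peek, loading the file through the perf module itself is preferable, since it validates the format version.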
Example of filename: 2016-01-01_10-25-default-0b8ff5216100.json.gz, where:
2016-01-01: date of the commit (year-month-day)
10-25: time of the commit with 24-hour format (hour-minute)
default: Git or Mercurial branch name
0b8ff5216100: commit identifier
.json.gz: File extension, JSON compressed by gzip
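The naming scheme above can be split mechanically. A small sketch; the regular expression is an assumption derived from the pattern described (it also assumes branch names contain no hyphen, which holds for "default" and "master"):

```python
import re

# Assumed pattern: DATE_TIME-BRANCH-COMMIT.json.gz
# e.g. 2016-12-27_17-59-default-fa9933bf4ea0.json.gz
FILENAME_RE = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2})"   # date of the commit
    r"_(?P<time>\d{2}-\d{2})"         # time of the commit (24-hour)
    r"-(?P<branch>[^-]+)"             # branch name (assumed hyphen-free)
    r"-(?P<commit>[0-9a-f]+)"         # commit identifier
    r"\.json\.gz$"
)

def parse_result_filename(name):
    """Return the date, time, branch and commit encoded in a result filename."""
    match = FILENAME_RE.match(name)
    if match is None:
        raise ValueError("unexpected filename: %r" % name)
    return match.groupdict()
```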
Use the perf stats command (python3 -m perf stats FILENAME) to get the start/end dates when the benchmark was run.
- 2017-04-12: CPython 2.7 is now configured with --enable-unicode=ucs4
- 2017-03-31: old results removed, new CPython results use Git commits instead of Mercurial commits
- 2017-03-17: perf 1.0 released
- 2017-01: old results computed without PGO removed (unstable because of code placement), new CPython results using PGO
- 2016-12: speed-python server upgraded to Ubuntu 16.04
- 2016-11-04: old results computed with the benchmarks project removed, new CPython results (using LTO but not PGO) computed with the new performance project
- 2016-08-24: performance 0.1 released
- 2016-06-02: perf 0.1 released
- 2016: speed.python.org uses the benchmarks project, created in December 2008 by Collin Winter and Jeffrey Yasskin for the Unladen Swallow project. The project was hosted at https://hg.python.org/benchmarks until Feb 2016
Specification of the speed-python server used to run benchmarks.
- 2 HP DL380 G7 Intel® Xeon® X5680 (3.33GHz/6-core/130W/12MB) FIO Processor Kit
- 2 physical CPUs of 6 cores each (12 threads with Hyper-Threading): 24 logical CPUs in total
- Intel(R) Xeon(R) CPU X5680 @ 3.33GHz
- Memory: 4x 4GB (1x4GB) Dual Rank x4 PC3-10600 (DDR3-1333) Registered CAS-9 Memory Kit
- OS: Ubuntu 16.04.1 LTS
- Kernel: GNU/Linux 4.4.0-47-generic
Benchmarks are run on NUMA node 1 (the second CPU), with CPU isolation configured through Linux kernel options.
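The exact kernel option values are not listed here. A typical setup for isolating the cores of the second CPU might look like the following sketch; the core numbers (6-11) and the output filename are hypothetical and depend on the machine's topology:

```shell
# Hypothetical example: isolate cores 6-11 (second CPU) from the
# scheduler so only explicitly pinned tasks run there.
# Added to the kernel command line (e.g. GRUB_CMDLINE_LINUX):
#   isolcpus=6-11

# Inspect the NUMA topology to find which cores belong to node 1:
numactl --hardware

# Run benchmarks pinned to NUMA node 1 (CPU and memory):
numactl --cpunodebind=1 --membind=1 python3 -m performance run -o result.json
```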
- speed.python.org: CPython benchmark results
- speed.pypy.org: PyPy benchmark results
- speed.pyston.org: Pyston benchmark results
- 2017-04-06: [pyperformance] CPython results, 2017: CPython results with an analysis of the most significant optimizations and slowdowns.
- 2017-03-29: speed.python.org results: March 2017, screenshots of the most interesting benchmarks before removing CPython results using Mercurial commits.
- 2017-01-02: [Speed] speed.python.org: recent issues to run benchmarks. Old results were removed, benchmarks now run with LTO+PGO on Ubuntu 16.04.
- 2016-11-04: [Speed] New benchmarks results on speed.python.org. Benchmarks run on Ubuntu 14.04 with LTO but without PGO. Use NUMA node 1 with CPU isolation, Turbo Boost disabled on the isolated CPUs, fixed CPU frequency (3.3 GHz). Results were lost (removed without backup).