
Engines

These are the official, normalized benchmark results, generated using the benchmark.py and sloccount tools. Package SLoC counts source lines for the entire distributed package; Engine SLoC counts only the template engine portion of it.

| Engine | Version | Python | Technology | Package SLoC | Engine SLoC |
| --- | --- | --- | --- | --- | --- |
| bottle | 0.12.9 | 2 | 1, 2 | 2,736 | 2,736 |
| chameleon | 2.24 | 2, 3 | 1, 2, 3 | 17,604 | 17,604 |
| cheetah | 2.4.4 | C, 2 | 1, 2, 3 | 9,403 | 9,403 |
| cinje | 1.0 | 2, 3 | 4 | 965 | 965 |
| django | 1.8.8 | 2, 3 | 1, 2, 3 | 73,757 | 3,656 |
| jinja2 | 2.8 | 2, 3 | 1, 2, 3 | 6,438 | 6,438 |
| mako | 1.0.3 | 2, 3 | 1, 2, 3 | 5,026 | 5,026 |
| tenjin | 1.1.1 | 2, 3 | 1, 2 | 1,374 | 1,374 |
| tornado | 4.3 | C, 2, 3 | 1 | 22,839 | 542 |
| web2py | 2.1.1 | 2 | 1, 2, 3 | 54,866 | 413 |
| wheezy.template | 0.1.167 | 2, 3 | 1, 2, 3 | 1,039 | 1,039 |
| genshi | 0.7.1 | 2, 3 | 1, 3, 4 | 16,217 | 14,623 |
  1. Class-Based Usage
  2. Regular Expressions
  3. Complete Parser/Compiler/AST
  4. Streaming String Manipulation

Big Table

The big table test emits a 1,000-row by 10-column table generated from a concrete list of dictionaries. Results are broken down by runtime environment; not all engines are available under every version of Python. Benchmarks are run with MarkupSafe optimizations in place, as this would represent a typical production deployment environment.
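
As a point of reference, here is a minimal sketch of what that workload might look like in plain Python, in the spirit of the list_append baseline that appears in the result tables below. The row data, names, and structure are illustrative assumptions, not the actual benchmark.py code.

```python
# Hypothetical reconstruction of the "big table" workload: 1,000 rows of 10
# columns, rendered by appending escaped cells to a list and joining at the end.
# This mirrors the spirit of the list_append baseline, not the real benchmark.py.
from markupsafe import escape  # MarkupSafe, as used by the benchmarks

rows = [dict(a=1, b=2, c=3, d=4, e=5, f=6, g=7, h=8, i=9, j=10)
        for _ in range(1000)]

def bigtable_list_append(rows):
    parts = []
    append = parts.append
    append('<table>\n')
    for row in rows:
        append('<tr>')
        for value in row.values():
            append('<td>')
            append(str(escape(value)))
            append('</td>')
        append('</tr>\n')
    append('</table>\n')
    return ''.join(parts)

html = bigtable_list_append(rows)
```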

Variants marked unsafe have escaping unceremoniously disabled. Variants marked all and first represent streaming results rather than monolithic buffered ones: first counts only the time to the first iteration, or "responsiveness", whereas all counts total iteration time. A sketch of the distinction follows.
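
The following is a hedged sketch of how the first and all measurements differ for a streaming renderer, assuming the template compiles down to a generator that yields chunks; the helper names are illustrative, not those used by benchmark.py.

```python
import time

def render_stream(rows):
    # A streaming renderer yields chunks as they are produced instead of
    # buffering the whole document (escaping omitted, as in the unsafe variants).
    yield '<table>\n'
    for row in rows:
        yield '<tr>' + ''.join('<td>{}</td>'.format(v) for v in row.values()) + '</tr>\n'
    yield '</table>\n'

def time_first_and_all(rows):
    # "first" counts only the time until the first chunk arrives (responsiveness);
    # "all" counts the time to exhaust the generator (total iteration).
    start = time.perf_counter()
    stream = render_stream(rows)
    next(stream)
    first = time.perf_counter() - start
    for _ in stream:
        pass
    total = time.perf_counter() - start
    return first, total
```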

Python 2.7

| Engine | msec/gen | gen/sec | total calls | unique calls |
| --- | --- | --- | --- | --- |
| bottle | 33.36 | 29.97 | 163,016 | 23 |
| chameleon | 31.12 | 32.13 | 161,031 | 24 |
| cheetah | 71.98 | 13.89 | 225,019 | 22 |
| cinje | 24.17 | 41.38 | 84,010 | 15 |
| cinje_fancy_all | 25.14 | 39.77 | 88,038 | 24 |
| cinje_fancy_first | 0.03 | 36,805.05 | 92 | 16 |
| cinje_flush_all | 24.70 | 40.49 | 84,030 | 15 |
| cinje_flush_first | 0.03 | 38,775.11 | 92 | 16 |
| cinje_unsafe | 5.87 | 170.38 | 14,010 | 11 |
| jinja2 | 19.72 | 50.71 | 60,018 | 26 |
| list_append | 11.37 | 87.95 | 93,007 | 10 |
| list_extend | 10.81 | 92.54 | 53,007 | 10 |
| mako | 18.81 | 53.17 | 93,035 | 36 |
| tenjin | 20.30 | 49.27 | 123,010 | 15 |
| tenjin_unsafe | 6.57 | 152.11 | 13,010 | 11 |
| tornado | 34.28 | 29.17 | 233,021 | 23 |
| web2py | 72.87 | 13.72 | 295,015 | 20 |
| wheezy.template | 12.63 | 79.17 | 93,009 | 12 |

PyPy 4.0.0 (2.7.10)

As a note, some C optimizations could not be compiled for this environment. Cheetah is especially harmed by this.

| Engine | msec/gen | gen/sec | total calls | unique calls |
| --- | --- | --- | --- | --- |
| chameleon | 9.61 | 104.07 | 193,036 | 25 |
| cheetah | 23.41 | 42.71 | 755,039 | 33 |
| cinje | 11.91 | 83.97 | 153,010 | 16 |
| cinje_fancy_all | 9.99 | 100.12 | 157,038 | 25 |
| cinje_fancy_first | 0.02 | 44,402.96 | 171 | 25 |
| cinje_flush_all | 9.36 | 106.86 | 153,030 | 16 |
| cinje_flush_first | 0.02 | 45,679.63 | 160 | 17 |
| cinje_unsafe | 8.34 | 119.87 | 13,010 | 10 |
| jinja2 | 11.32 | 88.35 | 123,018 | 27 |
| list_append | 6.25 | 160.02 | 93,007 | 10 |
| list_extend | 7.93 | 126.08 | 53,007 | 10 |
| mako | 7.43 | 134.58 | 153,038 | 40 |
| tenjin | 9.58 | 104.34 | 123,010 | 15 |
| tenjin_unsafe | 6.57 | 152.10 | 13,010 | 11 |
| tornado | 11.74 | 85.18 | 233,021 | 23 |
| wheezy.template | 6.45 | 155.12 | 93,009 | 12 |

Python 3.4

| Engine | msec/gen | gen/sec | total calls | unique calls |
| --- | --- | --- | --- | --- |
| chameleon | 33.09 | 30.22 | 182,033 | 25 |
| cinje | 24.64 | 36.18 | 84,011 | 16 |
| cinje_fancy_all | 26.46 | 37.79 | 88,038 | 24 |
| cinje_fancy_first | 0.03 | 35,651.58 | 104 | 24 |
| cinje_flush_all | 25.98 | 38.49 | 84,031 | 16 |
| cinje_flush_first | 0.03 | 38,651.58 | 93 | 17 |
| cinje_unsafe | 6.32 | 158.23 | 14,011 | 12 |
| django | 438.04 | 2.28 | 1,440,080 | 62 |
| jinja2 | 20.26 | 49.36 | 60,020 | 27 |
| list_append | 26.17 | 38.21 | 103,008 | 12 |
| list_extend | 26.01 | 38.44 | 63,008 | 12 |
| mako | 19.38 | 51.61 | 93,036 | 37 |
| tenjin | 19.02 | 52.58 | 123,012 | 16 |
| tenjin_unsafe | 6.47 | 154.47 | 13,012 | 12 |
| tornado | 58.39 | 17.13 | 353,023 | 23 |
| wheezy.template | 26.69 | 37.46 | 103,010 | 14 |

Python 3.5

| Engine | msec/gen | gen/sec | total calls | unique calls |
| --- | --- | --- | --- | --- |
| cinje | 39.12 | 25.56 | 154,011 | 18 |
| cinje_fancy_all | 40.14 | 24.91 | 158,038 | 24 |
| cinje_fancy_first | 0.04 | 25,150.49 | 174 | 26 |
| cinje_flush_all | 39.07 | 25.60 | 154,031 | 18 |
| cinje_flush_first | 0.04 | 25,497.59 | 163 | 19 |
| cinje_unsafe | 6.97 | 143.47 | 14,011 | 12 |
| list_append | 24.92 | 40.13 | 103,008 | 12 |
| list_extend | 24.94 | 40.10 | 63,008 | 12 |
| tornado | 53.70 | 18.62 | 353,023 | 23 |

Python 3.5 Nuitka

| Engine | msec/gen | gen/sec | total calls | unique calls |
| --- | --- | --- | --- | --- |
| cinje | 32.22 | 31.04 | N/A | N/A |
| cinje_fancy_all | 34.22 | 29.22 | N/A | N/A |
| cinje_fancy_first | 0.04 | 24,460.98 | N/A | N/A |
| cinje_flush_all | 31.47 | 31.78 | N/A | N/A |
| cinje_flush_first | 0.03 | 30,927.69 | N/A | N/A |
| cinje_unsafe | 6.98 | 143.30 | N/A | N/A |
| list_append | 24.88 | 40.20 | N/A | N/A |
| list_extend | 23.59 | 42.39 | N/A | N/A |
| tornado | 45.40 | 22.03 | N/A | N/A |

Python 3.7

| Engine | msec/gen | gen/sec | total calls | unique calls |
| --- | --- | --- | --- | --- |
| bottle | 23.71 | 42.17 | 133,017 | 21 |
| chameleon | 43.44 | 23.02 | 214,043 | 26 |
| cheetah3 | 103.86 | 9.63 | 235,020 | 24 |
| cinje | 20.27 | 49.33 | 82,011 | 14 |
| cinje_fancy_all | 21.19 | 47.19 | 86,038 | 22 |
| cinje_fancy_first | 0.02 | 43,473.55 | 102 | 22 |
| cinje_flush_all | 20.68 | 48.35 | 82,031 | 14 |
| cinje_flush_first | 0.02 | 47,937.54 | 91 | 15 |
| cinje_unsafe | 5.92 | 168.86 | 12,011 | 10 |
| genshi | 378.40 | 2.64 | 1,519,217 | 72 |
| jinja2 | 17.53 | 57.06 | 61,022 | 29 |
| list_append | 24.96 | 40.07 | 103,008 | 12 |
| list_extend | 21.46 | 46.60 | 63,008 | 12 |
| mako | 20.59 | 48.57 | 93,036 | 37 |
| tenjin | 19.98 | 50.04 | 123,012 | 16 |
| tenjin_unsafe | 8.93 | 111.98 | 13,012 | 12 |
| tornado | 60.32 | 16.58 | 353,023 | 23 |

[Recommendation from the above: stop naming your template engine like it's fast. Cheetah and Tornado are two of the worst performers.]

Python 3.8

Note: these results were generated on a desktop workstation, whereas the above were generated on a workstation-class notebook. These results may, as a whole, greatly exceed prior ones.

| Engine | msec/gen | gen/sec | total calls | unique calls |
| --- | --- | --- | --- | --- |
| bottle | 14.65 | 68.27 | 133,017 | 21 |
| chameleon | 28.47 | 35.13 | 181,037 | 23 |
| cheetah3 | 73.21 | 13.66 | 245,020 | 23 |
| cinje | 17.65 | 56.67 | 82,011 | 14 |
| cinje_fancy_all | 17.22 | 58.07 | 86,038 | 22 |
| cinje_fancy_first | 0.02 | 52,769.31 | 102 | 22 |
| cinje_flush_all | 16.51 | 60.55 | 82,031 | 14 |
| cinje_flush_first | 0.02 | 55,385.15 | 91 | 15 |
| cinje_unsafe | 4.01 | 249.44 | 12,011 | 10 |
| genshi | 233.91 | 4.28 | 1,519,211 | 71 |
| jinja2 | 21.23 | 47.11 | 121,043 | 36 |
| json.dumps | 2.32 | 430.61 | 11 | 11 |
| list_append | 9.48 | 105.51 | 113,008 | 11 |
| list_extend | 8.26 | 121.09 | 73,008 | 11 |
| mako | 15.12 | 66.13 | 93,044 | 41 |
| tenjin | 12.48 | 80.13 | 123,012 | 16 |
| tenjin_unsafe | 6.23 | 160.58 | 13,012 | 12 |
| tornado | 35.08 | 28.50 | 353,023 | 23 |
| wheezy_template | 9.33 | 107.24 | 113,010 | 13 |

Note: the json.dumps results may look a little odd. First, there are virtually no function calls involved, because the work is handled by C code in CPython's standard library. Second, despite that, its performance is atrocious.
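
For comparison, a minimal sketch of what the json.dumps entry measures, reusing the hypothetical row data from the earlier big table sketch; this is an illustration rather than the actual harness.

```python
import json

# The same hypothetical 1,000-row list of dictionaries from the big table sketch.
rows = [dict(a=1, b=2, c=3, d=4, e=5, f=6, g=7, h=8, i=9, j=10)
        for _ in range(1000)]

# A single call into CPython's C-accelerated json module serializes the whole
# structure, which is why almost no Python-level calls show up in the profile.
payload = json.dumps(rows)
```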

User-Contributed Performance

Community members occasionally test template rendering performance in non-Python engines. These statistics are not normalized (i.e. not useful for relative comparison) and are based on hand-translations of the bigtable suite into forms consumable by those third-party packages.

| Engine | Hardware | Version | msec/gen | gen/sec | vs. py3.4 |
| --- | --- | --- | --- | --- | --- |
| Mithril | Intel i5-2500 | 0.2.5 | 260 | 3.85 | – 9,269x |