Commit
#12154 Setup minimal benchmarks using codspeed.io (#12161)
itamarst committed May 8, 2024
2 parents 6b818b1 + 4643020 commit 7b0676b
Showing 5 changed files with 144 additions and 0 deletions.
23 changes: 23 additions & 0 deletions .github/workflows/test.yaml
@@ -290,7 +290,29 @@ jobs:
          name: ${{ matrix.python-version || env.DEFAULT_PYTHON_VERSION }}-${{matrix.job-name || 'default-tests' }}
          fail_ci_if_error: true

  benchmarks:
    runs-on: ubuntu-22.04

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '${{ env.DEFAULT_PYTHON_VERSION }}'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install . pytest pytest-codspeed
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v2
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          # codspeed runs this command under a CPU emulator, so it's super-slow,
          # so we try to just do the benchmarks and nothing else.
          run: python -m pytest --codspeed benchmarks/

static-checks:
runs-on: ubuntu-22.04
@@ -408,6 +430,7 @@ jobs:
    runs-on: ubuntu-latest
    # Here should be the list of all the other jobs defined in this file.
    needs:
      - benchmarks
      - testing
      - narrativedocs
      - apidocs
24 changes: 24 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,24 @@
# Benchmarks, to be run by codspeed.io in CI

Benchmarks are run using `pytest`: `pytest benchmarks/`.
This is unlike normal Twisted tests, which are run with `trial`.

## Running benchmarks as part of CI

In CI, we install `pytest-codspeed` and run the benchmarks using GitHub Actions.
The tests are executed on GitHub VMs and the reports are sent to the codspeed.io cloud.

Note that as of mid-2024, codspeed.io uses a simulated CPU (Cachegrind) to run tests, so the performance measurements are not suitable for optimizing low-level compiled code.

## Running benchmarks locally

You can run benchmarks locally by installing `pytest-benchmark` and then running `pytest benchmarks/`.
The results are specific to your computer, but they are useful for local before/after comparisons.
`pytest-codspeed` itself outputs nothing when run locally, at least at the time of writing (May 2024), so `pytest-benchmark` is the better choice for local runs.

## Writing benchmarks

See the [Codspeed documentation](https://docs.codspeed.io/benchmarks/python).

Note that the `@pytest.mark.benchmark` style of benchmark doesn't work with `pytest-benchmark`, so you should not use it.
Instead, use the `benchmark` pytest fixture (i.e. accept an argument with that name in your test function).
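
As a minimal sketch (illustrative only, not one of Twisted's real benchmarks; it assumes `pytest-benchmark` or `pytest-codspeed` is installed so the `benchmark` fixture is available):

```python
# Illustrative sketch only; not part of the Twisted benchmark suite.


def test_join_many_strings(benchmark):
    """Benchmark joining many short byte strings."""
    pieces = [b"x"] * 10_000

    def go():
        b"".join(pieces)

    # The ``benchmark`` fixture (from pytest-benchmark locally, or
    # pytest-codspeed in CI) calls ``go`` repeatedly and records timings.
    benchmark(go)
```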
40 changes: 40 additions & 0 deletions benchmarks/test_web_client.py
@@ -0,0 +1,40 @@
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.

"""
Benchmarks for C{HTTP11ClientProtocol}.
"""

from twisted.internet.testing import StringTransport
from twisted.web._newclient import HTTP11ClientProtocol, Request
from twisted.web.http_headers import Headers

# A canned server response; the blank line separates the headers from the body.
RESPONSE = """HTTP/1.1 200 OK
Host: blah
Foo: bar
Gaz: baz
Content-length: 3

abc""".replace(
    "\n", "\r\n"
).encode(
    "utf-8"
)


def test_http_client_small_response(benchmark):
    """Measure the time to run a simple HTTP 1.1 client request."""

    def go():
        protocol = HTTP11ClientProtocol()
        protocol.makeConnection(StringTransport())
        request = Request(
            b"GET", b"/foo/bar", Headers({b"Host": [b"example.com"]}), None
        )
        response = protocol.request(request)
        protocol.dataReceived(RESPONSE)
        result = []
        response.addCallback(result.append)
        # The response Deferred should have fired once the canned bytes were parsed.
        assert result

    benchmark(go)
57 changes: 57 additions & 0 deletions benchmarks/test_web_server.py
@@ -0,0 +1,57 @@
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.

"""
Benchmarks for the C{twisted.web} server.
"""

from twisted.internet.testing import StringTransport
from twisted.web import resource, server


class Data(resource.Resource):
    """
    This is a static, in-memory resource.
    """

    isLeaf = True

    def getChild(self, name):
        return self

    def __init__(self, data, type):
        resource.Resource.__init__(self)
        self.data = data
        self.data_len = b"%d" % (len(self.data),)
        self.type = type

    def render_GET(self, request):
        request.setHeader(b"content-type", self.type)
        request.setHeader(b"content-length", self.data_len)
        return self.data


def test_http11_server_empty_request(benchmark):
    """Benchmark of handling a bodyless HTTP/1.1 request."""
    data = Data(b"This is a result hello hello" * 4, b"text/plain")
    factory = server.Site(data)

    def go():
        transport = StringTransport()
        protocol = factory.buildProtocol(None)
        protocol.makeConnection(transport)
        # The blank line after the headers terminates the request.
        protocol.dataReceived(
            b"""\
GET / HTTP/1.1
Host: example.com
User-Agent: XXX
Time: XXXX
Content-Length: 0

""".replace(
                b"\n", b"\r\n"
            )
        )
        assert b"200 OK" in transport.io.getvalue()

    benchmark(go)
Empty file.
