Add codspeed workflow to run benchmarks #382

Merged: 45 commits into main from benchmarking-codspeed on May 9, 2024

Commits
593b672  Add very very basic benchmark test just to get things rolling (SolarDrew, Apr 17, 2024)
de7ef0b  Add tox env (SolarDrew, Apr 17, 2024)
dac0e20  Add benchmarks tox env to gh jobs list (SolarDrew, Apr 17, 2024)
a99d255  Move unzipping out of benchmark to just measure opening the asdf (SolarDrew, Apr 18, 2024)
bfc31c5  Pull benchmark data for latest release and compare (SolarDrew, Apr 22, 2024)
76e4e92  Add tox env for benchmarking main and tweak not-main benchmark run (SolarDrew, Apr 22, 2024)
b6d0792  Add workflow to run benchmarks on main (SolarDrew, Apr 22, 2024)
82332ea  Make file copying work in tox (SolarDrew, Apr 24, 2024)
0d59140  Make github workflow push main benchmark results to repo (SolarDrew, Apr 24, 2024)
2edc30d  Make some steps towards getting gh worklow working (SolarDrew, Apr 24, 2024)
075e6de  Need to actually push changes for github workflow to see them (SolarDrew, Apr 24, 2024)
3e3cafd  Make workflow push benchmark to repo (SolarDrew, Apr 24, 2024)
234af5d  Make tox do it instead (SolarDrew, Apr 24, 2024)
36bd260  Using different version of Python for workflow and local clearly a ba… (SolarDrew, Apr 24, 2024)
fe83f56  Committing needs user config (SolarDrew, Apr 24, 2024)
31895ed  Need to know why push isn't working (SolarDrew, Apr 25, 2024)
18700f2  Try disabling git terminal prompts (SolarDrew, Apr 25, 2024)
ddde74a  Well that hasn't helped, it just errors instead of hanging (SolarDrew, Apr 25, 2024)
831c41d  Try doing the git bit in the workflow instead again (SolarDrew, Apr 25, 2024)
fae780a  Share volumes across docker images so we can commit results in the wo… (SolarDrew, Apr 26, 2024)
1807001  Move benchmark workflow out of main tox action into codspeed (SolarDrew, May 8, 2024)
c834e31  Need to actually install the repo, that will help (SolarDrew, May 8, 2024)
96d20c5  Use the actual codspeed flag for pytest (SolarDrew, May 8, 2024)
ed3f1aa  Add a basic plotting benchmark (SolarDrew, May 8, 2024)
a3ae0d7  isort and formatting stuff (SolarDrew, May 8, 2024)
06132e8  Add changelog (SolarDrew, May 8, 2024)
658f05f  Don't need the previous benchmark yaml (SolarDrew, May 8, 2024)
f9d8979  I remember how this benchmarking stuff actually works... (SolarDrew, May 8, 2024)
dcc8ae7  Tidying (SolarDrew, May 8, 2024)
ca4cbbb  Split file and data fixtures for large_visp_dataset for cases where w… (SolarDrew, May 8, 2024)
c642559  More tidying (SolarDrew, May 8, 2024)
be5d36f  Move pytest-codecov plugin installation (SolarDrew, May 8, 2024)
653cefa  Update .github/workflows/codspeed.yml (SolarDrew, May 8, 2024)
794360c  Merge branch 'benchmarking-codspeed' of github.com:SolarDrew/dkist in… (SolarDrew, May 8, 2024)
f535f27  Add benchmark as a marker to keep pytest quiet (SolarDrew, May 8, 2024)
262bfc3  Yep did that wrong (SolarDrew, May 8, 2024)
4fa2e2c  Sure this used to just make up a sensible default but whatever (SolarDrew, May 8, 2024)
2a55fae  Reinstate benchmark tox env for local testing (SolarDrew, May 8, 2024)
24dbaba  Test all the combinations of axes because why not (SolarDrew, May 8, 2024)
f833f8c  Gotta close those figures (SolarDrew, May 8, 2024)
b331794  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], May 8, 2024)
c4e2f2f  Merge branch 'main' of github.com:DKISTDC/dkist (SolarDrew, May 9, 2024)
6fecc26  Update dkist/tests/test_benchmarks.py (SolarDrew, May 9, 2024)
d8ef441  Merge branch 'main' into benchmarking-codspeed (SolarDrew, May 9, 2024)
ac4c26d  Don't need to run benchmarks when we do the other tests (SolarDrew, May 9, 2024)
23 changes: 23 additions & 0 deletions .github/workflows/codspeed.yml
@@ -0,0 +1,23 @@
name: codspeed-benchmarks

on:
  push:
    branches:
      - "main"
  pull_request:
  workflow_dispatch:

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: python -m pip install .[tests] pytest-codspeed
      - name: Run benchmarks
        uses: CodspeedHQ/action@v2
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: "pytest -vvv -r fEs --pyargs dkist --codspeed"
1 change: 1 addition & 0 deletions changelog/382.feature.rst
@@ -0,0 +1 @@
Add GitHub workflow and dependencies for Codspeed, to benchmark PRs against main.
17 changes: 11 additions & 6 deletions dkist/conftest.py
@@ -307,7 +307,16 @@ def small_visp_dataset():


 @pytest.fixture(scope="session")
-def large_visp_dataset(tmp_path_factory):
+def large_visp_dataset_file(tmp_path_factory):
+    vispdir = tmp_path_factory.mktemp("data")
+    with gzip.open(Path(rootdir) / "large_visp.asdf.gz", mode="rb") as gfo:
+        with open(vispdir / "test_visp.asdf", mode="wb") as afo:
+            afo.write(gfo.read())
+    return vispdir / "test_visp.asdf"
+
+
+@pytest.fixture(scope="session")
+def large_visp_dataset(large_visp_dataset_file):
     # This dataset was generated by the following code:
     # from dkist_data_simulator.spec214.visp import SimpleVISPDataset
     # from dkist_inventory.asdf_generator import dataset_from_fits
@@ -319,8 +328,4 @@ def large_visp_dataset(tmp_path_factory):
     # ds.generate_files(vispdir)
     # dataset_from_fits(vispdir, "test_visp.asdf")

-    vispdir = tmp_path_factory.mktemp("data")
-    with gzip.open(Path(rootdir) / "large_visp.asdf.gz", mode="rb") as gfo:
-        with open(vispdir / "test_visp.asdf", mode="wb") as afo:
-            afo.write(gfo.read())
-    return load_dataset(vispdir / "test_visp.asdf")
+    return load_dataset(large_visp_dataset_file)
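
Splitting the on-disk file fixture out of the loaded-dataset fixture lets benchmarks time `load_dataset` itself without also measuring the gzip decompression (compare the "Move unzipping out of benchmark" commit above), while the plotting benchmarks reuse the session-scoped loaded dataset. The same file fixture could also support lower-level benchmarks; a hypothetical sketch, not part of this PR:

import asdf
import pytest


@pytest.mark.benchmark
def test_open_raw_asdf(benchmark, large_visp_dataset_file):
    # Hypothetical: time only the raw asdf.open() call on the
    # already-decompressed file produced by the fixture.
    def open_and_close():
        with asdf.open(large_visp_dataset_file):
            pass
    benchmark(open_and_close)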
26 changes: 26 additions & 0 deletions dkist/tests/test_benchmarks.py
@@ -0,0 +1,26 @@
import matplotlib.pyplot as plt
import pytest

from dkist import load_dataset


@pytest.mark.benchmark
def test_load_asdf(benchmark, large_visp_dataset_file):
    benchmark(load_dataset, large_visp_dataset_file)


@pytest.mark.benchmark
@pytest.mark.parametrize("axes", [
    ["y", "x", None, None],
    ["y", None, "x", None],
    ["y", None, None, "x"],
    [None, "y", "x", None],
    [None, "y", None, "x"],
    [None, None, "y", "x"],
])
def test_plot_dataset(benchmark, axes, large_visp_dataset):
    @benchmark
    def plot_and_save_fig(ds=large_visp_dataset, axes=axes):
        ds.plot(plot_axes=axes)
        plt.savefig("tmpplot")
        plt.close()

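A design note on `test_plot_dataset`: applying `@benchmark` as a decorator runs `plot_and_save_fig` immediately under measurement, and the `plt.close()` call (the "Gotta close those figures" commit) stops figures accumulating across the six parametrized runs. Since `plt.savefig("tmpplot")` writes into the current working directory, a variant using pytest's built-in `tmp_path` fixture would keep the run self-cleaning; a sketch under that assumption, not part of this PR:

import matplotlib.pyplot as plt
import pytest


@pytest.mark.benchmark
def test_plot_dataset_tmpdir(benchmark, large_visp_dataset, tmp_path):
    @benchmark
    def plot_and_save_fig():
        large_visp_dataset.plot(plot_axes=["y", "x", None, None])
        plt.savefig(tmp_path / "tmpplot.png")  # written under pytest's per-test temp dir
        plt.close()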
1 change: 1 addition & 0 deletions pyproject.toml
@@ -53,6 +53,7 @@ tests = [
     "pytest-mpl",
     "pytest-httpserver",
     "pytest-filter-subpackage",
+    "pytest-benchmark",
     "hypothesis",
     "tox",
     "pydot",
1 change: 1 addition & 0 deletions pytest.ini
@@ -20,6 +20,7 @@ addopts = --doctest-rst -p no:unraisableexception -p no:threadexception
 markers =
     online: marks this test function as needing online connectivity.
     figure: marks this test function as using hash-based Matplotlib figure verification. This mark is not meant to be directly applied, but is instead automatically applied when a test function uses the @sunpy.tests.helpers.figure_test decorator.
+    benchmark: marks this test as a benchmark
 # Disable internet access for tests not marked remote_data
 remote_data_strict = True
 asdf_schema_root = dkist/io/asdf/resources/
8 changes: 7 additions & 1 deletion tox.ini
@@ -10,6 +10,7 @@ envlist =
     py310-oldestdeps
     build_docs{,-notebooks}
     codestyle
+    benchmarks

 [testenv]
 pypi_filter = https://raw.githubusercontent.com/sunpy/sunpy/main/.test_package_pins.txt
@@ -34,7 +35,7 @@ set_env =
     COLUMNS = 180
     devdeps: PIP_EXTRA_INDEX_URL = https://pypi.anaconda.org/astropy/simple https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
     # Define the base test command here to allow us to add more flags for each tox factor
-    PYTEST_COMMAND = pytest -vvv -r fEs --pyargs dkist --cov-report=xml --cov=dkist --cov-config={toxinidir}/.coveragerc {toxinidir}/docs
+    PYTEST_COMMAND = pytest -vvv -r fEs --pyargs dkist --cov-report=xml --cov=dkist --cov-config={toxinidir}/.coveragerc {toxinidir}/docs --benchmark-skip
@@ -89,3 +90,8 @@ commands =
     !notebooks: sphinx-build -j 1 --color -W --keep-going -b html -d _build/.doctrees . _build/html -D nb_execution_mode=off {posargs}
     notebooks: sphinx-build -j 1 --color -W --keep-going -b html -d _build/.doctrees . _build/html {posargs}
     python -c 'import pathlib; print("Documentation available under file://\{0\}".format(pathlib.Path(r"{toxinidir}") / "docs" / "_build" / "index.html"))'
+
+[testenv:benchmarks]
+description = Run benchmarks on PR and compare against main to ensure there are no performance regressions
+allowlist_externals=git
+commands = {env:PYTEST_COMMAND} -m benchmark --benchmark-autosave