refactor benchmark and add tilebench tests #649

Merged · 1 commit · Oct 19, 2023

26 changes: 22 additions & 4 deletions .github/workflows/ci.yml
@@ -32,7 +32,7 @@ jobs:
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
-          python -m pip install -e .["test"]
+          python -m pip install -e ".[test]"

       - name: Run pre-commit
         if: ${{ matrix.python-version == env.LATEST_PY_VERSION }}
@@ -41,7 +41,7 @@
           pre-commit run --all-files

       - name: Run tests
-        run: python -m pytest --cov rio_tiler --cov-report xml --cov-report term-missing --benchmark-skip -s -vv
+        run: python -m pytest --cov rio_tiler --cov-report xml --cov-report term-missing -s -vv

       - name: Upload Results
         if: ${{ matrix.python-version == env.LATEST_PY_VERSION }}
@@ -65,10 +65,10 @@
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
-          python -m pip install -e .["test"]
+          python -m pip install -e ".[benchmark]"

       - name: Run Benchmark
-        run: python -m pytest --benchmark-only --benchmark-columns 'min, max, mean, median' --benchmark-sort 'min' --benchmark-json output.json
+        run: python -m pytest tests/benchmarks/benchmarks.py --benchmark-only --benchmark-columns 'min, max, mean, median' --benchmark-sort 'min' --benchmark-json output.json

       - name: Store and Compare benchmark result
         uses: benchmark-action/github-action-benchmark@v1
@@ -86,6 +86,24 @@
          auto-push: ${{ github.ref == 'refs/heads/main' }}
          benchmark-data-dir-path: dev/benchmarks

+  benchmark-requests:
+    needs: [tests]
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: '3.11'
+
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          python -m pip install -e ".[tilebench]"
+
+      - name: Run Benchmark
+        run: python -m pytest tests/benchmarks/requests.py -s -vv
+
  publish:
    needs: [tests]
    runs-on: ubuntu-latest
23 changes: 15 additions & 8 deletions CONTRIBUTING.md
@@ -5,15 +5,22 @@ Issues and pull requests are more than welcome.
### dev install

```bash
-$ git clone https://github.com/cogeotiff/rio-tiler.git
-$ cd rio-tiler
-$ pip install -e .["dev"]
+git clone https://github.com/cogeotiff/rio-tiler.git
+cd rio-tiler
+python -m pip install -e ".[test,dev]"
```

You can then run the tests with the following command:

```sh
-python -m pytest --cov rio_tiler --cov-report term-missing --benchmark-skip
+python -m pytest --cov rio_tiler --cov-report term-missing
```

+##### Performance tests
+
+```sh
+python -m pip install -e ".[benchmark]"
+python -m pytest tests/benchmarks/benchmarks.py --benchmark-only --benchmark-columns 'min, max, mean, median' --benchmark-sort 'min'
+```
+
### pre-commit
@@ -27,15 +34,15 @@ $ pre-commit install
### Docs

```bash
-$ git clone https://github.com/cogeotiff/rio-tiler.git
-$ cd rio-tiler
-$ pip install -e .["docs"]
+git clone https://github.com/cogeotiff/rio-tiler.git
+cd rio-tiler
+python -m pip install -e .["docs"]
```

Hot-reloading docs:

```bash
-$ mkdocs serve
+$ mkdocs serve -f docs/mkdocs.yml
```

To manually deploy docs (note you should never need to do this because Github
13 changes: 11 additions & 2 deletions pyproject.toml
@@ -36,8 +36,6 @@ dependencies = [
[project.optional-dependencies]
test = [
    "pytest",
-    "pytest-asyncio",
-    "pytest-benchmark",
    "pytest-cov",

    # XarrayReader
@@ -46,6 +44,17 @@ test = [
    # S3
    "boto3",
]
+
+benchmark = [
+    "pytest",
+    "pytest-benchmark",
+]
+
+tilebench = [
+    "pytest",
+    "tilebench",
+]
+
dev = [
    "pre-commit",
]
File renamed without changes.
67 changes: 67 additions & 0 deletions tests/benchmarks/requests.py
@@ -0,0 +1,67 @@
"""Test HTTP Requests."""

from tilebench import profile

from rio_tiler.io import Reader

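# Sentinel-2 L2A true-color (TCI) COG from the public sentinel-cogs bucket on AWS.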
dataset_url = "https://sentinel-cogs.s3.us-west-2.amazonaws.com/sentinel-s2-l2a-cogs/15/T/VK/2023/10/S2B_15TVK_20231008_0_L2A/TCI.tif"


def test_info():
    """Info should only GET the header."""

    @profile(
        kernels=True,
        config={
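            # GDAL settings pinned so the request pattern is deterministic:
            # merge consecutive range requests, skip directory listing at open,
            # and ingest 32 KiB when the file is opened (asserted below).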
"GDAL_HTTP_MERGE_CONSECUTIVE_RANGES": "YES",
"GDAL_DISABLE_READDIR_ON_OPEN": "EMPTY_DIR",
"GDAL_INGESTED_BYTES_AT_OPEN": 32768,
},
quiet=True,
add_to_return=True,
)
def info(src_path: str):
with Reader(src_path) as src_dst:
return src_dst.info()

_, stats = info(dataset_url)
assert stats["HEAD"]["count"] == 1
assert stats["GET"]["count"] == 1
assert stats["GET"]["bytes"] == 32768
assert stats["GET"]["ranges"] == ["0-32767"]
assert not stats["WarpKernels"]


def test_tile_read():
    """Tile Read tests."""

    @profile(
        kernels=True,
        config={
            "GDAL_HTTP_MERGE_CONSECUTIVE_RANGES": "YES",
            "GDAL_DISABLE_READDIR_ON_OPEN": "EMPTY_DIR",
            "GDAL_INGESTED_BYTES_AT_OPEN": 32768,
"CPL_VSIL_CURL_NON_CACHED": f"/vsicurl/{dataset_url}",
},
quiet=True,
add_to_return=True,
)
def tile(src_path: str, x: int, y: int, z: int):
with Reader(src_path) as src_dst:
return src_dst.tile(x, y, z, tilesize=256)

_, stats = tile(
dataset_url,
493,
741,
11,
)
assert stats["HEAD"]["count"] == 1
assert stats["GET"]["count"] == 3
assert stats["GET"]["bytes"] == 2293760
assert stats["GET"]["ranges"] == [
"0-32767",
"9011200-10158079",
"11075584-12189695",
]
assert stats["WarpKernels"]