Python testing framework and workflow #45
base: main
Conversation
…is actually tested
… but numerical differences persist in many columns
Coming back to this PR, I am now noticing a number of differences between the 'new' computed output and the reference output. Eliminating these from the output and comparing floating-point values, most values agree except for 3 or 4 columns.
Following up on this, I notice differences between reference outputs. I assume some steps are not deterministic, so should I ignore certain output columns?
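One way to handle both concerns above (small floating-point differences plus non-deterministic columns) is a tolerance-based comparison that skips an explicit ignore list. This is a stdlib-only sketch, not code from the PR; the function name, tolerance, and column names are all hypothetical:

```python
import csv
import math

def csv_columns_match(path_a, path_b, rel_tol=1e-6, ignore=()):
    """Compare two CSV files cell by cell (hypothetical helper, not from the PR).

    Numeric cells are compared with a relative tolerance; columns listed in
    `ignore` (e.g. known non-deterministic ones) are skipped entirely.
    """
    with open(path_a, newline="") as fa, open(path_b, newline="") as fb:
        rows_a = list(csv.DictReader(fa))
        rows_b = list(csv.DictReader(fb))
    if len(rows_a) != len(rows_b):
        return False
    for ra, rb in zip(rows_a, rows_b):
        for col in ra:
            if col in ignore:
                continue
            va, vb = ra[col], rb.get(col)
            try:
                if not math.isclose(float(va), float(vb), rel_tol=rel_tol):
                    return False
            except (TypeError, ValueError):
                # Non-numeric (or missing) cells must match exactly.
                if va != vb:
                    return False
    return True
```

Whether a plain relative tolerance is appropriate for the affected columns would depend on which pipeline steps produce them.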
@gilbertozp is this blocked on the comment above from Matt? Or can this be merged?
@gilbertozp is taking a look at this.
Add a Python testing framework and a GitHub Actions workflow for automated testing. The PR downloads
US-ARc_sample*
data and sets up the nighttime partitioning step. The test runs in a few seconds, and the generated output (*.csv files) is compared for equality against reference output. The nighttime partitioning step was chosen as the integration test because it runs much faster than other steps in the ONEFlux pipeline (e.g. daytime partitioning). Python unit tests will be implemented in a later PR.
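The equality comparison described above could be sketched as a stdlib-only walk over the reference directory, checking each *.csv byte for byte. This is an illustrative sketch, not the PR's actual code; the function name and directory layout are assumptions:

```python
import filecmp
from pathlib import Path

def compare_output_dirs(out_dir, ref_dir):
    """Return the *.csv files under ref_dir that are missing from out_dir
    or differ byte-for-byte (hypothetical helper, not from the PR)."""
    mismatched = []
    for ref_file in Path(ref_dir).rglob("*.csv"):
        out_file = Path(out_dir) / ref_file.relative_to(ref_dir)
        if not out_file.exists() or not filecmp.cmp(ref_file, out_file, shallow=False):
            mismatched.append(str(ref_file.relative_to(ref_dir)))
    return sorted(mismatched)
```

Byte-for-byte equality is strict; as the comments above note, floating-point or non-deterministic columns may require a looser comparison.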
TODOs:
- Update tests/test_context.py to use pytest.
- Organize tests under tests/C and tests/python.
- Add a pytest partitioning_nt integration test: tests/python/integration/test_partitioning.py
- Add a GitHub Actions workflow (.github/workflows/python-app.yaml) to download US-ARc_sample* data and execute the integration test.
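An integration test like the one listed above typically drives the pipeline step as a subprocess and fails on a non-zero exit code. A minimal sketch, assuming a generic wrapper (the real ONEFlux entry point and its arguments are not shown in this PR excerpt, so callers must supply the command):

```python
import subprocess

def run_pipeline_step(command, workdir="."):
    """Run one pipeline step as a subprocess and fail loudly on error.

    `command` is an argv list (hypothetical wrapper; the actual ONEFlux
    invocation used by the PR's test is not reproduced here).
    """
    result = subprocess.run(command, cwd=workdir, capture_output=True, text=True)
    assert result.returncode == 0, f"step failed: {result.stderr}"
    return result.stdout
```

A pytest test could call this with the nighttime-partitioning command and then compare the generated *.csv files against the reference output.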
Fixes #46