
Unit test marking #104

Closed
wants to merge 6 commits into from

Conversation

@mattkwiecien (Contributor) commented Jan 5, 2024

Adding markers to the pytest configuration and marking tests that are precision sensitive or slow.

This is an incremental step toward improving unit test speeds by separating unit tests (which users can run all the time) from integration tests, which can take very long. Ideally we can then refactor the tests so that integration tests run nightly on GitHub Actions.

Also marking tests that depend on numerical precision as precision sensitive. By identifying these now, we can be sure the code isn't broken if SciPy or NumPy make small changes to their numerical precision.
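The marker setup described above can be sketched as follows. This is a minimal illustration, not the PR's actual diff: the marker names ("slow", "precision_sensitive") and the test names are assumptions for the example.

```python
import pytest

# Markers would be registered in pytest.ini or pyproject.toml, e.g.:
#   [pytest]
#   markers =
#       slow: integration tests that take a long time
#       precision_sensitive: tests that depend on exact numerical precision

@pytest.mark.slow
def test_full_covariance_pipeline():
    # Stand-in for a long-running integration test.
    assert True

@pytest.mark.precision_sensitive
def test_small_tolerance():
    # Stand-in for a test that depends on numerical precision.
    assert abs(1.0 - 0.999999999) < 1e-6
```

With markers in place, the fast suite can be run with `pytest -m "not slow"`, and the slow tests selected for a nightly job with `pytest -m slow`.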

Using markers to identify slow tests and precision sensitive tests.
Small CovarianceIO refactor.

return s
sacc_clone.save_fits(output_file_nm, overwrite=overwrite)
Contributor:

Hmm, I'm not sure whether dropping the return value could break the integration with TXPipe. If you think it's better not to return it, I'd check what TXPipe is doing; otherwise I'd just add a TODO or something like that.
I think it's convenient to have it returned there.
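The backward-compatible option discussed here can be sketched as below. This is a hypothetical helper (the function name and signature are assumptions, not TJPCov's actual code): the file is written via sacc's `save_fits`, but the clone is still returned so callers such as TXPipe that rely on the return value keep working.

```python
def save_and_return(sacc_clone, output_file_nm, overwrite=False):
    # Persist the covariance-bearing sacc clone to a FITS file.
    sacc_clone.save_fits(output_file_nm, overwrite=overwrite)
    # Keep returning the clone to preserve the old contract for callers.
    return sacc_clone
```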

Contributor (Author):

Ah excellent call. I meant to be more careful about changing existing functionality. I will check and make sure nothing is breaking other tools.

@carlosggarcia (Contributor) left a comment:

LGTM, but before merging, check that you don't break the integration with TXPipe (see my comment above).

from unittest.mock import MagicMock

from tjpcov.covariance_builder import CovarianceBuilder
from scipy.linalg import block_diag

INPUT_YML = "./tests/data/conf_covariance_builder_minimal.yaml"
OUTDIR = "tests/tmp/"


def test_no_config_throws():
mock_builder = MagicMock(CovarianceBuilder)
Contributor:

What magic is MagicMock doing?

Contributor (Author):

I use unittest.mock here to test the logic in the base class without needing to instantiate any concrete implementations. This is preferable because then we aren't implicitly testing something else.

MagicMock is a regular Mock that also allows you to mock dunder methods (e.g. __new__). In this case the added mocking allows us to directly call the __init__ function on the base class.
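The pattern described here can be sketched as below. The base class is a stand-in, not TJPCov's actual `CovarianceBuilder`; the point is only that a `MagicMock` specced to a class lets you call the real base-class `__init__` on the mock instance without a concrete subclass.

```python
from unittest.mock import MagicMock

# Minimal stand-in for an abstract base class whose __init__ we want to
# exercise without instantiating any concrete implementation.
class CovarianceBase:
    def __init__(self, config):
        self.config = config

# MagicMock(CovarianceBase) creates a mock specced to the class; since
# MagicMock (unlike plain Mock) supports dunder methods, we can invoke
# the real base-class __init__ directly on the mock instance.
mock_builder = MagicMock(CovarianceBase)
CovarianceBase.__init__(mock_builder, {"cov_type": "gauss"})
```

After the call, any attributes `__init__` set (here `config`) are present on the mock, so the base-class logic can be asserted on in isolation.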
