
Switch to pytest #939

Merged: 22 commits into stac-utils:main from issues/931-convert-to-pytest, Jan 18, 2023

Conversation

@pjhartzell (Collaborator) commented Jan 5, 2023

Related Issue(s):

  • #931

Description:

  • Switches from unittest to pytest (a before/after sketch follows this list).
  • Adds pytest parametrizations in place of for loops and subTests.
  • Removes the tests directory from the coverage measurement to better represent the pystac code coverage. This results in a lower coverage percentage, so the benchmark (fail_under) was lowered from 94% to 90%.
  • Adds tests for the html module.
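
For a rough sense of what the switch looks like (a hedged sketch with hypothetical test and helper names, not code from this PR), a unittest-style test such as:

```python
import unittest


class CatalogTest(unittest.TestCase):
    def test_id(self) -> None:
        catalog = make_catalog()  # hypothetical helper that builds a test catalog
        self.assertEqual(catalog.id, "test-catalog")
```

becomes a plain function with a bare assert, and subTest loops become parametrizations:

```python
import pytest


@pytest.mark.parametrize("catalog_id", ["test-catalog", "other-catalog"])
def test_id(catalog_id: str) -> None:
    catalog = make_catalog(catalog_id)  # hypothetical helper
    assert catalog.id == catalog_id
```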

PR Checklist:

  • Code is formatted (run `pre-commit run --all-files`)
  • Tests pass (run `scripts/test`)
  • Documentation has been updated to reflect changes, if applicable
  • This PR maintains or improves overall codebase code coverage.
  • Changes are added to the CHANGELOG. See the docs for information about adding to the changelog.

@pjhartzell (Collaborator, Author) commented:

This is the minimum set of changes to use pytest. Per #931 we may want to look at using more pytest features.

@gadomski added the "tests" label Jan 6, 2023
@gadomski linked an issue (#931) Jan 6, 2023 that may be closed by this pull request
codecov-commenter commented Jan 6, 2023

Codecov Report

Base: 94.43% // Head: 90.09% // Decreases project coverage by 4.35% ⚠️

Coverage data is based on head (cc9222e) compared to base (3c6aa81).
Patch has no changes to coverable lines.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #939      +/-   ##
==========================================
- Coverage   94.43%   90.09%   -4.35%
==========================================
  Files          83       47      -36
  Lines       12067     6057    -6010
  Branches     1144      905     -239
==========================================
- Hits        11396     5457    -5939
+ Misses        492      420      -72
- Partials      179      180       +1
```

Impacted files (coverage Δ):

  • pystac/__init__.py: 93.87% <0.00%> (-6.13%) ⬇️
  • pystac/serialization/common_properties.py: 74.07% <0.00%> (-5.56%) ⬇️
  • tests/extensions/test_label.py
  • tests/extensions/test_pointcloud.py
  • tests/extensions/test_projection.py
  • tests/extensions/test_timestamps.py
  • tests/extensions/test_version.py
  • tests/extensions/test_view.py
  • tests/serialization/test_identify.py
  • tests/serialization/test_migrate.py
  • ... and 37 more


We don't need to specify any of this stuff -- it's better to let the user pick their preferred setup.
@gadomski (Member) left a comment:
Looks good. A couple of notes / changes:

  1. While I think we should avoid any huge test refactors in this PR, there are a couple of places where pytest gets us some quick wins. E.g.

```python
def test_testcases(self) -> None:
    for catalog in TestCases.all_test_catalogs():
        catalog = catalog.full_copy()
        ctypes = [
            CatalogType.ABSOLUTE_PUBLISHED,
            CatalogType.RELATIVE_PUBLISHED,
            CatalogType.SELF_CONTAINED,
        ]
        for catalog_type in ctypes:
            with self.subTest(
                title="Catalog {} [{}]".format(catalog.id, catalog_type)
            ):
                self.do_test(catalog, catalog_type)
```

     could be made a lot simpler (and report a lot better) using pytest's parametrized tests. Could you update that test, and any other similar ones, to use parametrization? (A sketch of the parametrized form follows this comment.)
  2. I don't think it's worth going through and updating every test to use pytest's fixtures right now, but could you open an issue to capture that future task?
  3. Same as above but for pytest-vcr: we could make some of our tests more robust by using it instead of hitting the network every time, but that can be its own task that we do after this PR is merged. Could you open an issue for that as well? (A pytest-vcr sketch also follows this comment.)

Thanks!
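
For item 1, a parametrized version of the quoted test might look roughly like this (a sketch, not the code actually merged; the TestCases import path and the module-level do_test helper are assumptions):

```python
import pytest

from pystac import Catalog, CatalogType
from tests.utils import TestCases  # assumed import path for the test helpers


@pytest.mark.parametrize("catalog", TestCases.all_test_catalogs())
@pytest.mark.parametrize(
    "catalog_type",
    [
        CatalogType.ABSOLUTE_PUBLISHED,
        CatalogType.RELATIVE_PUBLISHED,
        CatalogType.SELF_CONTAINED,
    ],
)
def test_testcases(catalog: Catalog, catalog_type: CatalogType) -> None:
    # Stacked parametrize decorators produce the cross-product, so each
    # (catalog, catalog_type) pair is collected and reported as its own test.
    do_test(catalog.full_copy(), catalog_type)  # assumed module-level helper
```

For item 3, pytest-vcr's basic pattern is a marker that records a test's HTTP traffic to a cassette on the first run and replays it afterwards; a minimal sketch (hypothetical test and URL):

```python
import pytest

import pystac


@pytest.mark.vcr
def test_read_remote_catalog() -> None:
    # First run hits the network and records a cassette; later runs replay it.
    catalog = pystac.Catalog.from_file("https://example.com/catalog.json")
    assert catalog.id is not None
```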

Review thread on scripts/test (outdated, resolved)
@gadomski changed the title from "feat: switch to pytest" to "Switch to pytest" Jan 6, 2023
@gadomski gadomski added this to the 1.7 milestone Jan 9, 2023
@pjhartzell (Collaborator, Author) commented Jan 10, 2023

A comment here about including tests (and not just pystac) in coverage (see the `.coveragerc` file). My initial reaction is that coverage should measure only what is being tested, i.e., only the pystac package. Unused code or bugs in the tests are a distinct topic that should not get in the way of a clean report of how well the pystac package is covered by the tests.

However, I see that tests was explicitly added by @l0b0 in 3e5fed6. Any thoughts from the STAC community on this?

@l0b0 (Contributor) commented Jan 10, 2023

> However, I see that tests was explicitly added by @l0b0 in 3e5fed6. Any thoughts from the STAC community on this?

I believe that was in response to a test (or test utility, I don't remember) which was not being run. Test code should always have 100% coverage, no exceptions, so it shouldn't really clutter up any reports.

So, basically, while this "meta-coverage" answers a different question than production code coverage, it's still a useful thing to know.

@gadomski (Member) commented:

> So, basically, while this "meta-coverage" answers a different question than production code coverage, it's still a useful thing to know.

To me, it's confusing to include test code coverage in the "core" coverage report. If we do want to check for non-run tests, IMO that should be part of a separate coverage run. I'm also personally 👍🏽 for dropping it as a part of CI -- if we're worried about non-run tests, it's not too hard to do a one-off coverage run of just the test suite.
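
For reference, a one-off run like that could be as simple as pointing pytest-cov at the tests directory (the flags shown are standard pytest-cov options, not a command from this thread):

```console
pytest --cov=tests --cov-report=term-missing
```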

@gadomski (Member) left a comment:

Since we're removing `--cov=tests`, I opened #951 to capture some non-run test lines I found from a manual run.

@pjhartzell (Collaborator, Author) commented:

> Since we're removing `--cov=tests`

I haven't removed that yet, but, yeah, 👍 on the new issue. I'll remove the tests directory coverage from the `.coveragerc` file.

@pjhartzell (Collaborator, Author) commented:

@gadomski After removing the tests directory from coverage, the coverage percentage dropped to 90.42%, so I set the fail_under value in `.coveragerc` to 90.4.
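
For context, the relevant parts of the resulting `.coveragerc` would look something like this (a sketch based on the values discussed in this thread, not the exact file contents):

```ini
[run]
branch = True
# "tests" removed from the measured sources; only package code is counted
source = pystac

[report]
# lowered from 94.0 after dropping the tests directory from measurement
fail_under = 90.4
```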

@gadomski (Member) commented:

> After removing the tests directory from coverage, the coverage percentage dropped to 90.42%, so I set the fail_under value in `.coveragerc` to 90.4.

Yeah, I think that's a good thing -- the tests directory was giving us an over-estimate of project coverage.

There are two options here:

  1. Merge with "non-passing" coverage, and open an issue to raise coverage above 90%
  2. Up the coverage before merge

IMO if there's an obvious place we can add a test or two to this PR to get coverage over 90%, that'll make CI happy w/o dropping our threshold too low.

@pjhartzell (Collaborator, Author) commented:

  • Coverage was at 94.13%, and fail_under was set to 94.0%.
  • After removing the tests directory, coverage is now at 90.42%. I set the new fail_under to 90.4%. We can make that a cleaner number (90.0%) if desired.

> IMO if there's an obvious place we can add a test or two to this PR to get coverage over 90%, that'll make CI happy w/o dropping our threshold too low.

We are still above 90%. I think there was some number confusion. Hopefully, the above bullets clear that up.

@gadomski (Member) commented:

@pjhartzell (Collaborator, Author) commented:

Oops. Yeah, just saw that. Will look into this.

Review thread on .github/workflows/continuous-integration.yml (outdated, resolved)
Review thread on tests/html/test_html.py (outdated, resolved)
@pjhartzell pjhartzell marked this pull request as ready for review January 18, 2023 18:44
@pjhartzell pjhartzell merged commit 698e963 into stac-utils:main Jan 18, 2023
@pjhartzell pjhartzell deleted the issues/931-convert-to-pytest branch January 18, 2023 20:07
Labels: tests (Issues or pull requests that relate to the test suite)

Successfully merging this pull request may close these issues: Convert to pytest (#931)

4 participants