Suggest in review template to refer to test requirements #198

Closed · rhine3 opened this issue Feb 27, 2023 · 10 comments · Fixed by #203

rhine3 commented Feb 27, 2023

While reviewing Crowsetta (pyOpenSci/software-submission#68), I thought of the following suggestion:

I wonder if pyOpenSci's review template could suggest that reviewers check the package's documentation for how the maintainers expect tests to be run, e.g., from a development install of the package.

...because a handful of tests failed at first when I used a conda-installed older version of Crowsetta to run the tests from Crowsetta's development branch. Had I looked at the package documentation first, I would've found the recommendation to run tests within a development install.
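
To make the failure mode concrete, here is a minimal sketch of the mismatch and the development-install fix. This is an illustration assuming standard pip/pytest tooling and that the package exposes `__version__`; it is not Crowsetta's actual documented instructions:

```shell
# the tests come from the development branch...
git clone https://github.com/vocalpy/crowsetta.git
cd crowsetta

# ...but pytest imports whatever version of the package is installed in
# the environment, e.g. an older conda-installed release, not the
# checked-out source
python -c "import crowsetta; print(crowsetta.__version__)"

# an editable (development) install makes the imported package match
# the checked-out tests
python -m pip install -e .
python -m pytest
```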

NickleDave (Contributor) commented:

Thank you @rhine3

👍 +1 for adding something in the review template as you suggest

maybe we can revise the following lines to be more explicit?

- [ ] **Installation:** Installation succeeds as documented.

to be more like

- [ ] **Installation:** Installation succeeds as documented:
  - [ ] Installing from the specified package index or indexes (PyPI, conda-forge, bioconda, etc.) succeeds.
  - [ ] Setting up a development environment and installing the package into it, as described in the development documentation, succeeds.

- [ ] **Automated tests:** Tests cover essential functions of the package and a reasonable range of inputs and conditions. All tests pass on the local machine.

to be more like

- [ ] **Automated tests:**
  - [ ] All tests pass on the local machine when run in a development environment set up according to the package's documentation.
  - [ ] Tests cover essential functions of the package and a reasonable range of inputs and conditions.

lwasser (Member) commented Feb 27, 2023

👋 hi there @rhine3 - welcome to our peer review guide repo, and thank you for reviewing crowsetta!

i'm curious about this specifically - @NickleDave, is the issue here that Tessa ran the tests on the installed version, but you had updated things in the GH repo, so there was a disconnect between the tests in your GH repo and the installed version of your package? i know we chatted in Slack, but there were a few different issues there.

I'm wondering why tests would fail on an installed version (maybe no data? or no pytest, or whatever you are using). What were the points of failure?

Many thanks - just documenting everything here and trying to think through what the resulting template should say! I definitely think it's a great idea to test the dev environment and associated instructions.

NickleDave (Contributor) commented Feb 27, 2023

> is the issue here that Tessa ran the tests on the installed version, but you had updated things in the GH repo, so there was a disconnect between the tests in your GH repo and the installed version of your package?

Yes, see vocalpy/crowsetta#218.
And also vocalpy/crowsetta#222, where the other reviewer had a similar issue -- they could not run the tests because they installed from conda-forge, so a pytest fixture was missing (pytest is an optional test dependency).

I had assumed people would set up a dev env according to the instructions but obviously that wasn't clear.
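
For instance, the difference looks roughly like this. A sketch only: the `[test]` extra name is an assumption, and the actual extra (if any) is defined in the package's metadata:

```shell
# a released install from conda-forge does not pull in test-only
# dependencies such as pytest or its fixtures
conda install -c conda-forge crowsetta

# a development install can request them via an optional extra
# ("[test]" is a hypothetical extra name; check the package metadata)
python -m pip install -e ".[test]"
```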

cmarmo (Member) commented Mar 1, 2023

> to be more like
>
> - [ ] **Automated tests:**
>   - [ ] All tests pass on the local machine when run in a development environment set up according to the package's documentation.
>   - [ ] Tests cover essential functions of the package and a reasonable range of inputs and conditions.

If I may: tests are a critical tool for assessing the portability of a package across different architectures and environments, and reviews are an excellent opportunity to check portability.
I agree that a single test failing in a completely different environment is not a reason to discard a submission, but failing tests may be a hint of real issues to fix in the package.
If I understand correctly, for the crowsetta review the issue was that the latest version of the tests (cloned from GitHub) was ahead of the version of the installed package, am I wrong?
In that case, I'd rather be explicit about the exact commit reviewers should check out to verify that the tests pass.
This means:

  1. Ask the author to fill in the "version submitted" field (maybe allowing the use of a tag to reference the git snapshot).
  2. Modify the template to read:
     - [ ] All tests pass on the local machine for the version corresponding to the submitted tag
  3. Document for reviewers that once the repository is cloned, they need to `git checkout <submitted-tag>` in order to run the right tests (see the sketch below).
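
That reviewer-side flow might look something like this; a minimal sketch, with `<org>`, `<package>`, and `<submitted-tag>` as placeholders:

```shell
git clone https://github.com/<org>/<package>.git
cd <package>

# run the tests for the exact snapshot that was submitted for review
git checkout <submitted-tag>
python -m pytest
```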

Does this make sense to you?

cmarmo (Member) commented Mar 1, 2023

> And also vocalpy/crowsetta#222, where the other reviewer had a similar issue -- they could not run the tests because they installed from conda-forge, so a pytest fixture was missing (pytest is an optional test dependency).

Indeed, so reviewers should also be advised to install pytest, as this is the bare minimum needed to run the tests... sorry, I forgot this one... 😅
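
e.g., at minimum something like the following, assuming the package's tests use pytest:

```shell
python -m pip install pytest
```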

lwasser (Member) commented Mar 14, 2023

@cmarmo i'm sorry i've been so slow in responding to issues here - so much has been going on. But i think your suggestions are really great ones.

I think there is also space for some of this to go into our guide where we discuss tests!
But i absolutely agree that we want to modify the template as well.

For example, a submission header currently looks like this:

> Repository Link: https://github.com/Robaina/Pynteny
> Version submitted: v0.0.5
> Editor: @arianesasso

does it make sense to add another item to that header - a link to install the submitted version, or something like that? i plan to turn that template into a form, especially after the feedback that @Midnighter provided here, so i'm just wondering if this is an opportunity to work some other changes into our form. and maybe in the review-started template we remind reviewers to use the submitted version to run tests, or something to that effect?

let me know if i'm overcomplicating things - it is the end of the day here :)

lwasser (Member) commented Mar 14, 2023

my apologies - we are talking about the review template for DOING A REVIEW, not the submission template. i conflated two issues.

lwasser (Member) commented Mar 14, 2023

friends - does this relate to #197? just going through issues here and I see some overlap.

cmarmo (Member) commented Mar 20, 2023

> friends - does this relate to #197? just going through issues here and I see some overlap.

It is related.
To better separate them, I think my

> 1. Ask the author to fill in the "version submitted" field (maybe allowing the use of a tag to reference the git snapshot).

is more relevant in #197.

This one is about clarifying the process for reviewers:

  1. updating the template
  2. updating the documentation

cmarmo (Member) commented Mar 20, 2023

I have created #203. WDYT?
