
Plenoptic #150

Open
16 of 30 tasks
billbrod opened this issue Nov 17, 2023 · 36 comments
Comments

@billbrod

billbrod commented Nov 17, 2023

Submitting Author: Billy Broderick (@billbrod)
All current maintainers: @billbrod (there are several other authors / contributors, but I'm the only one committed to maintaining)
Package Name: Plenoptic
One-Line Description of Package: a Python library for model-based synthesis of perceptual stimuli
Repository Link: https://github.com/labForComputationalVision/plenoptic/
Version submitted: v1.0.2
Editor: @sneakers-the-rat
Reviewer 1: @NickleDave
Reviewer 2: @DanielaPamplona
Reviewers assigned: 2023-12-13
Reviews due: 2024-01-19
Archive: TBD
JOSS DOI: TBD
Version accepted: TBD
Date accepted (month/day/year): TBD


Code of Conduct & Commitment to Maintain Package

Description

  • Include a brief paragraph describing what your package does: plenoptic provides tools to help researchers understand their model by synthesizing novel informative stimuli (e.g., images, movies, or sound clips, depending on the model), which help build intuition for what features the model ignores and what it is sensitive to.

Scope

  • Please indicate which category or categories.
    Check out our package scope page to learn more about our
    scope. (If you are unsure of which category you fit, we suggest you make a pre-submission inquiry):

    • Data retrieval
    • Data extraction
    • Data processing/munging
    • Data deposition
    • Data validation and testing
    • Data visualization¹
    • Workflow automation
    • Citation management and bibliometrics
    • Scientific software wrappers
    • Database interoperability

Domain Specific & Community Partnerships

- [ ] Geospatial
- [ ] Education
- [ ] Pangeo

Community Partnerships

If your package is associated with an
existing community please check below:

  • For all submissions, explain how and why the package falls under the categories you indicated above. In your explanation, please address the following points (briefly, 1-2 sentences for each):

I am not sure which of the above categories I fit under, but the discussion in my pre-submission inquiry (#97) assured me that I'm in scope.

  • Who is the target audience and what are scientific applications of this package?

Researchers in vision science, neuroscience, and machine learning. The goal is to generate novel stimuli (images, videos, audio) that researchers can then use in new experiments to better understand their computational models.

  • Are there other Python packages that accomplish the same thing? If so, how does yours differ?

Not that I'm aware of. There are several bits of related research code, but no packages. Research code includes: https://github.com/ArturoDeza/Fast-Texforms, https://github.com/brialorelle/texformgen, https://github.com/ProgramofComputerGraphics/PooledStatisticsMetamers, https://github.com/freeman-lab/metamers/, among others.

  • If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted:

#97 , @NickleDave

Technical checks

For details about the pyOpenSci packaging requirements, see our packaging guide. Confirm each of the following by checking the box. This package:

  • does not violate the Terms of Service of any service it interacts with.
  • uses an OSI approved license.
  • contains a README with instructions for installing the development version.
  • includes documentation with examples for all functions.
  • contains a tutorial with examples of its essential functions and uses.
  • has a test suite.
  • has continuous integration setup, such as GitHub Actions, CircleCI, and/or others.

Publication Options

JOSS Checks
  • The package has an obvious research application according to JOSS's definition in their submission requirements. Be aware that completing the pyOpenSci review process does not guarantee acceptance to JOSS. Be sure to read their submission requirements (linked above) if you are interested in submitting to JOSS.
  • The package is not a "minor utility" as defined by JOSS's submission requirements: "Minor ‘utility’ packages, including ‘thin’ API clients, are not acceptable." pyOpenSci welcomes these packages under "Data Retrieval", but JOSS has slightly different criteria.
  • The package contains a paper.md matching JOSS's requirements with a high-level description in the package root or in inst/.
  • The package is deposited in a long-term repository with the DOI:

Note: JOSS accepts our review as theirs. You will NOT need to go through another full review. JOSS will only review your paper.md file. Be sure to link to this pyOpenSci issue when a JOSS issue is opened for your package. Also be sure to tell the JOSS editor that this is a pyOpenSci reviewed package once you reach this step.

Does the JOSS and pyOpenSci review happen simultaneously? I was going to add the paper.md and submit to JOSS after going through this review.

Are you OK with Reviewers Submitting Issues and/or pull requests to your Repo Directly?

This option will allow reviewers to open smaller issues that can then be linked to PR's rather than submitting a more dense text based review. It will also allow you to demonstrate addressing the issue via PR links.

  • Yes I am OK with reviewers submitting requested changes as issues to my repo. Reviewers will then link to the issues in their submitted review.

Confirm each of the following by checking the box.

  • I have read the author guide.
  • I expect to maintain this package for at least 2 years and can help find a replacement for the maintainer (team) if needed.

Please fill out our survey

P.S. Have feedback/comments about our review process? Leave a comment here

Editor and Review Templates

The editor template can be found here.

The review template can be found here.

Footnotes

  1. Please fill out a pre-submission inquiry before submitting a data visualization package.

@NickleDave
Contributor

Hi @billbrod, thank you for your detailed submission.

Just letting you know we are working on getting the review started.

@sneakers-the-rat will be the editor and they are going to do the initial checks.

We're still looking for one reviewer (the other one is me 🙂 -- sorry @sneakers-the-rat for stealing your thunder 😝 )

@sneakers-the-rat
Collaborator

sneakers-the-rat commented Dec 5, 2023

Editor response to review:


Editor comments

👋 Hi @NickleDave and @DanielaPamplona Thank you for volunteering to review for pyOpenSci!

Please fill out our pre-review survey

Before beginning your review, please fill out our pre-review survey. This helps us improve all aspects of our review and better understand our community. No personal data will be shared from this survey - it will only be used in an aggregated format by our Executive Director to improve our processes and programs.

  • reviewer 1 survey completed.
  • reviewer 2 survey completed.
  • reviewer 3 (if applicable)

The following resources will help you complete your review:

  1. Here is the reviewers guide. This guide contains all of the steps and information needed to complete your review.
  2. Here is the review template that you will need to fill out and submit
    here as a comment, once your review is complete.

Please get in touch with any questions or concerns! Your review is due: Friday, January 5th, 2024

Reviewers: @NickleDave and @DanielaPamplona
Due date: Friday, January 5th, 2024

@sneakers-the-rat
Collaborator

We have located reviewers! Thank you @NickleDave and @DanielaPamplona for your time, and looking forward to digging into this package with you all!

Please see the above post with links to the reviewer's guide and next steps.

The next step will be to copy the review checklist template as a comment here, and as you work through the checklist the authors have indicated it is ok to raise issues and make pull requests with problems and questions as they arise. Since github issues don't have threading, we will use issues raised in the repo as "threads" to keep discussion orderly :). When you raise an issue or PR, please link back to this issue so that we can track issues raised in relationship to the review. It is up to the authors if they would like to tag the issues or name them in any specific way :).

The reviewers should feel free to split up different focuses or roles depending on what plays to your strengths - both will complete the basic checklist, but you can decide if you'd rather focus on docs, tests, internal architecture, onboarding, and so on for more detailed focus. The authors can also let the reviewers know if there are any specific areas they would like feedback on or close attention on.

Let's shoot for reviews in by Friday, January 5th, 2024 - gl;hf to all ♥️

@billbrod
Author

Thanks all, looking forward to this process! While I'm looking forward to all kinds of feedback, I am specifically interested in feedback on the docs, especially the motivation for the included methods and explanations of what they can be used for scientifically -- I think that's the weakest (and most important?) part of our documentation right now. Since all the primary authors of the project are from the same lab, we talked a lot offline and share a similar background.

@DanielaPamplona

Hi,
Just to let you know that I am looking forward to this. I am silent because I did not manage to start the review, but as soon as I start, I am sure I will have tons of things to say.
Cheers,

@sneakers-the-rat
Collaborator

This is ur friendly neighborhood editor here to say that I hope the authors and reviewers are taking a much deserved break, and to keep a reasonable and humane timeline I am making the call to reset the review window to start Jan 1 - let's say reviews in by January 19th, 2024

Have a lovely rest of the holiday

@sneakers-the-rat
Collaborator

Hello dear reviewers @NickleDave @DanielaPamplona - checking in and seeing how things are going, if any guidance is needed or if i can be of any help getting reviews rolling!

@DanielaPamplona

DanielaPamplona commented Jan 9, 2024 via email

@sneakers-the-rat
Collaborator

sneakers-the-rat commented Jan 10, 2024

@DanielaPamplona

I do have a question... What is a "vignette" ?

Completely fair question. That's a term from R (see Vignettes), so more than reasonable to have not seen it before. They are sort of like longer, narrative code examples, but they don't have a strict definition - if it's easier, you can think of them as synonymous with examples.

If one were to draw the distinction, an example might look like something you'd find in a docstring like:

def myFunction(arg):
    """
    Do a function thing

    Args:
        arg: The thing we give this function

    Example:
        To use this function, give it an argument!

        >>> arg = 1
        >>> print(myFunction(arg))
        5

        Any argument will do!

        >>> arg = 'new argument'
        >>> print(myFunction(arg))
        5

    """
    # TODO: use the argument
    return 5

Where a vignette would be a longer-form demonstration of major functionality of the package, often with real-world use that show multiple things working together.

From a quick look, everything I see in the examples directory could be called a vignette, but I also don't think you need to split hairs too-too finely there :)
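To make the contrast concrete, a vignette-style counterpart to the docstring example above might read like a short narrated script. This is pure toy code (nothing from plenoptic), just to show the multi-step, narrative flavor:

```python
def myFunction(arg):
    """The toy function from the docstring example above."""
    return 5

# A vignette narrates an end-to-end task, with several pieces working together:
# 1. prepare some inputs,
inputs = ["first", "second", "third"]
# 2. apply the function to each,
results = [myFunction(x) for x in inputs]
# 3. then summarize, as a real analysis would.
total = sum(results)
print(total)  # 15
```

The point is less the code than the narration around it: each step is motivated in prose, the way a tutorial notebook would do it.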

@DanielaPamplona

DanielaPamplona commented Jan 10, 2024 via email

@NickleDave
Contributor

Thank you for checking in @sneakers-the-rat and keeping us on track -- no questions from me yet!
I just started a new job but have this on my to-do list and expect to get through the checklist this week, will raise issues with @billbrod if needed as I do and link them to this

@DanielaPamplona

Hi,
I have a small question for the authors ( @billbrod ), but this is a bit important. Please answer this as soon as possible. I will submit a detailed review later.
I am trying to reproduce the "quick start" and I cannot find the image "einstein.pgm". As a matter of fact, I cannot find the folder "data". Is it somewhere in the package, or did you forget to include it? Without it, I cannot continue the review of this section...
Thank you!
Daniela Pamplona

@NickleDave
Contributor

NickleDave commented Jan 17, 2024

Hi @DanielaPamplona I find it here:
https://github.com/LabForComputationalVision/plenoptic/blob/main/data/256/einstein.pgm
(I searched on GitHub)

I think the data folder is this one:
https://github.com/LabForComputationalVision/plenoptic/tree/main/data

So the data folder would be in the root of the project folder (/home/you/somewhere/plenoptic/) if you clone the project with git.

Looks like there's a couple of fixtures in the tests that expect it to live there (GitHub search found this for me too)
https://github.com/LabForComputationalVision/plenoptic/blob/de8f9308d0fcd24050a93b326390eb4444b9aff2/tests/conftest.py#L11
https://github.com/LabForComputationalVision/plenoptic/blob/de8f9308d0fcd24050a93b326390eb4444b9aff2/tests/conftest.py#L33

Does that help?
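As an aside, one way to make fixtures like those robust to the working directory is to resolve the data path relative to the test file itself rather than wherever pytest was invoked. A sketch of the pattern (the layout is assumed from the links above; this is not plenoptic's actual conftest):

```python
from pathlib import Path

# Resolve data/ relative to this file, so tests find it no matter which
# directory pytest is invoked from. In tests/conftest.py, __file__ points
# at the conftest itself, so two .parent hops reach the repo root.
DATA_DIR = Path(__file__).resolve().parent.parent / "data"
EINSTEIN = DATA_DIR / "256" / "einstein.pgm"
print(EINSTEIN.name)  # einstein.pgm
```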

@sneakers-the-rat
Collaborator

Thanks y'all - yes this is the kind of thing worth raising an issue about (with a link back to this issue). As reviewers, you are in the position of being an "average user," so if you can't figure out how something works, then some improvement can likely be made in the docs, code, or in this case packaging! Go review team go <3

@NickleDave
Contributor

NickleDave commented Jan 19, 2024

Hi all, please find below my review.

@billbrod I would like to go through some of the docs in more detail, to give feedback on how domain-specific material is presented; I will raise additional issues then. But I want to get this in on time, and raise the initial issues for required items below.
Boxes that are not yet checked will be explained in "required", and I will raise an issue for each item.

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README.
  • Installation instructions: for the development version of the package and any non-standard dependencies in README.
  • Vignette(s) demonstrating major functionality that runs successfully locally.
  • Function Documentation: for all user-facing functions.
  • Examples for all user-facing functions.
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING.
  • Metadata including author(s), author e-mail(s), a url, and any other relevant metadata e.g., in a pyproject.toml file or elsewhere.
Readme file requirements

The package meets the readme requirements below:

  • Package has a README.md file in the root directory.

The README should include, from top to bottom:

  • The package name
  • Badges for:
    • Continuous integration and test coverage,
    • Docs building (if you have a documentation website),
    • A repostatus.org badge,
    • Python versions supported,
    • Current package version (on PyPI / Conda).

NOTE: If the README has many more badges, you might want to consider using a table for badges: see this example. Such a table should be wider than it is high. (Note that a badge for pyOpenSci peer-review will be provided upon acceptance.)

  • Short description of package goals.
  • Package installation instructions
  • Any additional setup required to use the package (authentication tokens, etc.)
  • Descriptive links to all vignettes. If the package is small, there may only be a need for one vignette which could be placed in the README.md file.
    • Brief demonstration of package usage (as it makes sense - links to vignettes could also suffice here if package description is clear)
  • Link to your documentation website.
  • If applicable, how the package compares to other similar packages and/or how it relates to other packages in the scientific ecosystem.
  • Citation information

Usability

Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole.
Package structure should follow general community best-practices. In general please consider whether:

  • Package documentation is clear and easy to find and use.
  • The need for the package is clear
  • All functions have documentation and associated examples for use
  • The package is easy to install

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests:
    • All tests pass on the reviewer's local machine for the package version submitted by the author. Ideally this should be a tagged version making it easy for reviewers to install.
    • Tests cover essential functions of the package and a reasonable range of inputs and conditions.
  • Continuous Integration: Has continuous integration setup (We suggest using Github actions but any CI platform is acceptable for review)
  • Packaging guidelines: The package conforms to the pyOpenSci packaging guidelines.
    A few notable highlights to look at:
    • Package supports modern versions of Python and not End of life versions.
    • Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)

For packages also submitting to JOSS

Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.

The package contains a paper.md matching JOSS's requirements with:

  • A short summary describing the high-level functionality of the software
  • Authors: A list of authors with their affiliations
  • A statement of need clearly stating problems the software is designed to solve and its target audience.
  • References: With DOIs for all those that have one (e.g. papers, datasets, software).

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing: 7


Review Comments

Required

In this section I will put to-dos for items in the checklist above that are not checked.

Suggested

In this section I will put suggested changes. Some of these start to get very nit-picky, so I won't raise issues for all of them.

  • documentation

  • README

    • Badge section
      • In the badge section, the link to Zenodo is just text that says DOI; why not use the Zenodo badge?
        • DOI
    • citation
      • There is a citation section but it just links to a list of citations in the docs. I would suggest the following
        • Directly state in this section of the README how the code should be cited. E.g., "If you use the code, please cite both the code by DOI as well as the JOV paper. If you are not using the code directly, then cite the paper." I would also take advantage of the citation.cff and, immediately after, say "you can also click on 'cite this repository' on the right side of the page to get a copyable citation". I would then include both the Zenodo badge with the code DOI and a bibtex citation for the paper in this section, just in case someone misses the part about clicking on the right side of the page. Finally, I would say "for more in-depth citation lists, go here (link to the citation page)". DOC: clearly state how to cite in citation section of README and docs, point to citation.cff [pyOpenSci review] LabForComputationalVision/plenoptic#240
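For reference, the "cite this repository" button mentioned above is driven by a CITATION.cff file in the repo root; a minimal one looks roughly like this (all field values below are placeholders, not plenoptic's actual metadata):

```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "plenoptic"
authors:
  - family-names: "Broderick"
    given-names: "Billy"
# placeholder DOI; the real Zenodo concept DOI goes here
doi: 10.5281/zenodo.0000000
```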
  • API

    • In general, accessing core classes and functions is verbose. I have been guilty of this too and I know this would in a sense be a breaking change, but I would strongly suggest importing core classes and functions at the top-level if possible. I want to be able to just type po.Metamer and po.remove_grad. It could be fine to leave them where they are but just import them
    • Remember that file structure is an implementation detail and flat is better than nested
    • Reasonable people can disagree about this but I prefer to avoid namespaces with names like tools or util, because everything can go there, and as it stands now, you are just forcing users to type tools a lot. I would suggest moving modules in tools up a level and importing the crucial functions directly into the plenoptic namespace from those modules. E.g. plenoptic.remove_grad. Or maybe plenoptic.grad.remove if you must have a module for it (seems weird that this function lives in validate).
  • it's also confusing and non-standard to internally import a sub-package at the root package level and change its name: e.g. synthesize -> synth and simulate -> simul. I kept thinking I missed an import statement with aliases, and I wasn't sure if there was something else going on with the code organization that I didn't know about. I would strongly suggest removing these internal aliases. If they are just too long, rename the module itself.
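The flattening suggested above is usually done with plain re-exports in the top-level __init__.py (e.g. `from .synthesize.metamer import Metamer`; the exact module paths here are guesses). A runnable toy demonstration of the pattern, using stand-in module names rather than plenoptic itself:

```python
import sys
import types

# Stand-in for a deeply nested module like plenoptic.synthesize.metamer:
deep = types.ModuleType("pkg.synthesize.metamer")
class Metamer:
    """Placeholder class standing in for the real Metamer."""
deep.Metamer = Metamer
sys.modules["pkg.synthesize.metamer"] = deep

# The top-level __init__ just re-exports the name; file structure stays put:
pkg = types.ModuleType("pkg")
pkg.Metamer = deep.Metamer  # i.e. `from pkg.synthesize.metamer import Metamer`
sys.modules["pkg"] = pkg

import pkg as po  # users now get the short spelling
print(po.Metamer is deep.Metamer)  # True
```

The same two-line re-export in the real package __init__ would give users `po.Metamer` without any renaming of modules.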

  • installation

  • documentation

  • functionality / presentation
    I tested by running notebooks, mostly.
    I kept notes for the key notebooks as I read through them. This starts to get quite nitpicky, please know I am trying to help with presentation of material. Going to add here for now but I could move this to individual issues.

    • quickstart
      • the function load_images in the notebook did not work for me at first
      • I read the function but it uses op -- I would suggest making the code as readable as possible by not abbreviating and using absolute names instead, esp. for standard library functions; code is read much more than it's written
      • The problem was that the notebook expects to be run from inside the notebook's directory, and so it has a hardcoded relative path
        • a fix for this would be to run notebooks from root
        • and/or to include images in the package as discussed in (link issue)
      • I would also suggest re-writing this with pathlib; pathlib.Path().exists()
      • and raising an explicit error like "file does not exist: {}"
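The pathlib + explicit-error suggestion might look something like this sketch (load_image and its behavior here are hypothetical, not plenoptic's actual loader):

```python
from pathlib import Path

def load_image(path):
    """Load raw image bytes, failing loudly on a bad path (sketch only)."""
    path = Path(path)
    if not path.exists():
        # an explicit error beats a confusing failure further downstream
        raise FileNotFoundError(f"file does not exist: {path}")
    return path.read_bytes()  # stand-in for real image decoding
```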
  • packaging guidelines

    • pyproject.toml
      • I would change trove classifiers
        • the package is not in beta anymore if you are using it in the lab
        • you can also specify specific pythons: 3.7, etc
      • You probably want to raise the minimum requires-python and adhere to SPEC 0 since you are depending on core scientific python libraries
      • right now core packages are at 3.10-3.12, but if you need numba then you might need to raise it to 3.9 and test on 3.9-3.11 since numba is not out for 3.12 just yet
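In pyproject.toml terms, the classifier and version suggestions above would land roughly here (exact values depend on what plenoptic's dependencies support at release time, so treat these as illustrative):

```toml
[project]
# raise the floor in line with SPEC 0 / what numba supports
requires-python = ">=3.9"
classifiers = [
    "Development Status :: 5 - Production/Stable",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
]
```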

Reading notes

conceptual introduction
  • ""> Synthesis is a framework for exploring models" -- what kind of models? (Answer: cognitive/neural!)

  • index

  • "descarded" -> "discarded"

quickstart
  • "Models can be really simple, as this demonstrates."
    • I think you want to say something here like "Models can be really simple, as we show in the next section"
    • "It needs to inherit torch.nn.Module and just needs two methods: init (so it’s an object) and forward (so it can take an image)."
      • I would explicitly say here "In plenoptic, all models are implemented using the library PyTorch. This means that every model must subclass the torch.nn.Module class and implement forward"
        • At the risk of sounding even more pedantic than usual, I think you don't strictly need an __init__ method -- if you write class MyModel(torch.nn.Module) then you will already have Module's __init__ method
  • As per API notes above, I would suggest moving plenoptic.simulate.canonical_computations.filters to plenoptic.filters or even to plenoptic if it is frequently used. If you don't want to move code around you could just add a filters package and import there from the lengthy namespace
  • explain in models why inputs are 4d, like you do in conceptual -- define the dimensions
  • suggest adding headings throughout quickstart
    • "Stimuli"
    • "Models"
    • "Synthesis"
      • state briefly here how we optimize / compute loss, since this is a key concept. Link to relevant papers + additional tutorial/how-to
    • "Turn off gradient"
      • If this function is used frequently why not have it at package level? plenoptic.remove_grad
      • Is it absolutely necessary to detach all parameters in the model? Doesn't this limit the ability to use the library with arbitrary models, because a user would be surprised to find that some model they trained and wanted to continue training suddenly had all the parameters detached? Would it be sufficient to just set requires_grad of the parameters to False, e.g. by calling Model.eval()? Or what about using a context manager inside the Metamer class that sets requires_grad of model parameters to false just during the optimization step?
    • "Metamers"
    • "Eigendistortions"
  • It was really unclear to me that we start with a randomly-initialized image. The part of the quickstart that uses the Marie Curie image makes this clearer, but I would say it explicitly in the first part
    • I would also explicitly say something like "at every step of optimization we compare the image, which begins as randomly initialized, with a representation provided by the model, and we use the loss function to minimize the difference between the representation of the metamer and the representation of the reference image."
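The context-manager idea floated in the "Turn off gradient" note could be sketched like this; frozen_params is a hypothetical helper (not part of plenoptic or torch), demonstrated on a dummy model so the snippet stands alone. With a real torch.nn.Module you would flip the flags with p.requires_grad_(flag) instead of plain attribute assignment:

```python
from contextlib import contextmanager

@contextmanager
def frozen_params(model):
    """Temporarily disable gradients for all parameters, then restore them."""
    saved = [p.requires_grad for p in model.parameters()]
    for p in model.parameters():
        p.requires_grad = False
    try:
        yield model
    finally:
        # restore each parameter's original setting, even if an error occurred
        for p, flag in zip(model.parameters(), saved):
            p.requires_grad = flag

# Dummy stand-ins mimicking the torch parameter interface:
class FakeParam:
    def __init__(self):
        self.requires_grad = True

class FakeModel:
    def __init__(self):
        self._params = [FakeParam(), FakeParam()]
    def parameters(self):
        return iter(self._params)

model = FakeModel()
with frozen_params(model):
    print(any(p.requires_grad for p in model.parameters()))  # False
print(all(p.requires_grad for p in model.parameters()))      # True (restored)
```

This addresses the reviewer's surprise scenario: the user's model comes out of synthesis with its requires_grad flags exactly as they went in.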

@sneakers-the-rat
Collaborator

Excellent, thank you David! @DanielaPamplona checking in if I can be of any assistance, or if you have a rough estimate of timeline? No rush! It looks like the authors have a few things to work on in the meantime :)

@DanielaPamplona

DanielaPamplona commented Jan 19, 2024 via email

@sneakers-the-rat
Collaborator

Please take your time to get well, I only ask so we keep each other apprised, not trying to rush you!

@billbrod
Author

Thanks all! I've got a workshop in 2 weeks (for a different software package) that I'm frantically preparing for, so I most likely won't make much progress on these revisions before then. So no rush Daniela!

@DanielaPamplona

Hi,
So here is my review. Sorry for the delay.


Legend:
Y = Yes
N = No
~ = Not applicable
+/- = more or less/ needs improvement

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • [Y] As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • [Y] A statement of need clearly stating problems the software is designed to solve and its target audience in README.
  • [Y] Installation instructions: for the development version of the package and any non-standard dependencies in README.
  • [~] Vignette(s) demonstrating major functionality that runs successfully locally.
  • [Y] Function Documentation: for all user-facing functions. [Did not search everything; randomly sampled 50% and it was always documented]
  • [+/-] Examples for all user-facing functions. [Did not search everything; randomly sampled 50% and examples were sometimes missing.]
  • [Y] Community guidelines including contribution guidelines in the README or CONTRIBUTING.
  • [Y] Metadata including author(s), author e-mail(s), a url, and any other relevant metadata e.g., in a pyproject.toml file or elsewhere.

Readme file requirements
The package meets the readme requirements below:

  • [Y] Package has a README.md file in the root directory.

The README should include, from top to bottom:

  • [Y] The package name
  • Badges for:
    • [Y] Continuous integration and test coverage,
    • [Y] Docs building (if you have a documentation website),
    • [N] A repostatus.org badge,
    • [Y] Python versions supported,
    • [Y] Current package version (on PyPI / Conda).

NOTE: If the README has many more badges, you might want to consider using a table for badges: see this example. Such a table should be wider than it is high. (Note that a badge for pyOpenSci peer-review will be provided upon acceptance.)

  • [Y] Short description of package goals. [I would emphasize the fact that the package is not only for generating images, but also for analysing them. Also use the term "generate/generative", which is more appealing.]
  • [Y] Package installation instructions.
  • [~] Any additional setup required to use the package (authentication tokens, etc.)
  • [+/-] Descriptive links to all vignettes. If the package is small, there may only be a need for one vignette which could be placed in the README.md file.
  • [Y] Brief demonstration of package usage (as it makes sense - links to vignettes could also suffice here if package description is clear) [The quick start is not that "brief"]
  • [N] Link to your documentation website.
  • [~] If applicable, how the package compares to other similar packages and/or how it relates to other packages in the scientific ecosystem.
  • [Y] Citation information

Usability

Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole.
Package structure should follow general community best-practices. In general please consider whether:

  • [+/-] Package documentation is clear and easy to find and use. [The documentation webpage is not clear. See suggestion below]
  • [+] The need for the package is clear
  • [+/-] All functions have documentation and associated examples for use. [Did not search everything; randomly sampled 50% and examples were sometimes missing.]
  • [Y] The package is easy to install

Functionality

  • [Y] Installation: Installation succeeds as documented.
  • [Y] Functionality: Any functional claims of the software have been confirmed.
  • [~] Performance: Any performance claims of the software have been confirmed.
  • Automated tests:
    • [Y] All tests pass on the reviewer's local machine for the package version submitted by the author. Ideally this should be a tagged version making it easy for reviewers to install. [Did not make automated tests, but passed all tests]
    • [Y] Tests cover essential functions of the package and a reasonable range of inputs and conditions.
  • [Y] Continuous Integration: Has continuous integration setup (We suggest using Github actions but any CI platform is acceptable for review)
  • Packaging guidelines: The package conforms to the pyOpenSci packaging guidelines.
    A few notable highlights to look at:
    • [Y] Package supports modern versions of Python and not End of life versions.
    • [Y] Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)

For packages also submitting to JOSS

  • [~] The package has an obvious research application according to JOSS's definition in their submission requirements.

Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.

The package contains a paper.md matching JOSS's requirements with:

  • [N] A short summary describing the high-level functionality of the software
  • [N] Authors: A list of authors with their affiliations
  • [N] A statement of need clearly stating problems the software is designed to solve and its target audience.
  • [N] References: With DOIs for all those that have one (e.g. papers, datasets, software).

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing:
8H

Review Comments

plenoptic is a python package for the generation and analysis of model-based visual stimuli.
The package is helpful and contributes to better, more FAIR science; it is especially useful for researchers who are not programming experts.
I found the package well-organized and clean. Please see the checklist above. Some things can be improved, especially the documentation webpage. In general, I think the package is almost ready to be published; the comments below are minor and easy to fix.

  • README:

    • The word "help" appears too many times; consider synonyms such as "support" or "contribute". I also think the README should mention the model analysis/interpretation functionality here.
    • On the GitHub page, some images have a transparent background, so they look strange, like this one:
      [Screenshot from 2024-01-04 18-00-38]
  • Documentation webpage: https://plenoptic.readthedocs.io/
    Observations on the form

    1) The documentation webpage is confusing and not well organized, but it can easily be improved. A possible organization would be:

      ->Presentation
          -> Introduction
          -> Installation
          -> Quick start
          -> Citation Guide
    
      ->Modules
          -> Metric
          -> Simulate
          -> Synthesize
          -> tools
          -> Display and animate functions
          
      ->Tutorials and examples
          -> Eigendistortions 
          ...
          -> Reproducing Wang and Simoncelli
          -> Extending existing synthesis objects
          
      ->Advanced Usage
          -> Model requirements
          -> Synthesis object design
          -> Extending...
          -> API
          -> Tips and tricks
          -> Reproducibility
    

    In the "Modules" section, each module's general goal and contents would be presented, possibly with some code snippets. Not all modules need to be covered, only the most relevant and/or the hardest to use. The snippets can be taken from the tutorials and examples that already exist; in addition, the module and tutorial sections could link to each other.
    In the "Tutorials and examples" section would go all the tutorials already on the website.

    1. The tutorials/examples are too verbose. I would prefer something sharper. For instance:

      • In " Extending existing synthesis objects" it is written:

      "Once you are familiar with the existing synthesis objects included in plenoptic, you may wish to change some aspect of their function. For example, you may wish to change how the po.synth.MADCompetition initializes the MAD image or alter the objective function of po.synth.Metamer. While you could certainly start from scratch or copy the source code of the object and alter them directly, an easier way to do so is to create a new sub-class: an object that inherits the synthesis object you wish to modify and over-writes some of its existing methods."

      It could be written (this is an example, it is not to be taken literally):

      Functions of synthesis objects can be changed without altering the source code directly. To do so, simply create a new sub-class and over-write the existing methods. Below is an example of changing the po.synth.MADCompetition initialization with the MAD image and the objective function of po.synth.Metamer.

      • The "Eigendistortions" section starts with "In this tutorial we will cover:...", I think this is a great way of starting, and all the tutorials should start like that.
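
      As an illustration of the sub-classing pattern recommended in that tutorial, a minimal, library-independent sketch might look like the following. Note that the `Metamer` class and its `objective_function` method here are simplified stand-ins, not plenoptic's actual API:

      ```python
      # Sketch: customizing a synthesis object by sub-classing and
      # over-writing one method, instead of editing library source.
      # `Metamer` below is a stand-in for po.synth.Metamer; the real
      # class and method names may differ.

      class Metamer:
          """Stand-in for the library's synthesis object."""
          def objective_function(self, x, y):
              # default objective: squared error
              return (x - y) ** 2

      class L1Metamer(Metamer):
          """Same synthesis object, but with an L1 objective."""
          def objective_function(self, x, y):
              return abs(x - y)

      print(Metamer().objective_function(3.0, 1.0))    # 4.0
      print(L1Metamer().objective_function(3.0, 1.0))  # 2.0
      ```

      Everything else the synthesis object does is inherited unchanged; only the over-written method behaves differently.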

    Observations on the contents
    * Installation: One should not mix pip and conda.
    * Installation: There is a problem when using plenoptic with jupyter notebook, see issue: problem when installing with jupyter notebook  LabForComputationalVision/plenoptic#242
    * Model requirements: Use a numbered list instead of bullets, and link to an example. The citation style is not consistent with the one used in the conceptual information.
    * Quick start: it is not really quick. It could have finished after cell 9 of the code.
    * Code: many modules are initialized by importing all the functions from their submodules; for instance, plenoptic/tools/__init__.py contains:

              from .data import *
              from .conv import *
              from .signal import *
              from .stats import *
              from .display import *
              from .straightness import *
    
              from .optim import *
              from .external import *
              from .validate import remove_grad
    
              from . import validate
    
          You can use lazy_loader (https://pypi.org/project/lazy_loader/) to automate this process.
    
      * If "all synthesis methods will raise a ValueError if given a model with any learnable parameters", then the helper function that you provide should be run automatically when a model is created.
    
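To make the lazy-loading suggestion concrete, here is a small stdlib-only demo of the PEP 562 module-level `__getattr__` mechanism that lazy_loader automates. The package and submodule names below are illustrative stand-ins, not plenoptic's actual layout:

```python
import importlib
import types

# Build a throwaway module object so the mechanism can be shown without
# creating files; in a real package this hook lives in __init__.py.
pkg = types.ModuleType("demo_pkg")
_lazy_submodules = {"math", "json"}  # stand-ins for .data, .conv, ...

def _lazy_getattr(name):
    if name in _lazy_submodules:
        # A real package would use importlib.import_module(f".{name}", pkg.__name__)
        return importlib.import_module(name)
    raise AttributeError(f"module {pkg.__name__!r} has no attribute {name!r}")

pkg.__getattr__ = _lazy_getattr  # PEP 562 module-level __getattr__ hook

# Submodules are imported only on first attribute access:
print(pkg.math.sqrt(9.0))  # 3.0
```

lazy_loader packages this pattern (plus `__dir__` and `__all__` handling) behind a single call, per its PyPI page, so the `from .x import *` lines can be replaced without changing the public API.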
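The automatic validation suggested in the last bullet could be sketched roughly like this, using pure-Python toy stand-ins (plenoptic's real helper operates on torch modules; the class and function names here are hypothetical):

```python
# Toy stand-ins for a torch-style model with parameters; illustrative only.
class ToyParam:
    def __init__(self, requires_grad):
        self.requires_grad = requires_grad

class ToyModel:
    def __init__(self):
        self._params = [ToyParam(True), ToyParam(False)]
    def parameters(self):
        return iter(self._params)

def validate_no_grad(model):
    """Raise, as the synthesis methods do, if any parameter is learnable."""
    if any(p.requires_grad for p in model.parameters()):
        raise ValueError("model has learnable parameters; remove gradients first")

def remove_grad(model):
    """Toy version of the helper: mark all parameters as non-learnable."""
    for p in model.parameters():
        p.requires_grad = False

m = ToyModel()
remove_grad(m)
validate_no_grad(m)  # passes after remove_grad; would raise before
```

Running a check like `validate_no_grad` automatically at model-construction time (or at least at the start of every synthesis method) would surface the problem earlier than a mid-synthesis ValueError.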

For now, that's it! I will look in more detail at the tutorials in the second round, if there is one.

Cheers!

@billbrod
Copy link
Author

Thanks Daniela! Like I said, it'll be a bit before i can dedicate time to most of these, but I wanted to try and solve the jupyter issue. However, I can't reproduce the issue and I'm not sure how to fix this -- @NickleDave or @sneakers-the-rat do you have any ideas?

@sneakers-the-rat
Copy link
Collaborator

Thank you so much to our reviewers for the amount of time and care they've given to this package! There are a lot of very helpful suggestions here, and it seems relatively clear to me which of these are review blockers and which are just tips, so I think we have a clear path forward once @billbrod has some time to address things. Please take your time responding to the issues, and also feel free to pop back in here if anything is unclear.

I took a look at the jupyter issue and think there are a few possible routes there - specify -c conda-forge, reinstalling requests/charset-normalizer, or specifying jupyterlab seem like they would all fit the bill.

I'll check back in a few weeks to see how progress is going, thanks again to the reviewers for all their work <3

@NickleDave
Copy link
Contributor

NickleDave commented Mar 1, 2024

Hi @sneakers-the-rat and @DanielaPamplona, I'm just commenting here for @billbrod that he is on paternity leave, as of last night when his wife went into labor!

So I think we will need to put this review "pending maintainer response" while he's out.
@sneakers-the-rat I will abuse my power as editor and add the label, just to save you the work 😁

edit: I put "on hold" but that's the wrong label. Maybe we shouldn't add a label here--I'm just trying to give context for somebody who wants the overview of where reviews are at. Happy to let the actual editor do whatever they think is appropriate

edit edit: for context, I needed to message Billy about other stuff related to US-RSE, that's how I found out and told him I would comment here since clearly he's busy with other things

@NickleDave NickleDave added on-hold A tag to represent packages on review hold until we figure out a bigger issue associate with review ⌛ pending-maintainer-response and removed on-hold A tag to represent packages on review hold until we figure out a bigger issue associate with review labels Mar 1, 2024
@sneakers-the-rat
Copy link
Collaborator

Thank you for the update :). Congratulations @billbrod and please take the time you need and feel no hurry. We're here to help your work, and that certainly doesn't include rushing you away from your newborn. Best of luck and let us know when you're ready!

@DanielaPamplona
Copy link

Congratulations! Have fun with the baby!

@billbrod
Copy link
Author

Hi all, just an update that I'm back from parental leave and will start working on this again, though my overall capacity is still reduced.

@DanielaPamplona
Copy link

DanielaPamplona commented May 29, 2024 via email

@sneakers-the-rat
Copy link
Collaborator

Fabulous!!! Hope you and the family are well. Just letting you know I'm on strike and will not be working until the strike is resolved. Whether someone else picking up the review would be crossing a picket line is hard to say, because this is volunteer work, not for my employer, but I won't be proceeding with editorial duties until the University of California resolves its unfair labor practices with UAW 4811.

@DanielaPamplona
Copy link

DanielaPamplona commented Jun 1, 2024 via email

@billbrod
Copy link
Author

billbrod commented Jun 3, 2024

Good luck with the strike!

@lwasser
Copy link
Member

lwasser commented Jun 6, 2024

hi colleagues! Wow reading through this review so far and seeing the incredible level of support that you ALL have for each other really warmed my heart ❤️ 💟 in a special way! Thank you all for supporting life changes, new kiddos (congratulations on the little one @billbrod !), political challenges and standing your ground, and life business - all in a single review. ✨ you all are incredible.

We at pyOpenSci are dedicated to supporting our community and our volunteer editors like the incredible @sneakers-the-rat . SO we are going to look into finding a second editor to support moving this review forward (even if it's slowly @billbrod given home life responsibilities!! ) so Jonny can focus on the important issues happening on their campus now.

Can you all give us a sense of where things are? it looks to me like both reviews are in but @billbrod has some work to do and might be below normal work capacity given more important life responsibilities. I'm just trying to figure out how quickly we want to get a new editor on board! @NickleDave perhaps you can give me a download as it does say the issue is on hold for now. do we want to just pause on this review and check back in a month or two? or should we onboard someone now(ish)?

thank you all!! i appreciate you!

@billbrod
Copy link
Author

billbrod commented Jun 7, 2024

I think we can probably check back in in a month or two -- I'm helping run a workshop in 2 weeks and then I have a presentation at a summer school 2 weeks after that, so all that paired with baby responsibilities means I probably won't be able to focus on the review until the 2nd week or so of July.

@lwasser
Copy link
Member

lwasser commented Jun 7, 2024

fantastic! thanks @billbrod the scipy meeting is in mid july. how about we check back in at the end of july. this will also give us some time to find an editor to wrap things back up here. @kierisi eventually we will have a bot to notify us in x weeks. but for now - this is the review that we could use a second editor for. Let's plan to check back in on it after scipy which would be mid/ late july -- perhaps the 22nd of july?

@kierisi
Copy link

kierisi commented Jun 7, 2024

@lwasser sounds good!
