
[REVIEW]: qnm: A Python package for calculating Kerr quasinormal modes, separation constants, and spherical-spheroidal mixing coefficients #1683

Closed · whedon opened this issue Aug 27, 2019 · 68 comments

@whedon commented Aug 27, 2019:

Submitting author: @duetosymmetry (Leo C. Stein)
Repository: https://github.com/duetosymmetry/qnm
Version: 0.4.0
Editor: @xuanxu
Reviewer: @mattpitkin, @duncanmmacleod
Archive: 10.5281/zenodo.3459790

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/85532a74baaa67a24518de1365f1bcf5"><img src="https://joss.theoj.org/papers/85532a74baaa67a24518de1365f1bcf5/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/85532a74baaa67a24518de1365f1bcf5/status.svg)](https://joss.theoj.org/papers/85532a74baaa67a24518de1365f1bcf5)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@mattpitkin & @duncanmmacleod, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @xuanxu know.

Please try to complete your review in the next two weeks.

Review checklist for @mattpitkin

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@duetosymmetry) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @duncanmmacleod

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@duetosymmetry) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon commented Aug 27, 2019:

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @mattpitkin, @duncanmmacleod it looks like you're currently assigned to review this paper 🎉.

⭐️ Important ⭐️

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
@whedon commented Aug 27, 2019:

Attempting PDF compilation. Reticulating splines etc...
@whedon commented Aug 27, 2019:

@xuanxu commented Aug 27, 2019:

@whedon remind @duncanmmacleod in two weeks

@whedon commented Aug 27, 2019:

Reminder set for @duncanmmacleod in two weeks

@mattpitkin commented Sep 2, 2019:

Hi @duetosymmetry, one of the requirements of the review is to check that the documentation contains

Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Could you add a sentence or two stating how users can contribute/leave feedback, e.g., via issues or PRs on the package's git repo?

@mattpitkin commented Sep 2, 2019:

@duetosymmetry in the bibtex file, can you add a journal entry to the Isi et al. paper so that the arXiv number shows up, e.g.:

journal = "arXiv:\href{https://arxiv.org/abs/1905.00869}{1905.00869}"

(Of course this can be updated if the actual journal article comes out in the next couple of weeks.)

@mattpitkin commented Sep 2, 2019:

@duetosymmetry do you have any plans to add any unit testing and/or a CI builder for the package? These aren't requirements for acceptance in JOSS, but I just wanted to check. I can run the example in the documentation, but is there any example case where the values the code produces can be checked against independent published results?

@duetosymmetry commented Sep 2, 2019:

@whedon generate pdf

@whedon commented Sep 2, 2019:

Attempting PDF compilation. Reticulating splines etc...
@whedon commented Sep 2, 2019:

@duetosymmetry commented Sep 2, 2019:

Thanks for the feedback, @mattpitkin .

  1. Added short contribution guidelines in duetosymmetry/qnm@14e7e7a
  2. I didn't see the missing arXiv number when I was making with the JOSS template locally -- there must be some subtle difference between the template I fetched and the one that whedon is using. Fixed in duetosymmetry/qnm@ccb51bc
  3. I would like to add CI testing, e.g. against Emanuele Berti's precomputed tables, but I'm not sure how best to do this. To make this happen would require writing a test that would download his tarballs, which total ~100 MB and are currently hosted on a JHU server. If an installation of qnm uses qnm.download_data() to speed up the testing, then that would download another ~100 MB from my web server. Is there a better approach? Maybe pre-selecting a handful of combinations of (s,l,m,n,a), and storing those statically in a test? Is there a python package with a simple example of testing infrastructure that you might point me to? For now, there are only doctests, which can be performed by going into docs and running make doctest.
@mattpitkin commented Sep 3, 2019:

  1. Added short contribution guidelines in duetosymmetry/qnm@14e7e7a

Excellent, thanks. I'll tick off the appropriate review check box.

@mattpitkin commented Sep 3, 2019:

  2. I didn't see the missing arXiv number when I was making with the JOSS template locally -- there must be some subtle difference between the template I fetched and the one that whedon is using. Fixed in duetosymmetry/qnm@ccb51bc

Thanks, I can see the arXiv number in the generated pdf now.

@mattpitkin commented Sep 3, 2019:

  3. I would like to add CI testing, e.g. against Emanuele Berti's precomputed tables, but I'm not sure how best to do this. To make this happen would require writing a test that would download his tarballs, which is ~100Mb, currently hosted on a JHU server. If an installation of qnm uses qnm.download_data() to speed up the testing, then that would download another ~100Mb, from my web server. Is there a better approach? Maybe pre-selecting a handful of combinations of (s,l,m,n,a), and storing those statically in a test? Is there a python package with a simple example of testing infrastructure that you might point me to? For now, there are only doctests that can be performed by going into docs and performing make doctest.

It's now pretty simple to integrate CI testing with Travis for GitHub projects. It's not a huge issue for these tests to download 100 MB data sets each time - often the CI will download a Docker image for the testing each time, which might be larger. An example of the CI configuration for one of my own packages can be found here. This uses pytest to run a set of test scripts found in the test directory.

At its simplest, you could add a .travis.yml file to the repo with something like:

language: python
sudo: required
dist: xenial
branches:
  only:
    - master
python:  # run tests on both Python 3.6 and 3.7
  - "3.6"
  - "3.7"
install:  # install your package and requirements
  - pip install --upgrade pip
  - pip install -r requirements.txt
  # install packages for testing
  - pip install pytest
  # build the qnm package
  - pip install -e .
script:
  # run the test script
  - pytest

Then you could create a test directory (which will be automatically looked for by pytest) and add a file, e.g., test_qnm.py containing something like:

import pytest
import qnm
import numpy as np

class TestQnm(object):  # pytest only collects classes whose names start with "Test"
    """
    A class for testing aspects of the qnm module.
    """
    @classmethod
    def setup_class(cls):
        """
        Download the data when setting up the test class.
        """
        qnm.download_data()

    def test_example(self):
        """
        An example of a test
        """
        grav_220 = qnm.modes_cache(s=-2, l=2, m=2, n=0)
        omega, A, C = grav_220(a=0.68)
        # compare within a tolerance rather than testing exact float equality
        assert omega == pytest.approx(0.5239751042900845 - 0.08151262363119974j)

    def test_example2(self):
        from qnm.radial import leaver_cf_inv_lentz_old, leaver_cf_inv_lentz
        old = leaver_cf_inv_lentz_old(omega=0.4 - 0.2j, a=0.02, s=-2, m=2, A=4. + 0.j, n_inv=0)
        new = leaver_cf_inv_lentz(omega=0.4 - 0.2j, a=0.02, s=-2, m=2, A=4. + 0.j, n_inv=0)
        assert np.all([old[i] == new[i] for i in range(3)])

Then, if you sign up for a Travis CI account, you can activate CI for the qnm git repo, and the CI will run each time you commit a change. Then you can add a badge to your repo 😄 📛

@duetosymmetry commented Sep 4, 2019:

Thanks for the quick tutorial, @mattpitkin. I just integrated with Travis successfully! I'll get to writing real tests, but probably not until tomorrow.

@whedon commented Sep 10, 2019:

👋 @duncanmmacleod, please update us on how your review is going.

@duetosymmetry commented Sep 12, 2019:

Update for @mattpitkin: I started building some comparison code between Berti's tables and my code. Unfortunately some of the differences |Berti - me| are greater than the error estimates that my package is reporting. So right now I'm not sure I can trust Emanuele's numbers for these automated tests.

@duetosymmetry commented Sep 15, 2019:

Update again: I was able to perform an offline comparison against Greg Cook's tabulated data, and the agreement is very good. Cook's data is on Zenodo, but I'm not going to make an online automated test based on this data, because his dataset is ~1 GB. I will put automated testing of the data quality on the back burner for now until I come up with a good solution. I can build some regression tests, which should not be very hard, but this is not a very high priority for me at the moment.
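As a sketch of the regression tests mentioned above, one could store a handful of known mode frequencies statically and compare against them within a tolerance. The baseline value below is the one quoted earlier in this thread for the (s=-2, l=2, m=2, n=0) mode at a=0.68; the helper function and the tolerance are illustrative assumptions, not part of qnm:

```python
# Baseline frequency quoted earlier in this review for the (s=-2, l=2, m=2, n=0)
# mode at spin a=0.68; treated here as an assumed regression target.
REFERENCE_OMEGA = {
    (-2, 2, 2, 0, 0.68): 0.5239751042900845 - 0.08151262363119974j,
}

def within_tolerance(value, reference, rtol=1e-10):
    """Return True if value agrees with reference to relative tolerance rtol."""
    return abs(value - reference) <= rtol * abs(reference)

# In an actual test (requires the qnm package and its downloaded data):
#   grav_220 = qnm.modes_cache(s=-2, l=2, m=2, n=0)
#   omega, A, C = grav_220(a=0.68)
#   assert within_tolerance(omega, REFERENCE_OMEGA[(-2, 2, 2, 0, 0.68)])
```

Such static values keep the CI run small, at the cost of only covering the pre-selected (s,l,m,n,a) combinations.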

@duncanmmacleod commented Sep 16, 2019:

@duetosymmetry, apologies for the delay in reviewing this work.

Overall this looks very nice; I am particularly pleased with the minimal dependencies and the use of JIT compilation to cache outputs. I have posted the following issues to the target repo:

Once these are addressed (or plans drawn up to address them in the future), I will be happy to approve this.

@mattpitkin commented Sep 16, 2019:

@duetosymmetry thanks for your updates. If you could provide a script, and just a short description in the docs, of the offline comparison against a couple of cases in Greg Cook's data table, that would satisfy me.

@duetosymmetry commented Sep 20, 2019:

Hi folks, I've created a Colaboratory notebook that fetches Greg Cook's data from Zenodo, loads the qnm package, does comparisons between the two, and plots the differences in frequencies and separation constants. You can check it out here.
Any suggestions for sharing this notebook? I don't consider it part of the package. I could make a vanilla Jupyter notebook and stick it in some other repo. Or just leave this on Colab; then others don't have to install anything to verify the claims of the package.
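In sketch form, the comparison step in such a notebook amounts to computing element-wise differences between the two sets of frequencies. The function and array names below are hypothetical; real inputs would come from Cook's Zenodo data and from qnm:

```python
import numpy as np

def max_differences(omega_computed, omega_tabulated):
    """Maximum absolute and relative differences between two frequency sequences."""
    a = np.asarray(omega_computed, dtype=complex)
    b = np.asarray(omega_tabulated, dtype=complex)
    abs_diff = np.abs(a - b)
    rel_diff = abs_diff / np.abs(b)
    return abs_diff.max(), rel_diff.max()
```

Plotting abs_diff and rel_diff against spin then gives the difference curves the notebook shows.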

@duetosymmetry commented Sep 20, 2019:

PS: I have already started working on using pathlib (today I learned!) and the cache location; I probably won't get a chance to fix everything up until this weekend. Too busy with teaching at the moment.

@mattpitkin commented Sep 20, 2019:

Any suggestions for sharing this notebook? I don't consider it as part of the package. I could make a vanilla jupyter notebook and stick that in some other repo. Or just leave this on coLab, then others don't have to install anything to verify the claims of the package.

I'd suggest just adding it to the qnm repo in a notebooks directory, but if you'd prefer not to, then putting it in its own standalone repo (and including with that a link to the Colab notebook) is probably the best - hopefully a gitlab repo would be a bit more "permanent" than a Colab notebook.

Whatever you choose, I'd suggest adding a link to, and a short description of, the notebook in the README.md and/or docs of qnm.

@whedon commented Sep 30, 2019:

OK. 0.4.0 is the version.

@xuanxu commented Sep 30, 2019:

OK, everything looks good @duetosymmetry; here are the next steps:

  • Please archive the latest release in Zenodo
  • Check the Zenodo deposit has the correct metadata: title and author name should match the paper; you may also add your ORCID.

Once you do that please report here the Zenodo DOI.

@duetosymmetry commented Sep 30, 2019:

@xuanxu done! Here is the Zenodo DOI: 10.5281/zenodo.3459790.

@xuanxu commented Sep 30, 2019:

@whedon set 10.5281/zenodo.3459790 as archive

@whedon commented Sep 30, 2019:

OK. 10.5281/zenodo.3459790 is the archive.

@xuanxu commented Sep 30, 2019:

All ready for publication 🎉
Pinging @openjournals/joss-eics for final acceptance.

@kyleniemeyer commented Oct 1, 2019:

Hi @duetosymmetry, just a few edits necessary in the paper:

  • In the first sentence of the second paragraph, remove the [ ] around the reference to get an "Author et al. (2001)"-style citation.
  • In the first sentence of the third paragraph, I'd suggest changing e.g. [@Dolan:2009nk] to e.g., as given by @Dolan:2009nk (or something similar)
  • later in the third paragraph, change method of Leaver [@Leaver:1985ax] to method of @Leaver:1985ax
  • First sentence of fourth paragraph: change to A refinement of this method was put forth by @Cook:2014cta (see also Appendix A of @Hughes:1999bq).

(I'm going to stop commenting on those kinds of changes to in-text citations, but please fix the rest of them.)

  • In the summary, please replace the in-text links to the mentioned codes (London, Berti) with standard citations
  • end of 3rd paragraph in Summary: add a comma after e.g.

Last: this isn't a required edit, but a suggestion: you have a lot of acronyms introduced in the first paragraph, and while those may be standard in your field, from a general reader's perspective they don't really save much space and instead just make reading the text a bit harder. So, you might consider removing acronyms for most terms unless a general reader would understand them. (Though I admit I'm guilty of this plenty myself.)

@duetosymmetry commented Oct 1, 2019:

@kyleniemeyer thanks for the suggestions! I have implemented many of them. Regarding the links to London's git repo and Berti's web page: these resources do not have versions of record (e.g., a journal article or Zenodo archive), and neither has a DOI. How should I cite these resources?

@duetosymmetry commented Oct 1, 2019:

@whedon generate pdf

@whedon commented Oct 1, 2019:

Attempting PDF compilation. Reticulating splines etc...
@whedon commented Oct 1, 2019:

@kyleniemeyer commented Oct 1, 2019:

@duetosymmetry thanks! For the software refs, a typical software citation should include:

  • author(s)
  • title + version number
  • year of release associated with the version
  • DOI associated with version, if possible
  • URL directing to landing page

If only some of these are available, that's OK; not many projects have adopted archiving with DOIs yet.

@duetosymmetry commented Oct 1, 2019:

@whedon generate pdf

@whedon commented Oct 1, 2019:

Attempting PDF compilation. Reticulating splines etc...
@whedon commented Oct 1, 2019:

@duetosymmetry commented Oct 1, 2019:

@kyleniemeyer Here's the best I could accomplish. Neither of these codes has releases or version numbers. For London's, I included the short hash of the latest (2017) commit. For Berti's, I found the earliest appearance of the file on archive.org (2010), and it is identical to the presently served version.

@kyleniemeyer commented Oct 1, 2019:

@duetosymmetry I think those look fine, thanks!

@kyleniemeyer commented Oct 1, 2019:

@whedon accept

@whedon commented Oct 1, 2019:

Attempting dry run of processing paper acceptance...
@whedon commented Oct 1, 2019:


OK DOIs

- 10.1109/MCSE.2011.37 is OK
- 10.1145/2833157.2833162 is OK
- 10.1088/0264-9381/26/16/163001 is OK
- 10.1103/PhysRevD.90.124021 is OK
- 10.1098/rspa.1985.0119 is OK
- 10.1086/180849 is OK
- 10.1086/158109 is OK
- 10.1103/PhysRevLett.116.221101 is OK
- 10.1103/PhysRevLett.123.111102 is OK
- 10.1086/152444 is OK
- 10.1088/0264-9381/26/22/225003 is OK
- 10.1103/PhysRevD.61.084004 is OK
- 10.1103/PhysRevD.90.064012 is OK

MISSING DOIs

- None

INVALID DOIs

- None
@whedon commented Oct 1, 2019:

Check final proof 👉 openjournals/joss-papers#994

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#994, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true
@kyleniemeyer commented Oct 1, 2019:

@whedon accept deposit=true


@whedon commented Oct 1, 2019:

Doing it live! Attempting automated processing of paper acceptance...
whedon added the accepted label Oct 1, 2019
@whedon commented Oct 1, 2019:

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@whedon commented Oct 1, 2019:

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 openjournals/joss-papers#997
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01683
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@kyleniemeyer commented Oct 1, 2019:

Congrats @duetosymmetry on your article's publication in JOSS!

Many thanks to @xuanxu for editing, and @mattpitkin & @duncanmmacleod for reviewing, this submission.

@whedon

This comment has been minimized.

Copy link
Collaborator Author

commented Oct 1, 2019

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01683/status.svg)](https://doi.org/10.21105/joss.01683)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01683">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01683/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01683/status.svg
   :target: https://doi.org/10.21105/joss.01683


We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
