
[REVIEW]: pygtc: beautiful parameter covariance plots (aka. Giant Triangle Confusograms) #46

Closed · 15 of 16 tasks
whedon opened this issue Jul 28, 2016 · 29 comments

@whedon commented Jul 28, 2016

Submitting author: @SebastianBocquet (Sebastian Bocquet)
Repository: https://github.com/SebastianBocquet/pygtc
Version: 0.1.1
Editor: @arfon
Reviewer: @dfm
Archive: https://dx.doi.org/10.5281/zenodo.159225

Status

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/fac5017b39292e539933d12936615f4a"><img src="http://joss.theoj.org/papers/fac5017b39292e539933d12936615f4a/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/fac5017b39292e539933d12936615f4a/status.svg)](http://joss.theoj.org/papers/fac5017b39292e539933d12936615f4a)

Reviewer questions

Conflict of interest

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (such as being a major contributor to the software).

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: Does the release version given match the GitHub release (0.1.1)?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: Have any performance claims of the software been confirmed?

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g. API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

Paper PDF: 10.21105.joss.00046.pdf

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g. papers, datasets, software)?
@whedon whedon added the review label Jul 28, 2016
@arfon commented Jul 28, 2016

/ cc @openjournals/joss-reviewers - would anyone be willing to review this submission?

If you would like to review this submission then please comment on this thread so that others know you're doing a review (so as not to duplicate effort). Something as simple as :hand: I am reviewing this will suffice.

Reviewer instructions

  • Please work through the checklist at the start of this issue.
  • If you need any further guidance/clarification take a look at the reviewer guidelines here http://joss.theoj.org/about#reviewer_guidelines
  • Please make a publication recommendation at the end of your review

Any questions, please ask for help by commenting on this issue! 🚀

@pjotrp commented Aug 1, 2016

@dfm: do you mind reviewing this submission?

@dfm commented Aug 1, 2016

✋ sure, I'm on it.

@arfon commented Aug 1, 2016

@dfm

I'll need to add you to the @openjournals/joss-reviewers team so you have the correct permissions to update the checklist at the top of the issue - sound OK?

@dfm commented Aug 1, 2016

Yep

@arfon commented Aug 1, 2016

@dfm - you should see the invite at the top of this page: https://github.com/openjournals

@dfm commented Aug 1, 2016

General checks

Version: Does the release version given match the GitHub release (0.1.1)?

There is no archived release linked from the documentation or readme and there is no DOI given in the paper.

Functionality

Installation: Does installation proceed as outlined in the documentation?

The installation instructions are missing a list of prerequisites. Does scipy really need to be a hard dependency?

Documentation

A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?

There is a vague statement of need in the README but it's not very clear and it is completely missing from the docs. The contents of the paper (and maybe more discussion) should be included in the docs. There is also no discussion of how to interpret these plots. For example, the (incorrect, I would argue! See http://corner.readthedocs.io/en/latest/pages/sigmas.html) choice of contour levels should be stated and justified.

Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.

In principle, the pip command could install all of the dependencies but they should be discussed in the installation instructions.

Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).

There is a demo notebook included in the repository but the docs have no discussion of the usage or even a link to the demo. I think that a gallery of examples or at least a tutorial is required.

Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g. API method documentation)?

The entry point is documented by its docstring and an auto-generated docs entry. This isn't really sufficient to constitute documentation.

Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?

No tests are currently included. It might be worth checking out the tests from corner.py for an example of how this could be implemented for similar software.

Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Missing. There should (at least!) be a link to the GitHub repository from the docs.

Software paper

A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?

This is well described, but the paper incorrectly states that corner.py can't overplot multiple samples. This has never been true! 😉 The claim is also made that the resulting plots are compatible with different journals, and a list of options is given, but it's not clear to me why this is necessary or why the dpi enters - no one should be submitting rasterized figures to the journals!

References: Do all archival references that should have a DOI list one (e.g. papers, datasets, software)?

The DOI for numpy/scipy is 10.1109/MCSE.2011.37 and for matplotlib is 10.1109/MCSE.2007.55 (ref: https://www.scipy.org/citing.html).

Other comments

Packages should never alter the matplotlib rc. The fact that plots made after calling pygtc are different from the same plots made before is not ok.

@SebastianBocquet commented

Dear All,

below you will find our response to the referee report, highlighted in
green.

Best regards,

Sebastian Bocquet


Authors' response:

We would like to thank Dan for carefully reviewing our software package and
paper. He points out several important issues. We have addressed each of
the raised concerns, and made changes as described below.

General checks

Version: Does the release version given match the GitHub release (0.1.1)?

There is no archived release linked from the documentation or readme and
there is no DOI given in the paper.

We have version bumped to 0.2.0 to generate a Zenodo DOI, which is:

10.5281/zenodo.154122

We added the link to the readme, documentation, and paper.

Functionality

Installation: Does installation proceed as outlined in the documentation?

The installation instructions are missing a list of prerequisites. Does
scipy really need to be a hard dependency?

We have made scipy optional, updated the installation instructions, and
defined minimum (and maximum) versions (to avoid forced upgrades of numpy
and/or matplotlib).

Documentation

A statement of need: Do the authors clearly state what problems the
software is designed to solve and who the target audience is?

There is a vague statement of need in the README but it's not very clear
and it is completely missing from the docs. The contents of the paper (and
maybe more discussion) should be included in the docs.

We have updated both the readme.rst in the repo, as well as the
documentation to address this.

There is also no discussion of how to interpret these plots. For example,
the (incorrect, I would argue! See http://corner.readthedocs.io/en/latest/pages/sigmas.html) choice of contour levels should be stated and justified.

We added a note about confidence levels for 2d contours to the
documentation. As posterior distributions are not necessarily Gaussian, we
argue that there is no right or wrong in choosing Gaussian “sigma” or 68%
and 95% confidence levels, as long as it is clear which definition is used.
However, we did add the option to show the “sigma” levels.

Installation instructions: Is there a clearly-stated list of dependencies?
Ideally these should be handled with an automated package management
solution.

In principle, the pip command could install all of the dependencies but
they should be discussed in the installation instructions.

They are now clearly stated, along with min/max versions to avoid pip forcing an upgrade of packages that don't need to be upgraded.

Example usage: Do the authors include examples of how to use the software
(ideally to solve real-world analysis problems).

There is a demo notebook included in the repository but the docs have no
discussion of the usage or even a link to the demo. I think that a gallery
of examples or at least a tutorial is required.

The notebook has been incorporated into the documentation, where it forms the basis of a tutorial for how to use pygtc. We also added a second example showing off pygtc with some publicly available cosmological data.

Functionality documentation: Is the core functionality of the software
documented to a satisfactory level (e.g. API method documentation)?

The entry point is documented by its docstring and an auto-generated docs
entry. This isn't really sufficient to constitute documentation.

We have added two tutorials to the documentation, which should be sufficient to get a new user up and running quickly and to demo all of the major features. For a detailed description of the options, the docstring is the logical place to turn: we have carefully documented every option in the primary user-facing function, and we describe each argument in the helper functions (which are not meant to be user-facing).

Automated tests: Are there automated tests or manual steps described so
that the function of the software can be verified?

No tests are currently included. It might be worth checking out the tests from corner.py (https://github.com/dfm/corner.py) for an example of how this could be implemented for similar software.

We understand that, from a development perspective, an automated test can be useful to make sure nothing breaks when a bug is fixed. However, for a user looking to verify the utility of the package, a test that compares two images at the pixel level may not be terribly useful: anyone may have different rcParams defaults, there is no way we can exhaustively override every possible rcParam, and it only takes one difference to generate an image that isn't pixel-for-pixel identical to the one we made on our system (e.g., they set their default font color to dark grey and we don't specify that default). This scenario would generate a "failed" test even though the user is getting what they want.

A better indication of whether the package works is for the user to run the
included ipython notebook tutorial and simply look at the plots it
generates.

Community guidelines: Are there clear guidelines for third parties wishing
to 1) Contribute to the software 2) Report issues or problems with the
software 3) Seek support

Missing. There should (at least!) be a link to the GitHub repository from
the docs.

This has been fixed.

Software paper

A statement of need: Do the authors clearly state what problems the
software is designed to solve and who the target audience is?

This is well described but the paper incorrectly states that corner.py
can't overplot multiple samples. This has never been true!

We have removed the erroneous statement about corner.py.

The claim is also made that the resulting plots are compatible with different journals, and a list of options is given, but it's not clear to me why this is necessary or why the dpi enters - no one should be submitting rasterized figures to the journals!

We agree that rasterized figures are basically the devil, which is why pygtc saves to PDF! However, when one chooses a particular font size, that font size is rendered relative to the size in inches of the figure, so we have to choose a size in inches so the font knows how big to be. At some point a reference is required to get the relative size of the axes and the fonts right, and we use the dpi as a measuring stick. We believe the figure size should be set such that the font size is what you expect (and what the journal wants), and such that the figure can be inserted into the journal without tweaking width/height settings in LaTeX. Because of this, we give you precisely what you want when you specify a custom font size.
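
As an illustration of that reasoning, here is a minimal sketch (not pygtc's actual code; the 3.4-inch column width and 9 pt font are assumed journal values):

import matplotlib.pyplot as plt

# Font sizes in points are rendered relative to the figure's physical size
# in inches, so making the figure exactly column-width means the saved PDF
# can be placed at natural size, with no rescaling in LaTeX, and the fonts
# come out at the requested size.
fig, ax = plt.subplots(figsize=(3.4, 3.4))   # width, height in inches
ax.set_xlabel("x", fontsize=9)               # 9 pt at final printed size
fig.savefig("column_figure.pdf")             # vector PDF output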

References: Do all archival references that should have a DOI list one
(e.g. papers, datasets, software)?

The DOI for numpy/scipy is 10.1109/MCSE.2011.37 and matplotlib is
10.1109/MCSE.2007.55. ref

These are cited correctly now.

Other comments

Packages should never alter the matplotlib rc. The fact that plots made
after calling pygtc are different from the same plots made before is not ok.

In hindsight, we couldn’t agree more. We have completely updated pygtc to
function without touching matplotlib rc (except for one case, see below).

The one case: when using matplotlib's built-in TeX parser to convert LaTeX expressions into math, the only place to specify the font family used for the math is the 'mathtext.fontset' rcParam. This is in contrast to any other text in matplotlib, which keeps its own fontdict separate from the rcParams. (We have opened issue matplotlib/matplotlib#7107 to notify them of this, and they have added the fix to their 2.0 release milestones. We will update pygtc again when matplotlib 2.0 comes out.)

To get around this, we change the ‘mathtext.fontset’ rcParam, save the
plot, revert the rcParam back to whatever it was, and then return the
figure to the caller. Unfortunately, in interactive mode (i.e., a notebook
or IPython), the figure will now render to the screen using whatever is in
the rcParams, and may be different from what was desired. We thus include a
flag that will turn off resetting back to original rcParams. If the user
specifies holdRC=True, then the ‘mathtext.fontset’ rcParam will not be
reset back to the default, allowing the caller to render the figure
properly.
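
A minimal sketch of this save-and-restore pattern (makeFigure here is a hypothetical stand-in, not pygtc's actual implementation):

import matplotlib as mpl
import matplotlib.pyplot as plt

def makeFigure(holdRC=False):
    saved = mpl.rcParams['mathtext.fontset']   # remember the caller's setting
    mpl.rcParams['mathtext.fontset'] = 'stixsans'
    fig, ax = plt.subplots()
    ax.set_xlabel(r'$\Omega_\mathrm{m}$')      # math text uses the new fontset
    fig.savefig('gtc.pdf')                     # saved with the desired font
    if not holdRC:
        # Restore so later, unrelated plots are unaffected. In interactive
        # use, the returned figure may then re-render with the old fontset,
        # which is what the holdRC=True escape hatch is for.
        mpl.rcParams['mathtext.fontset'] = saved
    return fig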


@dfm commented Sep 18, 2016

@arfon: is it possible to update the software version number and rebuild the paper?

@SebastianBocquet: thanks for the responses – it's looking much better! I have a few minor comments remaining:

  1. The comment about "sigmas" has nothing to do with Gaussians. It has to do with the relationship between the contours that you draw in 2D and the quantiles that people tend to label in 1D. If you use 68% as a quantile in 1D then you shouldn't use the same in 2D, because the contours will look broader than they should (a short numeric check follows this list)! I agree that there's no reason to choose a specific number, and encouraging people to say what they used is best practice, but I'd be careful about the default value that you choose. I'd be willing to make a large bet that when the vast majority of readers look at plots like this they interpret the levels as 1 and 2 "sigma" even if you say "68% contours", and they would be wrong. Also, "confidence" interval won't, in general, be the correct term – if this is meant to display the results of MCMC then these are actually credible regions (https://en.wikipedia.org/wiki/Credible_interval), for example. With both of these points in mind, I'd update the terminology ("confidence level" -> "contour level", "Gaussian" -> None) in the discussion and in the docstring. (Sorry to harp on this so much but I've spent a lot of time thinking about this and correcting people's misconceptions about these types of plots over the years!)
  2. It's not actually true that adding version numbers to the setup.py will not force upgrades (and it's actually generally considered bad practice to put version numbers in the setup.py; see https://caremad.io/posts/2013/07/setup-vs-requirement/). If tests were included then it would be obvious which versions of the packages were actually required 😉
  3. I totally disagree on the point about tests. The point of tests for something like this is to confirm that the results of the sample plots are the same for many different versions of numpy and matplotlib (trust me, this is not a given – adding tests to corner.py found a few bugs like this but maybe I'm just a sloppier coder). I see that the reviewer guidelines say that I'm only allowed to "strongly encourage" you to implement tests so you can do whatever you want but it doesn't take long to set up travis and I hope you will consider it!
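
To put a number on point 1: for a two-dimensional Gaussian, the probability mass enclosed by the s-sigma contour is 1 - exp(-s^2/2), so the familiar 1D fractions do not carry over. A quick check in Python:

import numpy as np

for s in (1.0, 2.0):
    mass = 1.0 - np.exp(-s**2 / 2.0)
    print("{:.0f}-sigma contour in 2D encloses {:.1f}% of the mass".format(s, 100 * mass))

# 1-sigma -> 39.3%, 2-sigma -> 86.5%: a "68%" contour drawn in 2D is
# therefore noticeably broader than the 1-sigma ellipse.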

Otherwise, thanks for discussing and addressing all my comments and I think that this looks much better. After these minor comments have been addressed, I think we're good to go.

@tacaswell commented

It is probably too late, but mpl has mpl.rc_context and mpl.style.context which both provide context managers for setting and restoring rcParams.

This is because anyone may have a different rcParams
default, there is no way we are going to exhaustively override every
possible rcParam,

This is actually pretty easy to do

import matplotlib as mpl

mpl.rcdefaults()
mpl.style.use(...)
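
mpl.rc_context gives the same guarantee with scoping; a short sketch:

import matplotlib as mpl
import matplotlib.pyplot as plt

with mpl.rc_context({'mathtext.fontset': 'stixsans'}):
    fig, ax = plt.subplots()
    ax.set_title(r'$\sigma_8$')
    fig.savefig('inside.pdf')   # rendered with the override
# here rcParams are back to whatever they were before the block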

https://github.com/matplotlib/pytest-mpl and the image comparison tests in mpl both include a way to specify rcparams as part of the test and ensure they are properly cleaned up.
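
For illustration, a baseline-image test with pytest-mpl might look like the following sketch (test_bare_plot and its stored baseline image are hypothetical):

import matplotlib.pyplot as plt
import numpy as np
import pytest

@pytest.mark.mpl_image_compare(style='default')   # pin rcParams for the test
def test_bare_plot():
    # pytest-mpl compares the returned figure pixel-wise against a baseline
    # image generated once with --mpl-generate-path
    fig, ax = plt.subplots()
    ax.plot(np.linspace(0.0, 1.0, 10))
    return fig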

@tacaswell commented

matplotlib/matplotlib#7107 xref to mpl issue

@arfon commented Sep 19, 2016

@arfon: is it possible to update the software version number and rebuild the paper?

@dfm - sure thing. Here's the new paper: 10.21105.joss.00046.pdf

@arfon arfon changed the title Submission: pygtc: beautiful parameter covariance plots (aka. Giant Triangle Confusograms) [REVIEW]: pygtc: beautiful parameter covariance plots (aka. Giant Triangle Confusograms) Sep 20, 2016
@arfon commented Sep 20, 2016

@whedon commands

@whedon commented Sep 20, 2016

Here are some things you can ask me to do:

# List all of Whedon's capabilities
@whedon commands

# Assign a GitHub user as the reviewer of this submission
@whedon assign @username as reviewer

# List the GitHub usernames of the JOSS editors
@whedon list editors

# List of JOSS reviewers together with programming language preferences and domain expertise
@whedon list reviewers

# Change editorial assignment
@whedon assign @username as editor

# Open the review issue
@whedon start review

🚧 Important 🚧

This is all quite new. Please make sure you check the top of the issue after running a @whedon command (you might also need to refresh the page to see the issue update).

@arfon commented Sep 21, 2016

@whedon assign @dfm as reviewer

@whedon commented Sep 21, 2016

OK, the reviewer is @dfm

@whedon whedon assigned dfm and arfon Sep 21, 2016
@SebastianBocquet commented

Dear All,

below you will find our response to the second referee report.

Best regards,
Sebastian Bocquet


Thanks very much for the additional comments. We think our package is much
better as a result of this review and we are both glad we went through this
process. We copy the most recent comments here, and reply to them in green
text.

The comment about "sigmas" has nothing to do with Gaussians. It has to do
with the relationship between the contours that you draw in 2D and the
quantiles that people tend to label in 1D. If you use 68% as a quantile in
1D then you shouldn't use the same in 2D because the contours will look
broader than they should!! I agree that there's no reason to choose a
specific number and encouraging people to say what they used is best
practice but I'd be careful about the default value that you choose. I'd be
willing to make a large bet that when the vast majority of readers look at
plots like this they interpret the levels as 1 and 2 "sigma" even if you
say "68% contours" and they would be wrong. Also, "confidence" interval
won't, in general, be the correct term – if this is meant to display the
results of MCMC then these are actually credible regions
(https://en.wikipedia.org/wiki/Credible_interval), for example. With both
of these points in mind, I'd update the terminology ("confidence level" ->
"contour level", "Gaussian" -> None) in the discussion and in the
docstring. (Sorry to harp on this so much but I've spent a lot of time
thinking about this and correcting people's misconceptions about these
types of plots over the years!)

We agree that the discussion has nothing to do with Gaussians, and thank
you for pointing us to the correct use of the confidence/credible terms. We
have refactored the code and changed the variables (and keyword args)
gaussianConfLevels to sigmaContourLevels, and nConfidenceLevels to
nContourLevels. We have also updated our note in the documentation to
be more precise regarding this. However, we are keeping our defaults set to
68%, 95%, 99%. This is because (at least in our field) we mostly see
contours plotted this way, as this is what getdist produces by default, and
we want to make switching over to pygtc as easy as possible.
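
For reference, a call using the renamed keywords might look like the sketch below (the random chains are placeholder data, and we treat sigmaContourLevels as a boolean toggle, as the rename suggests):

import numpy as np
import pygtc

# Two fake MCMC chains with three parameters each (placeholder data)
chain1 = np.random.randn(10000, 3)
chain2 = np.random.randn(10000, 3) + 0.5

# Defaults draw 68%, 95%, 99% contours; sigmaContourLevels switches to
# the 2D "sigma" levels instead
fig = pygtc.plotGTC(chains=[chain1, chain2],
                    nContourLevels=3,
                    sigmaContourLevels=False)
fig.savefig('gtc_demo.pdf')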

It's not actually true that adding version numbers to the setup.py will not
force upgrades (and it's actually generally considered bad practice to put
version numbers in the setup.py
https://caremad.io/posts/2013/07/setup-vs-requirement/). If tests were
included then it would be obvious which versions of the packages were
actually required 😉

We have removed the version numbers from setup.py and placed them into a
requirements.txt file instead.

I totally disagree on the point about tests. The point of tests for
something like this is to confirm that the results of the sample plots are
the same for many different versions of numpy and matplotlib (trust me,
this is not a given – adding tests to corner.py found a few bugs like this
but maybe I'm just a sloppier coder). I see that the reviewer guidelines
say that I'm only allowed to "strongly encourage" you to implement tests so
you can do whatever you want but it doesn't take long to set up travis and
I hope you will consider it!

We have written a test for nearly every keyword argument and included them in a tests directory under the package directory. The tests can be run as a standalone script, or with nosetests (nose is required in either case).

Otherwise, thanks for discussing and addressing all my comments and I think
that this looks much better. After these minor comments have been
addressed, I think we're good to go.

Thank you!



@dfm commented Oct 3, 2016

Awesome work! This is getting really close.

I installed a clean Python 2.7 conda environment with numpy, matplotlib, pandas, and nose (I tried making a few small changes to support Python 3 but it might take some work!). I also removed my .matplotlib directory so there aren't any rcParams set. With this installation, I ran the tests with the master branch of pyGTC and got this list of failures: https://gist.github.com/dfm/d5d2097332439512bb897f36501746aa. I don't have scipy installed (this is a conda issue) so that error is okay (you should probably catch it and make it a pass if scipy isn't installed) but the others are unfortunate. I actually find that none of the figures are qualitatively similar, and since I don't have any rcParams set, I would argue that this isn't user error.

Compare bare.png (image: https://cloud.githubusercontent.com/assets/350282/19046360/d7770214-8951-11e6-80f7-8ce9ac8b367a.png) to bare-expected.png (image: https://cloud.githubusercontent.com/assets/350282/19046367/ddb0482a-8951-11e6-9c38-a6c7534c6adf.png).

@SebastianBocquet commented

Thank you very much for finding this issue! This was due to your test environment not having scipy installed, which in turn was not correctly handled by pygtc (it is, in fact, correctly handled for the smoothing, but it was not for drawing Gaussian prior curves). We adapted the tests accordingly and updated the docs ("running tests" in the installation section).
Thank you,
Sebastian


@dfm commented Oct 4, 2016

Nope! That isn't the problem. That was an error that should be caught (as I said above) but this doesn't address the actual problem that I brought up.

@SebastianBocquet commented

Your bare.png is clearly not being smoothed, as opposed to bare-expected.png, which is due to scipy not being installed, right? I realize my answer was not very clear. Have you pulled the latest version of pygtc? This really should fix the problem!


@dfm commented Oct 4, 2016

Oops. Sorry, I misread your comment. Looks good now. My last comment is that the tests don't close the figures. I'd change:

pygtc.plotGTC(chains=[SAMPLES_1,SAMPLES_2], smoothingKernel = 0)

to:

fig = pygtc.plotGTC(chains=[SAMPLES_1,SAMPLES_2], smoothingKernel=0)
plt.close(fig)

Otherwise this looks good. The only thing that I can't sign off on is the version number. I'm not sure if you need to do something or if @arfon does.

@arfon commented Oct 4, 2016

⚡ thanks @dfm.

@SebastianBocquet if you can make the final changes suggested then I think we're good to accept.

I'll need you to make an archive of the reviewed software in Zenodo/figshare/other service and update this thread with the DOI of the archive. You might want to bump the release number to reflect the changes that have been made based on this review.

@SebastianBocquet commented

The test decorator handles cleaning up the figures. In fact, when trying
your suggestions, nosetests skips the tests.
Please confirm that we're still good to go, and I'll create a release for
the accepted version.

Thanks,
Sebastian


@dfm commented Oct 6, 2016

Cool. I got a warning that too many figures were open when I ran the tests in Py3, but that must be because the figures don't get closed if a test fails – seems counterintuitive, but that is all I can think of.

Looks good to me!

@arfon commented Oct 6, 2016

@dfm - can you confirm we're good to accept at this point?

@SebastianBocquet if @dfm confirms, the next step is to make an archive of the reviewed software in Zenodo/figshare/other service and update this thread with the DOI of the archive. I can then move forward with accepting the submission.

@dfm commented Oct 6, 2016

👍 good to go!

@SebastianBocquet commented

And here it is: DOI: 10.5281/zenodo.159225 (https://doi.org/10.5281/zenodo.159225)

Thanks,
Sebastian


@arfon arfon added the accepted label Oct 8, 2016
@arfon commented Oct 8, 2016

@dfm - many thanks for the thorough review.

@SebastianBocquet - your paper is now accepted into JOSS and your DOI is http://dx.doi.org/10.21105/joss.00046. 🎉 🚀 💥

@arfon arfon closed this as completed Oct 8, 2016
@whedon whedon added published Papers published in JOSS recommend-accept Papers recommended for acceptance in JOSS. labels Mar 2, 2020