[REVIEW]: Multiblock PLS: Block dependent prediction modeling for Python #1190

Closed · whedon opened this issue Jan 21, 2019 · 44 comments · 5 participants

whedon commented Jan 21, 2019

Submitting author: @lvermue (Laurent Vermue)
Repository: https://github.com/DTUComputeStatisticsAndDataAnalysis/MBPLS
Version: v1.0.0
Editor: @brainstorm
Reviewer: @arokem
Archive: 10.5281/zenodo.2560303

Status

[status badge]

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/864e8fb9bc214f14b878c6c559e3031c"><img src="http://joss.theoj.org/papers/864e8fb9bc214f14b878c6c559e3031c/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/864e8fb9bc214f14b878c6c559e3031c/status.svg)](http://joss.theoj.org/papers/864e8fb9bc214f14b878c6c559e3031c)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@arokem, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. If you have any questions or concerns, please let @brainstorm know.

Please try to complete your review within the next two weeks.

Review checklist for @arokem

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: Does the release version given match the GitHub release (v1.0.0)?
  • Authorship: Has the submitting author (@lvermue) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

whedon commented Jan 21, 2019

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @arokem it looks like you're currently assigned as the reviewer for this paper 🎉.

⭐️ Important ⭐️

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this repository (https://github.com/openjournals/joss-reviews). As a reviewer, you are probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews.
  2. Optionally, change your default notification settings for watched repositories in your GitHub profile: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

whedon commented Jan 21, 2019

Attempting PDF compilation. Reticulating splines etc...

whedon commented Jan 21, 2019


arokem commented Jan 22, 2019

@lvermue: first of all, kudos on a nice and useful software package!

Could you please share the code that you used to generate the results in the "Benchmark" section of your paper? Thanks!


arokem commented Jan 22, 2019

I have looked over all of the review criteria (also shown above) and posted issues to your repository marked with "[JOSS review]". I also added some other issues that are not required for the review of your paper, but that I think would be good additions or changes 😄


lvermue commented Jan 31, 2019

@arokem Thank you for your comments and useful suggestions!🙂
We have worked on all issues and feel confident that we have resolved them.

Regarding your previous comment:

Could you please share the code that you used to generate the results in the "Benchmark" section of your paper? Thanks!

The zip file attached to this comment contains three different Python scripts: one testing run-times for a fixed row size with increasing column size, one for a fixed column size with increasing row size, and one for a symmetric increase of both row and column sizes, as reported in the paper.

The scripts also contain the tests for the Ade4 package, which was run through rpy2. We also performed various tests with pure R scripts, but could not detect any difference in performance or in the recorded times.
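
The three scaling experiments can be sketched with a generic timing harness like the one below. This is an illustrative sketch only, not the authors' actual benchmark code: `fit_model` is a hypothetical stand-in (an ordinary least-squares fit via NumPy) for the MBPLS estimator, and the sizes are toy values.

```python
import time

import numpy as np

def fit_model(X, Y):
    # Hypothetical stand-in for the MBPLS fit; any estimator could be timed here.
    np.linalg.lstsq(X, Y, rcond=None)

def benchmark(row_sizes, col_sizes, repeats=3):
    """Average wall-clock time of fit_model over a grid of (rows, cols) shapes."""
    rng = np.random.default_rng(0)
    results = {}
    for n in row_sizes:
        for p in col_sizes:
            X = rng.standard_normal((n, p))
            Y = rng.standard_normal((n, 1))
            start = time.perf_counter()
            for _ in range(repeats):
                fit_model(X, Y)
            results[(n, p)] = (time.perf_counter() - start) / repeats
    return results

# Fixed row size with increasing column size (toy sizes):
times = benchmark(row_sizes=[200], col_sizes=[10, 50, 100])
```

The other two experiments follow by varying `row_sizes` instead, or by increasing both lists in lockstep.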

The Redis Redlock module was used to let four identical servers coordinate and work on the test scripts simultaneously: each server knows which tasks the others have completed or are currently running, so it can pick a remaining task that no other server has handled. 🤓
runtime_analysis.zip
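
The coordination idea can be illustrated with a minimal sketch. This is not the authors' setup: it uses an in-process `threading.Lock` as a stand-in for the distributed Redis Redlock, and the task names are invented for illustration.

```python
import threading

class TaskPool:
    """Toy model of lock-guarded task claiming: each worker atomically takes
    a task, so no two workers ever run the same benchmark script."""

    def __init__(self, tasks):
        # In the distributed setup a Redis Redlock would play this role across
        # four servers; here a process-local lock illustrates the idea.
        self._lock = threading.Lock()
        self._remaining = list(tasks)

    def claim(self):
        with self._lock:
            return self._remaining.pop(0) if self._remaining else None

pool = TaskPool(["rows_fixed", "cols_fixed", "symmetric"])  # invented task names
done = []

def worker():
    while (task := pool.claim()) is not None:
        done.append(task)  # a real worker would run the benchmark script here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each of the four workers loops until the pool is empty, and every task is claimed exactly once.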


arokem commented Feb 1, 2019

Hi @lvermue : nice work on these revisions. Most of my comments are addressed.

Regarding the performance benchmark: is there any reason not to add these scripts to the repository of your software? Or, barring that, to put them in another publicly available repository that you could refer to in your manuscript? That would improve the reproducibility of the benchmark results (I am already running into some issues running the scripts...).


lvermue commented Feb 3, 2019

Hi @arokem
The initial reason for not adding the runtime analysis scripts to our repo was that they were not considered part of the software itself. However, as they are part of the paper, I agree with you and have now added them in a subdirectory called 'benchmark' within the paper directory, including a small README listing the requirements for running the tests. Furthermore, the scripts were cleaned up and changed to run on a single node, so the Redis lock setup for distributed calculations is no longer required.
See DTUComputeStatisticsAndDataAnalysis/MBPLS@13007ac


arokem commented Feb 6, 2019

@brainstorm: I am ready to check that last box, and from my point of view this paper is ready to be accepted for publication.

@lvermue : Thanks for adding the code. I think that it benefits the paper greatly.

One small thing: I don't think that this generates the plots that appear in the paper. I realize that this is just one small additional step, but I think that it would be helpful for readers to see how you would get from your benchmark code to these plots. But take that as a recommendation, rather than as a requirement for my review of the paper.


brainstorm commented Feb 6, 2019

Thanks @arokem! I would also like to see those plots in the paper itself :)


brainstorm commented Feb 6, 2019

@whedon check references


whedon commented Feb 6, 2019

Attempting to check references...

whedon commented Feb 6, 2019


OK DOIs

- http://doi.org/10.1002/cem.1180090604 is OK
- http://doi.org/10.1002/cem.1180080204 is OK
- http://doi.org/10.1093/bib/bbw113 is OK
- http://doi.org/10.1016/0898-5529(89)90004-3 is OK
- http://doi.org/10.1016/0169-7439(93)85002-X is OK
- http://doi.org/10.18637/jss.v086.i01 is OK
- http://doi.org/10.1137/0905052 is OK
- http://doi.org/10.1002/cem.1180070104 is OK

MISSING DOIs

- https://doi.org/10.3389/fninf.2014.00014 may be missing for title: Scikit-learn: Machine learning in Python

INVALID DOIs

- http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S is INVALID

brainstorm commented Feb 6, 2019

@lvermue Please correct the missing/invalid DOIs.


lvermue commented Feb 6, 2019

@brainstorm @arokem I just updated the runtime analysis scripts. They now contain the code that was used to create the plots shown in the paper. See
DTUComputeStatisticsAndDataAnalysis/MBPLS@abc9bf5

Furthermore, I had a look at the missing/invalid DOIs.

Missing DOI: I could not find any DOI for the paper "Scikit-learn: Machine learning in Python", and the suggested one is definitely wrong. Is there another way to resolve this issue?

Invalid DOI: I have triple-checked this DOI and cannot see what should be wrong with it. Any idea how to fix this one? :)


brainstorm commented Feb 7, 2019

@lvermue I suspect it has to do with "special characters" in the URL. @arfon, have you encountered these errors in the whedon DOI checker parser before?


arfon commented Feb 7, 2019

Scikit-learn: Machine learning in Python

@lvermue - the checks done by @whedon should be considered suggestions, not definite errors, so please ignore this one.

http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S

Yes, this looks like a bug in the DOI checking code. I'll fix this now, but as you say, @lvermue, http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S is fine.
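
The failure mode is easy to reproduce with a naive validator. The sketch below is purely illustrative and is not whedon's actual checking code: it shows how a pattern that stops at "special" characters rejects a perfectly valid legacy DOI containing parentheses and angle brackets, while a more permissive pattern accepts it.

```python
import re

# A valid (legacy Wiley-style) DOI containing parentheses and angle brackets:
doi = "10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S"

# Naive check: only word characters, dots, hyphens and slashes after the prefix.
naive = re.compile(r"^10\.\d{4,9}/[\w.\-/]+$")

# Safer check: a DOI suffix may contain any printable non-whitespace character.
permissive = re.compile(r"^10\.\d{4,9}/\S+$")

assert naive.match(doi) is None           # the naive pattern wrongly rejects it
assert permissive.match(doi) is not None  # the permissive pattern accepts it
```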


arfon commented Feb 7, 2019

@whedon check references


whedon commented Feb 7, 2019

Attempting to check references...

whedon commented Feb 7, 2019


OK DOIs

- http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S is OK
- http://doi.org/10.1002/cem.1180090604 is OK
- http://doi.org/10.1002/cem.1180080204 is OK
- http://doi.org/10.1093/bib/bbw113 is OK
- http://doi.org/10.1016/0898-5529(89)90004-3 is OK
- http://doi.org/10.1016/0169-7439(93)85002-X is OK
- http://doi.org/10.18637/jss.v086.i01 is OK
- http://doi.org/10.1137/0905052 is OK
- http://doi.org/10.1002/cem.1180070104 is OK

MISSING DOIs

- https://doi.org/10.3389/fninf.2014.00014 may be missing for title: Scikit-learn: Machine learning in Python

INVALID DOIs

- None

arfon commented Feb 7, 2019

OK, @whedon now recognizes http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S as a valid DOI.


brainstorm commented Feb 8, 2019

Thanks @arfon! It should be good to accept/deposit, AFAICT. Thanks for the revisions on the paper, @lvermue, and for the reviewing effort, @arokem!


brainstorm commented Feb 8, 2019

@lvermue Can you please provide a Zenodo DOI so we can record it as the archive?


lvermue commented Feb 8, 2019

@brainstorm @arokem Thank you for the swift and constructive review process!
@arfon Thanks for the super fast fix!

Here is the Zenodo DOI:
https://doi.org/10.5281/zenodo.2560303


brainstorm commented Feb 9, 2019

@whedon set 10.5281/zenodo.2560303 as archive

whedon commented Feb 9, 2019

OK. 10.5281/zenodo.2560303 is the archive.


brainstorm commented Feb 9, 2019

@whedon accept


whedon commented Feb 9, 2019

Attempting dry run of processing paper acceptance...

whedon commented Feb 9, 2019

PDF failed to compile for issue #1190 with the following error:

/app/vendor/ruby-2.4.4/lib/ruby/2.4.0/find.rb:43:in `block in find': No such file or directory - tmp/1190 (Errno::ENOENT)
	from /app/vendor/ruby-2.4.4/lib/ruby/2.4.0/find.rb:43:in `collect!'
	from /app/vendor/ruby-2.4.4/lib/ruby/2.4.0/find.rb:43:in `find'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-dc9ad3c41cc6/lib/whedon/processor.rb:57:in `find_paper_paths'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-dc9ad3c41cc6/bin/whedon:70:in `compile'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/base.rb:466:in `start'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-dc9ad3c41cc6/bin/whedon:113:in `<top (required)>'
	from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23:in `load'
	from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23:in `<main>'


whedon commented Feb 9, 2019


OK DOIs

- http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S is OK
- http://doi.org/10.1002/cem.1180090604 is OK
- http://doi.org/10.1002/cem.1180080204 is OK
- http://doi.org/10.1093/bib/bbw113 is OK
- http://doi.org/10.1016/0898-5529(89)90004-3 is OK
- http://doi.org/10.1016/0169-7439(93)85002-X is OK
- http://doi.org/10.18637/jss.v086.i01 is OK
- http://doi.org/10.1137/0905052 is OK
- http://doi.org/10.1002/cem.1180070104 is OK

MISSING DOIs

- https://doi.org/10.3389/fninf.2014.00014 may be missing for title: Scikit-learn: Machine learning in Python

INVALID DOIs

- None

arfon commented Feb 9, 2019

@whedon generate pdf


whedon commented Feb 9, 2019

Attempting PDF compilation. Reticulating splines etc...

whedon commented Feb 9, 2019


arfon commented Feb 9, 2019

@whedon accept


whedon commented Feb 9, 2019

Attempting dry run of processing paper acceptance...

whedon commented Feb 9, 2019


OK DOIs

- http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S is OK
- http://doi.org/10.1002/cem.1180090604 is OK
- http://doi.org/10.1002/cem.1180080204 is OK
- http://doi.org/10.1093/bib/bbw113 is OK
- http://doi.org/10.1016/0898-5529(89)90004-3 is OK
- http://doi.org/10.1016/0169-7439(93)85002-X is OK
- http://doi.org/10.18637/jss.v086.i01 is OK
- http://doi.org/10.1137/0905052 is OK
- http://doi.org/10.1002/cem.1180070104 is OK

MISSING DOIs

- https://doi.org/10.3389/fninf.2014.00014 may be missing for title: Scikit-learn: Machine learning in Python

INVALID DOIs

- None

whedon commented Feb 9, 2019

Check final proof 👉 openjournals/joss-papers#479

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#479, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

arfon commented Feb 9, 2019

@arokem @brainstorm - do these look OK to you ☝️?


brainstorm commented Feb 10, 2019

All good on my side, ready to be deposited! @arokem gave the thumbs up earlier in the thread, so I assume it's all good from the reviewer's side too.


arfon commented Feb 10, 2019

@whedon accept deposit=true

whedon added the accepted label Feb 10, 2019


whedon commented Feb 10, 2019

Doing it live! Attempting automated processing of paper acceptance...

whedon commented Feb 10, 2019

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 openjournals/joss-papers#480
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01190
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...


arfon commented Feb 10, 2019

@arokem - many thanks for your review, and to @brainstorm for editing this submission.

@lvermue - your paper is now accepted into JOSS ⚡️🚀💥

arfon closed this Feb 10, 2019


whedon commented Feb 10, 2019

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](http://joss.theoj.org/papers/10.21105/joss.01190/status.svg)](https://doi.org/10.21105/joss.01190)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01190">
  <img src="http://joss.theoj.org/papers/10.21105/joss.01190/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: http://joss.theoj.org/papers/10.21105/joss.01190/status.svg
   :target: https://doi.org/10.21105/joss.01190

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following: