
[REVIEW]: DscoreApp: An user-friendly web application for computing the Implicit Association Test D-score #1764

Closed
whedon opened this issue Sep 25, 2019 · 58 comments

Labels: accepted · published (Papers published in JOSS) · recommend-accept (Papers recommended for acceptance in JOSS) · review · rOpenSci (Submissions associated with rOpenSci)

whedon commented Sep 25, 2019

Submitting author: @OttaviaE (Ottavia M. Epifania)
Repository: https://github.com/OttaviaE/DscoreApp
Version: v0.1
Editor: @alexhanna
Reviewer: @benmarwick, @tomfaulkenberry
Archive: 10.5281/zenodo.3523063

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/e1213ac43594c71be9475cfc29ff50e7"><img src="https://joss.theoj.org/papers/e1213ac43594c71be9475cfc29ff50e7/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/e1213ac43594c71be9475cfc29ff50e7/status.svg)](https://joss.theoj.org/papers/e1213ac43594c71be9475cfc29ff50e7)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@benmarwick & @tomfaulkenberry, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @alexhanna know.

Please try to complete your review in the next two weeks.

Review checklist for @benmarwick

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@OttaviaE) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @tomfaulkenberry

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@OttaviaE) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

whedon commented Sep 25, 2019

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @benmarwick, @tomfaulkenberry it looks like you're currently assigned to review this paper 🎉.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications


For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf


whedon commented Sep 25, 2019

Attempting PDF compilation. Reticulating splines etc...


@alexhanna

👋@benmarwick -- feel free to copy your review over from the last issue into this issue, which contains the review proper.
@tomfaulkenberry -- it'd be wonderful if you could work on (and finish!) the review today.

@tomfaulkenberry

I have completed my review of DscoreApp and the associated software paper. DscoreApp is a very nice, fully-featured Shiny app for computing D-scores from Implicit Association Test data. I was able to follow the instructions provided by the authors for the example data.

My main comments are related to the software paper itself, not the code. Thus, instead of opening an issue on the Github repo for DscoreApp, I'll just list them here. These are primarily editorial comments that I think will make the reading of the paper easier.

  1. In paragraph 1, the phrases "flowers images" and "insects images" are used frequently. It may be more clear to say instead "images of flowers" and "images of insects".

  2. Underneath Table 1, I suggest editing "respondents might be given a feedback" to "respondents might be given feedback" (i.e., remove the "a" from the sentence)

  3. For the three-step procedure on computing D-scores: can you explain exactly what standard deviation is being computed? From what is written, I'm not sure whether the standard deviations of each block are being pooled, or whether the trials are all pooled and then standard deviation is computed.

  4. In the section DscoreApp: The figures "D-score results panel" and "Results graphical representations" are not appearing in the paper (but they do in the version of the paper on the Github repo). I'm assuming this is a minor issue caused by some glitch in the paper build, so it should be easy to fix.

Finally, I'll mention that there's no clear instruction in the paper or the Github repo about how to contribute to the project. Given that this is open source software, some indication for how others can go about adding to the project is absolutely necessary (my suggestion -- just give a paragraph on the Github readme about how issues or requests can be opened).

Overall, I am quite happy with this package (actually, I'm very impressed with the coding!). I think it will be a useful app for many people who use IATs. Once my concerns noted above have been addressed, I will enthusiastically endorse publication in JOSS.

@OttaviaE

@tomfaulkenberry Thanks for your comments!

  1. In paragraph 1, the phrases "flowers images" and "insects images" are used frequently. It may be more clear to say instead "images of flowers" and "images of insects".

Done.

  2. Underneath Table 1, I suggest editing "respondents might be given a feedback" to "respondents might be given feedback" (i.e., remove the "a" from the sentence)

Done.

  3. For the three-step procedure on computing D-scores: can you explain exactly what standard deviation is being computed? From what is written, I'm not sure whether the standard deviations of each block are being pooled, or whether the trials are all pooled and then standard deviation is computed.

I changed the sentence to "This difference is divided by the standard deviation computed on the pooled trials of both blocks." I hope it is now clearer that the trials of the two blocks are first pooled together, and then the standard deviation is computed.
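To make the pooled computation concrete, here is a minimal sketch in Python (not taken from DscoreApp; the function name, the toy latencies, and the choice of the sample standard deviation are my own assumptions):

```python
from statistics import mean, stdev

def d_score(block_a, block_b):
    """Mean latency difference between two blocks, divided by the
    standard deviation of the pooled trials of both blocks."""
    pooled = block_a + block_b  # pool the trials of both blocks first
    return (mean(block_b) - mean(block_a)) / stdev(pooled)

# Toy response latencies (in ms) for the two critical blocks
compatible = [600, 650, 700]
incompatible = [800, 850, 900]
print(round(d_score(compatible, incompatible), 2))
```

The key point is that `stdev` is applied to the pooled list, not averaged across per-block standard deviations.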

  4. In the section DscoreApp: The figures "D-score results panel" and "Results graphical representations" are not appearing in the paper (but they do in the version of the paper on the Github repo). I'm assuming this is a minor issue caused by some glitch in the paper build, so it should be easy to fix.

I think it is a glitch, and last time @kyleniemeyer managed to fix it.

I already addressed the issues highlighted by @benmarwick, but I can copy and paste the reply here.

@tomfaulkenberry

Thanks @OttaviaE...as soon as I see the revised manuscript, I'll endorse publication

@kyleniemeyer

@OttaviaE it looks like your figures are currently included using HTML commands, but the paper needs to use the Markdown commands instead.

So, for example, replace

<div class="figure">
<img src="results.png" alt="D-score results panel." width="98%" />
<p class="caption">D-score results panel.</p>
</div>

with

![D-score results panel.](results.png)

(but for both figures)

@OttaviaE

@kyleniemeyer thank you, I made the changes you suggested and it's working (allegedly)!

@tomfaulkenberry Thanks!

@tomfaulkenberry

@whedon commands


whedon commented Sep 25, 2019

Here are some things you can ask me to do:

# List Whedon's capabilities
@whedon commands

# List of editor GitHub usernames
@whedon list editors

# List of reviewers together with programming language preferences and domain expertise
@whedon list reviewers

EDITORIAL TASKS

# Compile the paper
@whedon generate pdf

# Compile the paper from alternative branch
@whedon generate pdf from branch custom-branch-name

# Ask Whedon to check the references for missing DOIs
@whedon check references

@tomfaulkenberry

@whedon generate pdf


whedon commented Sep 25, 2019

Attempting PDF compilation. Reticulating splines etc...


@tomfaulkenberry

New pdf looks great. My review is complete. Please consider this my final endorsement. @OttaviaE, nice work indeed :)

@OttaviaE

@tomfaulkenberry thanks again!

@benmarwick

My review is here on the pre-review issue: #1600 (comment)

I see responses by @OttaviaE here: https://github.com/OttaviaE/DscoreApp/tree/master/tests, thanks!

However, I don't see the high-level description anywhere on the repo or the app; can you point me to it more specifically? I'm not familiar with how the JSON files in the tests directory perform tests of the app, so I can't evaluate those (I use testthat). How do we know the tests are passing? Can the app repo be hooked up to Travis so we can get a green badge indicating that all tests are passing? I don't see CONTRIBUTING.md in the repo yet; perhaps another git push will make it show?

I prefer to see shiny apps as packages because I can more easily identify where the core parts are, and understand the test coverage more readily. Then you can have a regular DESCRIPTION file to hold the version number, license details, and other metadata in a standardised, machine-readable way that most users expect to see.


OttaviaE commented Sep 26, 2019

@benmarwick thanks for your reply.

By "high-level description" you mean like the DESCRIPTION file in the R packages? If so, I don’t know whether a file like that would be useful/feasible for the app.

I added a "Contributing" section to the README.md file of the app, in which I give instructions on how to contribute to the app and how to contact me in case of need or to report bugs.

Since a Shiny app is not an R package, you can't use testthat even if you want to (I tried). I tested the app using the shinytest package (https://cran.r-project.org/web/packages/shinytest/index.html). It basically runs the app a first time and saves the expected outputs; subsequent test runs compare the new outputs with the saved ones, and if there are any differences, it returns an error.

I developed this app with the idea of providing an easy-to-use tool for computing the D-score for the IAT. Indeed, the app is meant to be used online at the link I posted in the README file, and I created the GitHub repository so that people can actually check what the code is doing, if they want to. Nevertheless, I developed a package for the computation of the IAT D-score as well. This package is available on both GitHub (https://github.com/OttaviaE/implicitMeasures) and CRAN (https://cran.r-project.org/web/packages/implicitMeasures/index.html).

@alexhanna

👋 @benmarwick can you reply to the last post by @OttaviaE?

@alexhanna

I've checked off things in @benmarwick's review which were checked in issue #1600.

@OttaviaE - there doesn't need to be a DESCRIPTION, but according to JOSS guidelines, the high-level description can be in the README.md file and include:

  • A statement of need
  • Installation instructions
  • Example usage
  • API documentation

@OttaviaE

Hey @alexhanna I've just pushed some changes on GitHub that should address the issues in @benmarwick's review.

Let me know whether it is okay!

@OttaviaE

@whedon generate pdf


whedon commented Oct 28, 2019

Attempting PDF compilation. Reticulating splines etc...


@benmarwick

Thanks, and sorry for the delay. On reflection, I see I've been reviewing using the rOpenSci package review guidelines, which are not necessary here because the author is not submitting for onboarding with rOpenSci (unlike the last package I reviewed for JOSS). So I'm ok with it, nothing further from me. 👍

@alexhanna

👋 @openjournals/joss-eics This one is ready for acceptance. Have a look.

@danielskatz

Is v1.0.0 still the correct version? In the repo I see one release marked v0.1

@danielskatz

Please update the zenodo metadata so that the title matches the paper title and so that the authors match the paper authors.

@OttaviaE

@danielskatz I updated the Zenodo metadata according to your instructions. Concerning the version, it should be v0.1.

@danielskatz

In the paper, is "their belonging category" a standard term in this field? If not, it probably should be changed to "the category to which they belong".

I'm also suggesting a bunch of other changes in OttaviaE/DscoreApp#2

In addition, the paper twice says something like CSV using commas to separate values - given that CSV stands for comma-separated values, this seems a little redundant.

@danielskatz

@whedon set v0.1 as version


whedon commented Oct 30, 2019

OK. v0.1 is the version.

@OttaviaE

@danielskatz thank you.

In the paper, is "their belonging category" a standard term in this field? If not, it probably should be changed to "the category to which they belong".

I agree with you, and I changed the text accordingly.

In addition, the paper twice says something like CSV using commas to separate values - given that CSV stands for comma-separated values, this seems a little redundant.

It's true, but I decided to stress this point because the default column separator changes according to the geographical area. For instance, the default column separator in Italy is the semicolon, unless users change it or specify otherwise every time a new file is saved. I think it's worth repeating for a correct use of the app.
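As a hedged illustration of that pitfall (a short Python sketch, not part of DscoreApp; the column names and values are invented):

```python
import csv
import io

# The same two-column table saved with the Italian-locale default
# separator (semicolon) rather than a comma.
text = "subject;latency\nP01;532\nP02;781\n"

# Reading with the right delimiter splits each row into two fields.
good = list(csv.reader(io.StringIO(text), delimiter=";"))
print(good[0])

# Reading with a comma raises no error: each row silently comes back
# as a single unsplit field, which breaks any later parsing.
bad = list(csv.reader(io.StringIO(text), delimiter=","))
print(len(bad[0]))
```

The failure mode is silent rather than loud, which is why spelling out the expected separator in the paper is useful.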

I'm also suggesting a bunch of other changes in OttaviaE/DscoreApp#2

I accepted your changes. Nonetheless, I'm not sure about the use of the term "SPSS commands". The SPSS scripts one can use for computing the D-score are known as syntaxes, which is why I used that term.

@danielskatz

Ok, please go ahead and fix anything that you think should be fixed, and then we'll do the accept processing.

@OttaviaE

@danielskatz Done!

@danielskatz

@whedon accept


whedon commented Oct 30, 2019

Attempting dry run of processing paper acceptance...


whedon commented Oct 30, 2019


OK DOIs

- 10.1037/0022-3514.85.2.197 is OK
- 10.1037/1089-2699.6.1.101 is OK
- 10.1037/0022-3514.74.6.1464 is OK
- 10.1177/0146167216684131 is OK

MISSING DOIs

- None

INVALID DOIs

- None


whedon commented Oct 30, 2019

Check final proof 👉 openjournals/joss-papers#1065

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1065, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@danielskatz

@whedon accept deposit=true


whedon commented Oct 30, 2019

Doing it live! Attempting automated processing of paper acceptance...


whedon commented Oct 30, 2019

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦


whedon commented Oct 30, 2019

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.01764 joss-papers#1066
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01764
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@danielskatz

Thanks to @benmarwick and @tomfaulkenberry for reviewing and @alexhanna for editing!

@danielskatz

Once the DOI resolves, I will close this issue, but it isn't yet working for me

@OttaviaE

Thanks to you all @benmarwick, @tomfaulkenberry, @alexhanna!

@OttaviaE

@danielskatz the DOI is working for me!

@alexhanna

Congrats to you, @OttaviaE! Thanks so much @tomfaulkenberry, @benmarwick, and @danielskatz!


whedon commented Oct 30, 2019

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01764/status.svg)](https://doi.org/10.21105/joss.01764)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01764">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01764/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01764/status.svg
   :target: https://doi.org/10.21105/joss.01764


We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@arfon arfon added the rOpenSci Submissions associated with rOpenSci label Feb 6, 2020
@whedon whedon added published Papers published in JOSS recommend-accept Papers recommended for acceptance in JOSS. labels Mar 2, 2020