
Review sections #1

Closed
adamhsparks opened this issue Sep 6, 2016 · 4 comments

@adamhsparks (Member)

I've made a nascent attempt at outlining the document.

Are the sections I've proposed acceptable, or do we need more or fewer?

@adamhsparks (Member Author)

I'm assuming everyone is OK with the outline I provided some time ago; from the e-mail exchanges it seemed so.

@zachary-foster (Collaborator)

Hi @adamhsparks, sorry for the inactivity on my part. There has been a lot going on lately.

Is the outline you are referring to here: https://github.com/adamhsparks/Reproducible-Research-in-Plant-Pathology/blob/master/What%20Does%20Reproducible%20Research%20Mean%20for%20Plant%20Pathology.Rmd?

If so, I think this is a good structure. Basically, I think we should define a set of criteria for what we call "reproducible research" and then see what proportion of articles fall into that category. That seems to agree with your outline.

What do you think about splitting up "reproducible research" into categories? For example:

  • Field/lab methods:
    • Sample metadata (location, dates, weather, etc.)
    • Unambiguous instructions
    • Curated cultures, if applicable
    • Publicly accessible (i.e., don't have to email anyone, have an account anywhere, or pay anything)
  • Raw data accessibility:
    • Online
    • Publicly accessible (i.e., don't have to email anyone, have an account anywhere, or pay anything)
    • Well annotated, so it's understandable independently of the methods/paper
  • Computational methods:
    • Publicly accessible (i.e., don't have to email anyone, have an account anywhere, or pay anything)
    • Uses open-source, free software
    • All scripted; no manual editing or point-and-click
    • Version controlled from the start of the project
    • Well annotated, so it's understandable independently of the methods/paper

This is just what I came up with off the top of my head; I'm sure it can be refined and expanded. I'm thinking a list of criteria like this could be used to evaluate each paper. We could make a bar chart showing the proportion of papers that pass each criterion.
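The tallying described above could be sketched roughly as follows (in Python here, though the project itself uses R; the criterion names and the sample records are entirely hypothetical, just to show the shape of the calculation):

```python
# Hypothetical sketch: each paper is scored against a set of boolean
# reproducibility criteria, and we compute the proportion of papers
# passing each criterion (these proportions would feed the bar chart).
papers = [
    {"raw_data_online": True,  "open_source_software": True,  "scripted": False},
    {"raw_data_online": False, "open_source_software": True,  "scripted": False},
    {"raw_data_online": True,  "open_source_software": False, "scripted": True},
]

criteria = papers[0].keys()
proportions = {
    c: sum(p[c] for p in papers) / len(papers)  # fraction of papers passing c
    for c in criteria
}

for criterion, prop in proportions.items():
    print(f"{criterion}: {prop:.0%}")
```

A real implementation would read the scored papers from a curated data file rather than a hard-coded list, but the per-criterion aggregation would look the same.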

@adamhsparks (Member Author)

This is good feedback, @zachary-foster. I agree with your start here; metrics with which to measure reproducibility are exactly what we need.

I'll think on it some more too, but this is a great start.

@adamhsparks (Member Author)

Closing this as I think we're pretty well settled here.

See https://github.com/phytopathology/Reproducible.Plant.Pathology/blob/master/vignettes/reproducibility_criteria.Rmd for the agreed-upon methods of measuring reproducibility.
