Artifact completeness and size #1

Open
jon-bell opened this issue Nov 24, 2019 · 4 comments
Labels
policy An issue about refining a policy

Comments

@jon-bell
Collaborator

jon-bell commented Nov 24, 2019

Must the artifact provide data and tools to replicate ALL experiments in a paper, or is it allowable to scope an artifact to consider only part of the claims?

Who decides (authors, reviewers or chairs) what claims in a given paper should be supported by the artifact?

What should we consider "too much data" or "too long of an experiment" that can’t be submitted in full for artifact evaluation? For instance, one researcher might consider a 2GB dataset too large to submit in full, while another might submit a 2TB dataset.

Whatever the criteria for "too big" is - what process should authors follow if their artifact is too big to submit some subset of their artifact for evaluation?

@jon-bell jon-bell added the policy An issue about refining a policy label Nov 24, 2019
@jon-bell jon-bell changed the title Artifact completeness Artifact completeness and size Nov 24, 2019
@sbaltes

sbaltes commented Nov 30, 2019

In my opinion, if authors do not submit the complete dataset, they should be required to justify that decision. For very large datasets, they can always upload them to archive.org and additionally provide a subset that allows reviewers to test the provided scripts. We used the following formulation in the MSR 2019 Mining Challenge CfP:

Already upon submission, authors can privately share their anonymized data and software on preserved archives such as Zenodo or Figshare (tutorial available here). Zenodo accepts up to 50GB per dataset (more upon request). There is no need to use Dropbox or Google Drive. After acceptance, data and software should be made public so that they receive a DOI and become citable. Zenodo and Figshare accounts can easily be linked with GitHub repositories to automatically archive software releases. In the unlikely case that authors need to upload terabytes of data, Archive.org may be used.

@dgraziotin
Member

@sbaltes

sbaltes commented Nov 30, 2019

Thanks Daniel, forgot to copy the link.

@gousiosg

gousiosg commented Dec 1, 2019

Must the artifact provide data and tools to replicate ALL experiments in a paper, or is it allowable to scope an artifact to consider only part of the claims?

By default, yes, unless this is not possible due to other issues (e.g., IP). The burden should be on the authors to justify why some parts of the artefact were not made public.

Who decides (authors, reviewers or chairs) what claims in a given paper should be supported by the artifact?

All claims in a paper must be supported by the artefact. The authors are responsible for either ensuring this or explaining why they cannot. There is no need for someone else to decide.

What should we consider "too much data" or "too long of an experiment" that can’t be submitted in full for artifact evaluation? For instance, one researcher might consider a 2GB dataset too large to submit in full, while another might submit a 2TB dataset.

We could set a round maximum artefact size of 2^32 bytes :-) Seriously, that should be up to the authors.

Whatever the criteria for "too big" is - what process should authors follow if their artefact is too big to submit some subset of their artefact for evaluation?

A representative sample must be extracted from the full dataset. The statistical techniques must be documented by the authors and be subject to evaluation during artefact review. The tools that compose the artefact must be able to work with the sample and produce similar results, within ranges that the authors must describe and explain.
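A minimal sketch (not part of this thread) of what such a documented, reproducible sampling step might look like, assuming a Python-based artefact; the file names, the 1% fraction, and the seed are illustrative placeholders, not policy:

```python
# Hypothetical example: draw a reproducible sample from a large dataset so
# reviewers can run the artefact's tools on a manageable subset.
# File names, the sampling fraction, and the seed are illustrative choices.
import random

SEED = 20191201          # fixed seed so the exact sample can be regenerated
SAMPLE_FRACTION = 0.01   # e.g. ship 1% of the records with the artefact

random.seed(SEED)

with open("full_dataset.csv") as full, open("sample_dataset.csv", "w") as sample:
    header = next(full)
    sample.write(header)
    for line in full:
        # simple uniform random sampling; authors should document whether a
        # stratified scheme is needed to keep the subset representative
        if random.random() < SAMPLE_FRACTION:
            sample.write(line)
```

Whatever sampling scheme is used, the seed and the selection procedure would be documented alongside the artefact so reviewers can regenerate the subset and compare their results against the reported ranges.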
