
Submission for Issue #79 #150

Merged

Conversation


@ArnoutDevos ArnoutDevos commented Jan 7, 2019

#79

@reproducibility-org reproducibility-org changed the title from "#79" to "Submission for Issue #79" on Jan 7, 2019
@reproducibility-org reproducibility-org added the "checks-complete" (Submission criteria checks complete) label on Jan 7, 2019
@ArnoutDevos
Contributor Author

@reproducibility-org complete

@koustuvsinha koustuvsinha added and removed the "reviewer-assigned" (Reviewer has been assigned) label on Feb 1, 2019
@reproducibility-org
Collaborator

Hi, please find below a review submitted by one of the reviewers:

Score: 8
Reviewer 1 comment: This work tries to reproduce meta-learning with MAML and "differentiable closed-form solvers". The results match those of the original paper, and the hyperparameters are well searched. However, the writing of this report could be improved, and more ablation studies could be included.
Confidence: 3
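
For context on the second technique Reviewer 1 names: below is a minimal sketch of the differentiable closed-form base learner at the core of R2D2 (Bertinetto et al., 2019), assuming PyTorch. The function name, shapes, and the fixed `lam` are illustrative assumptions, not the submission's actual code.

```python
import torch

def ridge_solver(X, Y, lam=1.0):
    """Closed-form ridge regression as a differentiable base learner.

    X: (n, d) support-set embeddings, Y: (n, k) one-hot targets.
    Every operation here is differentiable, so the meta-gradient can
    flow through the solver into the embedding network.
    """
    n, d = X.shape
    eye = lambda m: torch.eye(m, dtype=X.dtype, device=X.device)
    if n < d:
        # Woodbury identity: solve an (n x n) system instead of (d x d)
        W = X.t() @ torch.linalg.solve(X @ X.t() + lam * eye(n), Y)
    else:
        W = torch.linalg.solve(X.t() @ X + lam * eye(d), X.t() @ Y)
    return W  # (d, k) weights applied to query embeddings
```

Because the inner problem is solved in closed form, no inner-loop gradient steps are needed, which is the contrast with MAML that the reviewers touch on.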

@reproducibility-org
Collaborator

Hi, please find below a review submitted by one of the reviewers:

Score: 8
Reviewer 2 comment: This report makes an effort to reproduce the main results of the paper by Bertinetto et al. (2019).

In the absence of any open-source code release, the authors implemented code to reproduce its results, starting from the original paper and a previous implementation by Finn (2018), and shared their code publicly on GitHub. The submission also specifies which libraries and versions were used in their code, as well as the type of hardware on which the experiments were run.
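
As a rough illustration of the baseline the authors started from: one inner-loop adaptation step in the style of MAML (Finn, 2018), assuming PyTorch. `model` and `loss_fn` are placeholders; this is a sketch of the technique, not the submission's implementation.

```python
import torch

def maml_inner_step(model, loss_fn, x_support, y_support, inner_lr=0.01):
    """One MAML inner-loop step: adapt parameters on the support set.

    create_graph=True retains the computation graph of this update, so
    the outer (meta) loss on the query set can backpropagate through it.
    """
    params = list(model.parameters())
    loss = loss_fn(model(x_support), y_support)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - inner_lr * g for p, g in zip(params, grads)]
```

A full implementation would then run a functional forward pass with the adapted parameters on the query set and backpropagate the query loss to the original weights.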

This reproducibility work clearly states which experiments and scenarios the authors aimed to reproduce, what procedure they followed to implement the necessary code, and what assumptions and decisions they had to make to get around the lack of implementation details in the original manuscript, thus directly pointing at a shortcoming of the paper they were analyzing. The availability of the hyperparameter values chosen by the authors of this work provides a more solid baseline for future reproducibility efforts in this domain and offers a concrete suggestion to Bertinetto et al. on how to improve the impact and extensibility of their work.
In fact, the authors of this reproducibility analysis shared their findings with the original authors using OpenReview, and the original authors were able to improve their contribution by addressing some of the issues raised in this reproducibility report.

Paragraph 4 includes a thoughtful discussion of how the choice to vary the number of classes at training time affects reproducibility and fairness of comparison with prior literature, which reflects the care and attention to detail employed by the authors of this work.

While the statement about the vagueness of the stopping criterion chosen in the work by Bertinetto et al. (2019) is valid, the criterion chosen in this report does not seem to match the original description.
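
To make the stopping-criterion point concrete: below is a hypothetical patience-based criterion on meta-validation accuracy, one common choice the report's criterion could be compared against. `run_meta_batch` and `eval_meta_val` are placeholder callables, not names from the submission.

```python
def train_with_early_stopping(run_meta_batch, eval_meta_val,
                              max_iters=60000, eval_every=500, patience=10):
    """Stop when meta-validation accuracy has not improved for
    `patience` consecutive evaluations."""
    best_acc, best_iter, stale = 0.0, 0, 0
    for it in range(max_iters):
        run_meta_batch()                 # one meta-training step
        if it % eval_every == 0:
            acc = eval_meta_val()        # accuracy on held-out tasks
            if acc > best_acc:
                best_acc, best_iter, stale = acc, it, 0
            else:
                stale += 1
            if stale >= patience:
                break
    return best_acc, best_iter
```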

The introduction paragraph could use more citations to prior work.

A long (perhaps excessive?) background paragraph provides a pedagogical introduction to the architecture introduced by Bertinetto et al. (2019) and to the broader field of meta-learning. Although this is helpful for assessing the authors' familiarity with the subject, it may not be relevant or appropriate for a reproducibility analysis paper.

The language used in the paper is at times too colloquial.

Sufficient experimentation and an attempt at discussing the observed results are present in this report. More in-depth work to determine the compatibility of these results with the original ones, including better estimation of possible systematic deviations due to hyperparameter choices, would be beneficial.
Confidence: 4

@reproducibility-org
Collaborator

Hi, please find below a review submitted by one of the reviewers:

Score: 9
Reviewer 3 comment: TA Review

  • The report is very well-written with enough background details on meta-learning.
  • The authors explain the suggested meta-learning model and present the algorithm in their own words.
  • The document focuses only on the R2D2 model, and the results seem reproducible.
  • As a baseline, the MAML algorithm, based on backpropagation, is also considered.
  • If I understand correctly, the authors formulate the problem as a regression task instead of classification, with scaling parameters for the output (see the sketch after this list). Apart from noting that it improves performance, I don't see a discussion of how it helps. Including this would be really helpful.
  • There seems to be a small but significant difference between your implementation's results and the paper's. The argument that it might be due to the assumptions made is believable. Did you try any other stopping criteria, and if so, what were the results?
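
Regarding the regression-with-scaling point in the list above: a minimal sketch of how ridge-regression outputs on one-hot targets can be turned into classification logits via learned scale and bias meta-parameters, assuming PyTorch. One plausible reason the scaling helps, stated here as an assumption rather than the authors' explanation, is that raw regression outputs toward one-hot targets are poorly calibrated as softmax logits.

```python
import torch
import torch.nn.functional as F

def r2d2_logits(W, x_query, alpha, beta):
    """Scale ridge-regression outputs into classification logits.

    W: (d, k) weights from the closed-form solver on the support set.
    x_query: (m, d) query embeddings.
    alpha, beta: learnable scalar meta-parameters (scale and bias).
    """
    return alpha * (x_query @ W) + beta

# Hypothetical usage with a cross-entropy meta-loss:
# loss = F.cross_entropy(r2d2_logits(W, x_query, alpha, beta), y_query)
```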

NB: This TA review was provided directly by the institution, and the authors have communicated with the reviewers regarding changes/updates.
Confidence: 4

@reproducibility-org reproducibility-org added the "review-complete" (Review is done by all reviewers) label on Mar 22, 2019
@reproducibility-org
Collaborator

Meta Reviewer Decision: Accept

@reproducibility-org reproducibility-org added the "accept" (Meta Reviewer decision: Accept) label on Mar 31, 2019