Review Request: C. de la Torre-Ortiz and A. Nioche #10
Comments
Happy to take this on. @bobaseb are you interested in reviewing this? 😄
Sure, I'm in :)
👋 @anne-urai would you be able to review this?
I don't really have expertise in neural networks, but my colleague @jamesproach does, and he is happy to do this. He'll register as a reviewer first.
@anne-urai thank you! @jamesproach let me know when you are ready and if you have any questions. Same goes for you @bobaseb, of course. The sooner the better, but let us know when you plan to get this done. 😄
The authors do a good job of transforming the original model, based on individual neurons, into a population model. New parameters are introduced to this effect, and the original model's dynamics (i.e., retrieving a memory while inhibiting others under periodic inhibition) are broadly reproduced. The code is clear and human-readable, with enough comments to get the gist of each computation. I was able to run it with ease on a Linux machine. Three specific questions came to mind when reviewing this work.
Since the original model has been modified and only the first figure of the original paper seems to have been replicated, perhaps this is best suited as a partial replication.
@jamesproach what's a realistic ETA for you to take a look here? Thanks. 😄
The authors reproduce simulation results from Recanatesi et al. using a model of associative memory retrieval which accounts for the sequential activation of memories through two network features: (1) synaptic connections which enforce sequential associations between particular memories, and (2) periodic inhibition. This is an extension of the Hopfield network model of auto-associative memory. In their reproduction, the authors employ a dimensionality reduction proposed in the original article, which groups units that present the same activity across all memories, reducing the size of the dynamical system by several orders of magnitude. The authors were able to reproduce the simulation results from the original article with slight modifications to the parameters. The associated Python code is clearly written, well commented, and fully reproduces the figures in the manuscript.
Major Comment:
Minor Comments: Could the authors provide a reproduction of the simulation results in figures 3-6?
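For readers unfamiliar with this model class, the two features described above (forward associations between memories, plus periodic global inhibition) can be sketched in a minimal Hopfield-style rate model. This is not the authors' code: the network size, coupling strength, and sinusoidal inhibition waveform are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 4                                  # units, stored memories (assumptions)
xi = rng.choice([0.0, 1.0], size=(P, N))       # binary memory patterns

# Hebbian auto-associative term plus a weak forward term linking
# memory mu to memory mu + 1 (the "sequential associations" feature).
J = (xi - xi.mean()).T @ (xi - xi.mean()) / N
J += 0.2 * xi[1:].T @ xi[:-1] / N
np.fill_diagonal(J, 0.0)

def simulate(T=1000, period=200, amp=1.0):
    """Rate dynamics with a sinusoidally oscillating global inhibition term."""
    r = xi[0].copy()                           # initialize in memory 0
    overlaps = np.empty((T, P))
    for t in range(T):
        phi = amp * (1.0 + np.sin(2.0 * np.pi * t / period)) / 2.0
        h = J @ r - phi * r.mean()             # periodic inhibition (feature 2)
        r = np.tanh(np.maximum(h, 0.0))        # non-negative rates in [0, 1)
        overlaps[t] = xi @ r / N               # similarity to each stored memory
    return overlaps

overlaps = simulate()
```

Tracking the overlap of the state with each stored pattern over time is the usual way to visualize which memory is currently retrieved.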
@oliviaguest @c-torre Gentle reminder |
Dear all, |
Thank you for the update! Sounds like you are making huge progress. 👍 |
Any progress?
Dear all,

Following the reviews, we attempted to reproduce all the figures of the initial paper. Although the overall behavior of the network is similar to that of the initial network (figure 1), we could not replicate many of the other results (figures 3 to 6). At this point, our conclusion would be that the original article is not reproducible.

However, while working on the replication, we began a discussion by mail with one author of the original article. He has been able to provide scripts from early versions of the code (in a proprietary language). It is very likely that the quality of this replication would benefit from this newfound help, but further investigation is still required. We therefore thank the editorial board and the reviewers in advance for their patience.

Best regards,
@rougier this seems to be a reason to rewrite and re-review this work? What's our policy on this type of situation? |
I think we have a section on non-reproducibility which says to contact the original author and try to see where the problem lies. I guess we can now wait for the new input and check whether @c-torre will be able to replicate the work based on it. We can leave this thread open until then.
Gentle reminder |
Dear everyone, In the meantime, we addressed the reviewer comments, including the change in one parameter value. We then reimplemented the model, making new adjustments to equations and parameters, and were able to obtain figure 2 again. We contacted the original author, who provided what appears to be an implementation similar to the one used for their article. On the other hand, we are now able to provide far more detail on the theory and implementation. We are wondering how we should proceed in this case.
@c-torre Can you please provide a bit more info regarding what has changed and why? I don't blame you in any way, I just would like to know more and your opinion. |
The complete timeline occurred as follows: I launched the longer simulations to get results for the last figures, but again could not reproduce the results. At that point I contacted the first author of the original article again. I could not get a reply, even after a reminder. I wonder whether the current situation may have an effect in some way, but for now that is only guessing. There do not seem to be any other changes that would explain why we could not reach them again.
@c-torre Very useful. Thank you. Have you emailed the corresponding author again? Or just once at the start? |
PS: @c-torre I also forgot to ask... did you email any of the other authors? Original article has four in total. |
I emailed the corresponding author only at the beginning, as I was initially told by the first author he might not reply to emails often in general. I did not contact other authors apart from first and corresponding (last). |
@rougier @khinsen @benoit-girard what do we do/think in this case? Is there a value in asking for input from (any of) the original authors? |
Hi all, I hope everything is well. We would like to give an update on the state of the replication, especially since many things have happened since we contacted all the authors.

We got replies and clarifications for a few questions we had. They spotted and confirmed that several typos were introduced into the equations, which made the replication impossible. The first author found a way back into the cluster where the original files were stored and discovered that the parameters in the cluster simulations were different from the ones in the previously provided script, and also from those reported in the paper. As of right now, it seems that the first author could finally fully simulate all the figures, which should complete the replication work.

The first and last authors expressed their willingness to publish a correction to the original article, especially once this replication paper is published, so that they can provide a direct link to an implementation and credit us for helping with the corrections. We are working on the simulations now and will be back with, hopefully, good news.
@c-torre: Thank you so much for the update and well done on all the work in figuring this out! I am very pleased to have pushed (lightly) towards you contacting them — this is highly productive for the journal/this publication... and I hope also has been useful for you and your work as well as, of course, science generally! Great work and I am excited to see the next steps. 👏 |
Dear all, We are thankful for the reviewers' comments, as they have drastically improved the quality of our work. We also thank everyone for the discussion, as it steered the replication work in the appropriate direction. We would encourage the reviewers to revisit both the manuscript and the code, as very major changes have taken place. Here we address the review comments:
Review 1
A typo was found and corrected; the parameter is now set to its correct value of 0.1.
We now follow the original interpretation, where recalls are consistently observed at every inhibition minimum and rates always rise above this threshold value. In the implementation this means that recalls are determined not by a threshold crossing, but by the time iteration at which the peak maximum occurs.
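A minimal sketch of such peak-based recall detection (the function name, window size, and array layout are illustrative assumptions, not the authors' implementation): given per-memory firing-rate traces and the iterations at which the inhibition minima occur, each recall is timed at the rate peak within a window around the minimum.

```python
import numpy as np

def recalls_from_peaks(rates, inhibition_minima, half_window):
    """Record one recall per inhibition minimum: the memory whose rate
    peaks in the surrounding window, timed at the peak maximum rather
    than at a threshold crossing."""
    events = []
    for m in inhibition_minima:
        lo, hi = max(0, m - half_window), min(len(rates), m + half_window)
        window = rates[lo:hi]                      # shape: (time, memories)
        t_rel, mu = np.unravel_index(np.argmax(window), window.shape)
        events.append((int(lo + t_rel), int(mu)))  # (iteration, memory index)
    return events
```

With rates of shape (time, memories) and the minima known from the inhibition waveform, this yields one (iteration, memory) recall event per oscillation cycle.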
We have corrected this major issue and now provide the missing and extra figures.
Review 2
We have been collaborating with the original authors as we discovered errors in the original article. Normalization of parameters was indeed one of the main issues with the original paper. We have addressed this in detail in the new manuscript, as it is the core of the major changes since our original review request.
A new section has been added with the new figures, which addresses the changes to the seed in more detail: 10,000 networks with different seeds are simulated to produce the recall-analysis figures and conclusions. We agree that Figure 2 can be seed- or parameter-dependent, and that many recalls in a row are possible with seed changes, as seen in later figures. While you have a look at our replies, we intend to continue collaborating on the correction of the original article, as the original authors expressed a wish to keep working together on this issue, and we hope to encourage future citations to also include our work. Thank you for all your patience and help,
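The many-seed analysis described above amounts to a simple sweep over independently seeded simulations. A sketch, where the per-network function is a placeholder (an assumption, returning a fake recall count) and the network count is reduced from the 10,000 reported:

```python
import numpy as np

def count_recalls(seed):
    """Placeholder for one full network simulation; a real run would
    build and simulate the network with this seed and count recalls."""
    rng = np.random.default_rng(seed)
    return int(rng.integers(0, 16))    # fake recall count (assumption)

n_networks = 100                       # reduced from 10,000 for illustration
counts = np.array([count_recalls(seed) for seed in range(n_networks)])
summary = {"mean": counts.mean(), "std": counts.std(), "max": counts.max()}
```

Seeding each network independently keeps every simulation reproducible on its own while still sampling the variability across networks.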
@oliviaguest Gentle reminder |
@jamesproach and @bobaseb would you be able to give feedback (e.g., whether your comments and questions have been addressed) on the above, please?
Yes, my comments have been addressed. I'm happy the replication worked out in the end after contacting the original authors and correcting for errors in the original article (all stated in the current replication). |
@c-torre OK, amazing! Can you please update the metadata.md file with the details of each reviewer, editor, etc., please? Acceptance date is today and also see: ReScience/ReScience#48 (comment)! 🥳 |
Hi all, This is very good news. However, we went an extra step to ensure that everything is absolutely correct this time. Thanks to the original authors, we have spotted the following mistakes in our current submission:
While the replication is in a much better place now than our first submission was, I believe it's in the community's best interest to address these issues before proceeding to final publication.
Sure, @c-torre! I'd assumed you had fixed these. Let me know when it's good to go. |
Hi! Manuscript is ready and the code has a Zenodo DOI now. What's next? |
We'll publish the paper next week most probably. Don't hesitate to remind us. |
Hi! No worries, thanks for letting me know. Take care and proceed when you're feeling better. There's no rush on our side. All the best. |
@oliviaguest I can publish it for you, just tell me. |
If you can that would be amazing — thank you, @rougier! |
@oliviaguest no prob. |
Added article source with metadata.yaml at c-torre/replication-recanatesi-2015-article |
Perfect, thanks. Can you update your article to the latest template (and check that it does not break anything)? This is necessary to include a Software Heritage ID for your code, which is now the default way of ensuring the code will be available in the future. Getting your code on Software Heritage is really simple; just check https://www.softwareheritage.org/save-and-reference-research-software/
Updated the template, corrected LaTeX errors, and added the code to Software Heritage (c-torre/replication-recanatesi-2015-article)
Thanks, I'll try to publish it today. For the SWH link, can you send the SWHID (instead of the URL)? You should have received it when you saved the repo. The article template will build the full URL from this ID.
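For reference, an SWHID (the identifier behind the Software Heritage "permalink") has the following general shape; the hash placeholder below is illustrative, not this project's actual ID:

```
swh:1:dir:<40-hexadecimal-character hash>
```

Qualifiers such as `;origin=<repository URL>` may be appended, but the core `swh:1:...` part is what the template expects.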
Here is the sandboxed version (not the real one): https://sandbox.zenodo.org/record/719803
Sandboxed version looks good. For the SWHID, the only thing I seem to find is a "permalink" hidden in an SWH sidebar:
Is it any of these? |
First one ( |
Your article is published at https://zenodo.org/record/4461767. It will appear soon on the ReScience website as well. |
@oliviaguest Feel free to close the issue. |
Thank you all for this! Nice work. 😊 |
Dear ReScience editors,
I request a review for the following replication:
Original article: https://doi.org/10.3389/fncom.2015.00149
PDF URL: https://github.com/c-torre/replication-recanatesi-2015/blob/master/re-neural-network-model-of-memory-retrieval.pdf
Metadata URL: https://github.com/c-torre/replication-recanatesi-2015/blob/master/metadata.tex
Code URL: https://github.com/c-torre/replication-recanatesi-2015/
Scientific domain: Computational Neuroscience
Programming language: Python
Suggested editor: @oliviaguest