Review Request: Shifman #22
I request a review for the reproduction of the following paper:
I believe the original results have been faithfully reproduced as explained in the accompanying article.
Thank you for your consideration,
The manuscript is clear and well presented. I would have liked a bit more background in the intro: what is the original study about? And what constitutes a successful replication for a neuron model? Nothing too verbose, just a couple sentences.
The main script ran without issues on my machine (I checked both Python 2.7.2 and 3.5.1), and took about 5 minutes to re-generate all 9 figures. I can confirm that the curves I reproduce match those that were committed by @aaronshifman. I find only slight differences in the axis labels; these are probably due to different fonts or my own local setup.
In the abstract you link to the CellML model ("here"). I think this should be turned into a reference so that the URL is spelled out in the bibliography.
In the methods you mention setting a maximum timestep because of the step changes in stimulation current. I'm a bit confused, because I imagine that the solver would react to the discontinuity by reducing the timestep until it hits the minimum or satisfies the error tolerance. I'm not familiar with that particular solver, though, so maybe I'm wrong here.
In the results section: "All results from the original paper were faithfully reproduced, however for space concerns...". What you mean to say here, if I understand right, is that the omitted figures aren't hiding anything suspicious. I would rephrase that sentence in light of the discrepancies in fig. 2 and fig. 4 (see more below about that).
In fig. 8 the arrows overlap with the text --- maybe tweak the font size.
The code is tidy, well commented and to the point --- there is no doubt what each function is doing. Thanks to the use of scipy's ODE routines there is little ancillary code that could introduce bugs.
One thing that isn't ideal is the way model parameters are defined in multiple places.
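For illustration, one common way to avoid such duplication is to define the parameters once, in a single module-level dictionary, and have every function read from it. This is only a sketch with made-up parameter names and values, not the author's actual code:

```python
# Hypothetical parameter set: names and values are placeholders,
# not those of the replicated model.
PARAMS = {
    "g_na": 120.0,    # sodium conductance (mS/cm^2)
    "g_k": 36.0,      # potassium conductance (mS/cm^2)
    "e_leak": -54.4,  # leak reversal potential (mV)
}

def leak_current(v, params=PARAMS):
    """Example consumer: every function reads from the same dict."""
    g_leak = 0.3  # leak conductance (mS/cm^2), also a placeholder
    return g_leak * (v - params["e_leak"])

print(leak_current(-65.0))
```

Changing a value in `PARAMS` then propagates everywhere, so the code and the figures can never silently disagree about a parameter.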
A minor thing,
Results & replication
The spike train and AP waveform (fig. 3) overlap perfectly with the original (fig. 5), with the exception that the current pulse seems to terminate
Now, as for the discrepancy in figure 2, I'm a bit skeptical that it's an error in the axis labels --- how likely is it that these were added by hand?
In fact, I find further discrepancies when I try to overlap figure 2b (modified to plot in colour for easy comparison):
The green and red curves are fine. But the blue curve definitely doesn't match the original and this affects the total I_Ca too. Also, did I_N and I_P get switched around in the legend?
To me this looks like there is a deeper issue here --- either the original authors reported the wrong parameters for that figure, or there is a mistake in the replication, or there is some other discrepancy (e.g. numerical methods). You mention that you were in touch with them. Did you discuss this with them?
Is it possible that this discrepancy could explain the slight mismatch you report in figure 4?
I agree that, given the general agreement on the shape of the action potentials, this is unlikely to change the general claim of replication. But I think the disagreement in fig. 2 requires further work: either (a) identifying a new set of parameters that matches the original, or (b) discussing the disagreement further in the manuscript and changing the conclusion to reflect that.
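As an aside, for readers wondering how such figure overlays are produced: a minimal sketch of one way to do it in matplotlib, blending the two panels with transparency (synthetic stand-in arrays are used here in place of the actual panel images, which one would normally load with `plt.imread()`):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is needed
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-ins for the two scanned panels; in practice these
# would be image arrays loaded from the published PDF and the replication.
orig = np.zeros((100, 100))
orig[40:60, :] = 1.0  # stand-in for the original curve
repl = np.zeros((100, 100))
repl[45:65, :] = 1.0  # stand-in for the replicated curve

fig, ax = plt.subplots()
ax.imshow(orig, cmap="Greys")
ax.imshow(repl, cmap="Reds", alpha=0.5)  # semi-transparent overlay
fig.savefig("overlay.png", dpi=100)
```

The key is the `alpha` argument of the second `imshow` call; one then nudges the extents until the axes of the two panels line up.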
Thank you @damiendr for the very thorough comments. I will wait for the other reviewer before I make any changes, however I will address your timestep question here.
SciPy's ODE solver (the one I used) is a wrapper around LSODA, which is a variable-timestep, stiffness-switching algorithm (it switches between Adams and BDF methods). So you are correct that it will adjust its timestep to meet the relative and absolute tolerances that I define. However, when the derivatives are very small it will be much more generous with its timestep. Since the current pulse is only 1 ms long, the neuron starts (at least numerically) at rest, and the pulse starts at 5 ms (if I recall correctly), by the time the solver reaches 5 ms it may have a timestep longer than 1 ms and miss the pulse completely. The very small maximum timestep does away with that worry.
I could have started the pulse at 0 and appended 5 ms of rest after it, but this seemed simpler to explain and more like a "real experiment".
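To make the point concrete, here is a minimal sketch of the max-timestep guard on a toy problem (integrating a 1 ms square pulse, not the paper's neuron model). Without `max_step`, a solver coasting through the quiescent period can take steps longer than the pulse and skip it entirely; with a small `max_step` the pulse is guaranteed to be sampled:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy problem: dy/dt = I(t), where I(t) is a 1 ms square pulse
# starting at t = 5 ms. The exact value of y(20) is the pulse
# integral, i.e. 1.0.
T_ON, T_OFF = 5.0, 6.0  # pulse window (ms)

def rhs(t, y):
    I = 1.0 if T_ON <= t < T_OFF else 0.0
    return [I]

# max_step well below the pulse width means the solver cannot
# step over the pulse, whatever its error estimates say.
sol = solve_ivp(rhs, (0.0, 20.0), [0.0], method="LSODA",
                max_step=0.25, rtol=1e-6, atol=1e-9)
print(sol.y[0, -1])  # close to 1.0
```

Dropping the `max_step` argument makes the outcome depend on how large the steps grow during the initial rest period, which is exactly the failure mode described above.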
Also on an unrelated note: how did you overlap the two figures so well?
@aaronshifman thank you for the explanation on the solver and max timestep, now I see the reason why you need a maximum timestep. I think what got me confused is the mention of the discontinuity: this gives the impression that you need a maximum timestep because the current pulse is a square wave. But if I understand right the issue would occur with any short pulse, even a smooth one.
I also had a problem with the reference to CellML: first, the URL is not spelled out explicitly, and second, I do not clearly understand the sentence: did you provide a non-functioning CellML description? Or the authors? Or someone else?
In the first line of the Methods, "author's description": do you mean "authors' description"?
Methods, about the initial conditions: you say that only 2 significant figures are displayed, but in table 1 of the original paper I see 3 to 4.
I could run the code on a Mac with OS X 10.10.5; having obsolete versions of matplotlib (1.4.3), numpy (1.9.2) and scipy (0.13.0b1) did not prevent me from generating the figures (the "all" flag did not work, but I could generate them one by one).
The code itself is quite clear and well organized.
There are no space concerns for ReScience, thus I see no reason to exclude some figures from the manuscript.
Concerning fig 1: the peak of the ADP seems to be a bit lower than -65 mV, while in the original paper it seems to reach that value. This small discrepancy may explain some of the following ones.
Concerning fig 2 (of the original manuscript and of the replication): there seem to be important differences in panel B, where the minimal value of the I_Ca current seems larger than in the original paper, and some differences also appear for I_P and I_N, which seem to cross in the replication while I_P is always smaller than I_N in the original. These differences could of course explain the discrepancy in panel D. Their cause should be investigated, and at least commented on in detail (in the replication paper, only the panel D difference is highlighted). Given the differences observed in panel B, I am not at all sure that the panel D difference is merely a labelling error in the original.
Concerning fig 3 (of the original paper): the ADP for 0.1 uS seems to be almost -65 mV in the original graph, but lower in the replication. A good reason to include this figure in the main text and to discuss it.
Concerning fig 5 (original) / fig 3 (replication): the last potential plateau (without a spike) lasts more than 500 ms in the original, but less in the replication. Again, a discrepancy to be discussed.
In summary, these differences should be investigated, to either identify and solve the problems, or to make them more explicit in the manuscript of the replication.
As for the issue of the differences in the implementation: the authors provided me with their original XPP code, which gives results much closer to my implementation than to the original manuscript. This leads me to believe that the original manuscript misreports a parameter somewhere. I have addressed this in the text.
Thank you again for your comments,
@damiendr: could you give an estimate of when you will be able to give us your final recommendation for the manuscript?
The author has addressed my concerns and the article is much improved — it is especially nice to see all the figures in the text.
Just one minor issue:
In that sentence it's unclear what "some simulations" refers to — I would be more precise and talk about some figures in the original paper.
And one stylistic point:
The wording sounds a bit defensive to me — and you don't need to be, because the overall results do match the original, as you explain. Better to turn this around into a factual statement and say that your implementation shows that the model is valid despite these discrepancies, since you can reproduce all the major claims (frequency adaptation, etc.) using the equations and parameters reported in the original paper.
With these corrections I recommend that the paper be accepted.