Remove deprecated models iaf_psc_alpha_canon and pp_pop_psc_delta #2294
Conversation
Looks good to me, but I tested the two examples and they do not run for me. I think you need to change tau_syn to tau_syn_{ex, in} somewhere.
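For illustration, a minimal sketch of the kind of parameter rename meant here, assuming a PyNEST script that previously set a single `tau_syn` on the precise neuron; the values are placeholders, not taken from the example scripts:

```python
import nest

nest.ResetKernel()

# The deprecated iaf_psc_alpha_canon model exposed a single synaptic time
# constant "tau_syn"; iaf_psc_alpha_ps expects separate excitatory and
# inhibitory time constants instead (values here are placeholders).
neuron = nest.Create(
    "iaf_psc_alpha_ps",
    params={"tau_syn_ex": 2.0, "tau_syn_in": 2.0},  # instead of {"tau_syn": 2.0}
)
```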
Looks good to me! The examples take a bit more time to run (brunel_ps has a simulation time of 267.26 s with iaf_psc_alpha_ps and 94.99 s with the master version; likewise, ArtificialSyncrony is also slower with the precise model), but I guess this is to be expected with a precise model? See also issue #2017 (the times there are longer; if I remember correctly, I ran the examples during a hackathon and had video running simultaneously, slowing everything down).
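As a rough illustration of how such wall-clock comparisons could be made (a sketch with placeholder network setup and simulation time, not the actual benchmark scripts mentioned above):

```python
import time
import nest

for model in ("iaf_psc_alpha", "iaf_psc_alpha_ps"):
    nest.ResetKernel()
    neurons = nest.Create(model, 1000)
    drive = nest.Create("poisson_generator", params={"rate": 10000.0})
    nest.Connect(drive, neurons, syn_spec={"weight": 10.0})

    start = time.perf_counter()
    nest.Simulate(1000.0)  # biological time in ms
    print(f"{model}: {time.perf_counter() - start:.2f} s wall-clock time")
```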
Approving on the condition that we are OK with the runtime performance regression. I wasn't privy to the discussion about why these models were marked as deprecated in the first place.
@stinebuu: many thanks for measuring execution times. I'm actually not sure if this slow-down is to be expected, as the deprecated (and now removed) model … When replacing … @suku248, can you please quickly comment on this?
This reverts commit 81a79ad.
Regarding @stinebuu's observation of slower runtimes when using …
@suku248, many thanks for these insights. I'm really not sure how to proceed here. @heplesser, @abigailm, @diesmann: any thoughts on this? Is there anyone currently working on the precise models who could investigate further?
I will have another look at the performance problems.
Thanks HEP. Whether _ps or _canon is faster depends on the computation time step and the requested accuracy. The general observation is that for a given accuracy, _ps is faster than _canon because _ps can reach this accuracy already at larger computation time steps. With a fixed computation time step and a low required accuracy, _canon might be faster because there are fewer operations to be carried out. The Hanuschkin paper has the details on this relationship.
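To make that relationship concrete, here is a rough sketch of how one might measure spike time accuracy as a function of the computation time step h, assuming a single neuron driven by a fixed, off-grid input spike train so that the drive is identical at every h; model choice, weights, and rates are placeholders, not the setup used in the cited papers:

```python
import nest
import numpy as np

# Fixed, off-grid input spike train (~1 spike/ms) so the drive does not change with h.
rng = np.random.default_rng(42)
input_spikes = np.cumsum(rng.exponential(scale=1.0, size=5000)) + 1.0

def output_spike_times(model, h):
    """Simulate one neuron of `model` at resolution h and return its output spike times (ms)."""
    nest.ResetKernel()
    nest.SetKernelStatus({"resolution": h})
    neuron = nest.Create(model)
    gen = nest.Create(
        "spike_generator",
        params={"spike_times": input_spikes.tolist(), "precise_times": True},
    )
    rec = nest.Create("spike_recorder")
    nest.Connect(gen, neuron, syn_spec={"weight": 60.0})
    nest.Connect(neuron, rec)
    nest.Simulate(float(input_spikes[-1]) + 50.0)
    return np.sort(nest.GetStatus(rec)[0]["events"]["times"])

# Use the finest step as the reference solution and compare coarser steps against it.
reference = output_spike_times("iaf_psc_alpha_ps", h=0.001)
for h in (0.01, 0.1, 1.0):
    times = output_spike_times("iaf_psc_alpha_ps", h=h)
    n = min(len(times), len(reference))
    err = np.median(np.abs(times[:n] - reference[:n]))
    print(f"h={h} ms: {len(times)} spikes, median spike time error {err:.2e} ms")
```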
I have been doing some digging. It seems like the problem is the use of … When I use canon's way of doing things in the ps model (see the change here) I get the following times: …
I will try updating …
Merging this still depends on more discussion about the performance implications. Therefore removing the milestone.
Pull request automatically marked stale!
Hi everyone, has more discussion led to new insights? – Just a friendly ping.
I have now re-run benchmarks for the …
The …
Notice also that the new … I therefore believe that the qualities of the …
In Morrison A, Straube S, Plesser HE, Diesmann M (2007) Exact Subthreshold Integration with Continuous Spike Times in Discrete-Time Neural Network Simulations. Neural Computation 19:47–79, we argue that stating the time a particular neuron model implementation requires for propagation has no meaning on its own. If the integration error is not specified, an implementation can be arbitrarily fast, for example by doing nothing. Therefore, one first needs to specify an accuracy goal and then select the implementation that achieves this goal in the shortest time. In the case of spiking neuron models we have two measures: the precision of the spike times and the number of missed spikes.

The _canon model was included in the NEST code base not for production but to document the results of the publication above and to serve as a reference. When comparing _ps implementations among each other and with grid-constrained methods, it also needs to be considered that for modern _ps algorithms the computation time step h can be increased, because these models jump from incoming spike to incoming spike anyway.

Later results are summarized in:

Hanuschkin A, Kunkel S, Helias M, Morrison A, Diesmann M (2010) A general and efficient method for incorporating precise spike times in globally time-driven simulations. Front. Neuroinform. 4:113

Krishnan J, Porta Mana P, Helias M, Diesmann M, Di Napoli E (2018) Perfect Detection of Spikes in the Linear Sub-threshold Dynamics of Point Neurons. Front. Neuroinform. 11:75
The figure below shows the median spike time error for 288 spikes recorded over 10 s from a single neuron driven by precise Poisson spike trains mimicking a …
To achieve the same accuracy obtained with …
Accuracy taken into account, …
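For anyone wanting to recompute such a measure without the attached notebook, a minimal sketch of the error metric itself (assuming both runs yield the same number of spikes in the same order; the variable names are hypothetical):

```python
import numpy as np

def median_spike_time_error(test_times, reference_times):
    """Median absolute deviation of spike times from a reference solution (same units, e.g. ms)."""
    test = np.sort(np.asarray(test_times))
    ref = np.sort(np.asarray(reference_times))
    if len(test) != len(ref):
        raise ValueError("runs produced different spike counts; missed spikes must be handled separately")
    return np.median(np.abs(test - ref))

# Hypothetical usage with spike times recorded from two simulations:
# err = median_spike_time_error(times_coarse_step, times_reference)
```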
For reference, attached is the notebook used to create the plot above.
Here's a small and hopefully non-controversial one removing deprecated models. I also cleaned up the handling of connection model flags a bit to get shorter and more readable lines.