Integrate and fire neuron model #1391
Conversation
Pushed some suggested changes: simplifications to the neuron. The only substantive change is that I made it so that the input current is rectified, rather than the voltage. I think this is the behaviour we want (basically a relu with spikes thrown on the end), and it seems to give more accurate output (based on the unit tests).
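The rectified-current behaviour described above can be sketched with a minimal NumPy model (illustrative only; the function name and details are hypothetical, not nengo's actual implementation):

```python
import numpy as np

def if_step(J, voltage, dt=0.001, threshold=1.0):
    """One step of a non-leaky integrate-and-fire neuron with the
    *input current* rectified (toy sketch, not nengo's code)."""
    voltage += dt * np.maximum(J, 0.0)  # rectify the current, then integrate
    spiked = voltage >= threshold
    voltage[spiked] -= threshold        # subtract, don't reset, to keep overshoot
    return spiked / dt                  # nengo-style output: spikes scaled by 1/dt

# With a constant input of 50, the neuron fires at about 50 Hz,
# matching a rectified-linear rate response.
v = np.zeros(1)
spike_count = 0.0
for _ in range(1000):  # one second at dt = 1 ms
    spike_count += if_step(np.array([50.0]), v)[0] * 0.001
```

Because the current is rectified before integration, the time-averaged spike rate for a constant input tracks max(J, 0), i.e. a relu.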
This might just be me, but I'm a little concerned about the name of this neuron model. In the theoretical neuroscience literature the "L" in "LIF" does not necessarily stand for "Leaky"; it may also just mean "Linear". Compare "QIF" (quadratic) or "EIF" (exponential) integrate and fire. All these models are intrinsically leaky, so being "leaky" is often viewed as a given property of these models. Correspondingly, many people I've interacted with say "IF" when they actually mean "LIF" (e.g. see the nomenclature in the NEST simulator). Long story short, I'd propose explicitly calling the model something different, e.g. "RectifiedLinearSpiking" or "NLIF" (non-leaky integrate and fire).

(Also, sorry, accidentally clicked on Eric; blame GitHub for not allowing you to confirm your changes... ;-) )
Hmm, I'll be honest, I've never seen the L in LIF stand for linear. Can you link me to some resources? I have totally missed this in my readings...
What I said was mostly influenced by Chapter 5 of Wulfram Gerstner's "Neuronal Dynamics" [1], where he talks about general non-linear integrate and fire models. He doesn't use abbreviations, so the "L" being interpreted as linear may just be me, but given the nomenclature for QIF and EIF this is really suggestive.
I think LIF referring specifically to the linear leaky integrate-and-fire model is pretty standard. Having integrate-and-fire (IF) refer specifically to the model added here is something that I used to think was the case, but have recently realized is not. IF tends to refer to the whole family of integrate and fire models, including the LIF model (e.g. here), though it's typically assumed to refer to a linear model without further qualification, I think. Andreas's suggestion of calling this the non-leaky IF model has some precedent, though I think that's typically abbreviated as NIF, not NLIF. So I'd be fine just calling this the NIF model.
Also, I don't agree with @drasmuss's change to rectify the input current. That would mean that if you have a subthreshold input sine wave, it's going to make the neuron fire, which doesn't make sense to me. I think we want this model to be just like the LIF model, but without leak, which means we should actually have a

Also, I'm not sure if we want to add an
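The sine-wave objection above is easy to reproduce with a toy model (a hypothetical sketch; the clamp at zero mimics a LIF-style minimum voltage, and none of this is nengo's actual code):

```python
import numpy as np

def count_spikes(J, rectify, dt=0.001, threshold=1.0):
    """Spike count for a toy non-leaky integrate-and-fire neuron."""
    v, spikes = 0.0, 0
    for j in J:
        v += dt * (max(j, 0.0) if rectify else j)
        v = max(v, 0.0)  # clamp, as LIF does with its minimum voltage
        if v >= threshold:
            v -= threshold
            spikes += 1
    return spikes

dt = 0.001
t = np.arange(0, 1, dt)
J = 5 * np.sin(2 * np.pi * 5 * t)  # zero-mean sine input current

count_spikes(J, rectify=True)   # fires: positive half-waves keep accumulating
count_spikes(J, rectify=False)  # silent: negative half-waves undo the charge
```

With the signed current, each negative half-wave cancels the charge from the preceding positive one and the neuron stays silent; with rectified current the charge only ever accumulates, so the zero-mean input eventually triggers spikes.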
Feel free to play around with whatever implementation you'd like; I don't have any theoretical commitment either way, I just want the unit tests to pass.
One other thing: I think this model should include an optional refractory period. I'm happy to add this as well as make some of the other changes that have been discussed.
Not trying to dissuade you from making the changes, but just a couple of thoughts before you do. I think the goal with this neuron type is not so much to make a non-leaky version of LIF, but to make a spiking version of RectifiedLinear. So, to me, things like the behaviour you describe ("a subthreshold input sine wave, it's going to make the neuron fire") are the expected behaviour, since that is the output you would get from a

So, not trying to tell you not to work on it, but I just didn't want you putting too much work into it without warning you about why we made it the way we did 😄.
Okay, added my changes. The only thing that's still not working is
Oops, sorry, my comment was too late ⏲
Hm, yeah, I should have mentioned that, but the goal, like @drasmuss said, is a spiking version of

This might also influence my aversion to the name non-leaky integrate and fire, since it suggests a different motive. Tangentially, naming by what it doesn't do seems kind of weird, like calling LIF instead non-adaptive leaky integrate and fire.
My two reservations with having zero refractory period as the default are that a) it's less biologically plausible (and while I'm fine having less bio-plausible things in Nengo, I think using them should be a conscious decision, not the default), and b) it could allow neurons to fire faster than 1000 Hz, and since we don't allow multiple spikes per timestep, this could lead to a mismatch between the rate model (which can have arbitrarily high rates) and the spiking model (which can't).
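The rate mismatch in point (b) can be made concrete with a toy model (a hypothetical sketch, limited to one spike per timestep with zero refractory period; not nengo's code):

```python
def measured_rate(J, T=1.0, dt=0.001, threshold=1.0):
    """Measured firing rate of a toy non-leaky IF neuron that emits
    at most one spike per timestep."""
    v, spikes = 0.0, 0
    for _ in range(int(round(T / dt))):
        v += dt * max(J, 0.0)
        if v >= threshold:
            v -= threshold
            spikes += 1
    return spikes / T

measured_rate(500.0)   # ~500 Hz: spiking output matches the rate model
measured_rate(5000.0)  # 1000 Hz: capped at 1/dt, but the rate model says 5000
```

With a refractory period tau_ref, both the rate equation and the spiking simulation are bounded by 1/tau_ref, which keeps the two consistent.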
For my use case (in
The original implementation did allow multiple spikes per timestep.

Some general thoughts: My main question is whether there is a demand for the
Fair enough, I didn't notice this. Does Nengo officially support multiple spikes per timestep? I'm just thinking that on some neuromorphic hardware this might not be feasible.

If we're moving away from a name that's some variation of IF, then I'm more okay with the original proposal. I wanted to add the refractory period for the reasons I mentioned, but also because I used something like the NIF neuron I have here in my thesis work, and I didn't understand why you would have something more specific when you could have it more general. So I'm fine with the original implementation.

I'm still not sold on rectifying the input signal; that seems really weird to me. Why did you need that to pass some tests?
Nothing in the reference backend would have a problem with it. Neuromorphic hardware's always going to have some oddities, I think.
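For reference, allowing multiple spikes per timestep is usually done by counting threshold crossings with a floor; a hypothetical sketch of the idea (not the actual implementation in this PR):

```python
import numpy as np

def if_step_multi(J, voltage, dt=0.001, threshold=1.0):
    """One step of a toy non-leaky IF neuron that may emit several
    spikes in a single timestep."""
    voltage += dt * np.maximum(J, 0.0)
    n = np.floor(voltage / threshold)  # can be 2, 3, ... in one step
    voltage -= n * threshold           # keep only the sub-threshold remainder
    return n / dt                      # output in spikes per second

v = np.array([0.0])
out = if_step_multi(np.array([2500.0]), v)  # dv = 2.5, so two spikes at once
```

In the reference backend the output is just a float signal, so n/dt with n > 1 works fine; hardware that transmits individual spike events is where this becomes awkward.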
I didn't look into it in a lot of detail, but my intuition is that it allows the neuron to respond more rapidly to changes in input. E.g. for an input like

Edit: Perhaps a better example is the one you raised, of e.g. a
And yeah, I definitely agree that this is a good idea. Based on this discussion, probably something that more clearly ties it to
Cleaned up the history based on the above discussion, and added an amplitude parameter to
I only just noticed this PR, so sorry if it's been addressed above or elsewhere, but what would be the difference between this and, say:

```python
for neuron_type in (nengo.LIF(tau_rc=1e12, tau_ref=0), nengo.RectifiedLinear()):
    with nengo.Network() as model:
        x = nengo.Ensemble(100, 1, neuron_type=neuron_type)
    with nengo.Simulator(model) as sim:
        u = np.linspace(-1, 1, 1000)
        a = nengo.builder.ensemble.get_activities(sim.data[x], x, u[:, None])
    plt.figure()
    plt.title(repr(neuron_type))
    plt.plot(u, a)
    plt.show()
```
The main difference is the efficiency of the implementation. As mentioned above, the idea here is that this is a very lightweight spiking neuron implementation, for cases where computational simplicity is more important than biological detail. Also note that this implementation does take overshoot into account (by not resetting the voltage to 0 after a spike).
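The effect of keeping the overshoot can be seen in a toy comparison of the two reset rules (a hypothetical sketch, not the PR's code):

```python
def rate(J, reset_to_zero, T=10.0, dt=0.001, threshold=1.0):
    """Firing rate of a toy non-leaky IF neuron under two reset rules."""
    v, spikes = 0.0, 0
    for _ in range(int(round(T / dt))):
        v += dt * max(J, 0.0)
        if v >= threshold:
            # either discard the overshoot or carry it to the next cycle
            v = 0.0 if reset_to_zero else v - threshold
            spikes += 1
    return spikes / T

rate(333.0, reset_to_zero=False)  # ~333 Hz: the overshoot charge is kept
rate(333.0, reset_to_zero=True)   # ~250 Hz: overshoot discarded on each spike
```

Subtracting the threshold conserves the integrated charge, so the long-run rate matches the rate model exactly; resetting to zero throws away up to one timestep's worth of charge per spike and systematically undershoots.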
If computational efficiency is the main difference, perhaps

Then this new neuron model can simply derive
I tend to dislike implementations where the behaviour qualitatively changes based on quantitative changes in parameter values. Even if it works nicely most of the time, it tends to lead to edge cases where the implementation unexpectedly changes on the user in a non-obvious way. If I have my code and I change
I would argue that this will not happen. If done correctly, the transition should be numerically indistinguishable (right around the point that
I shouldn't have used the word alias. I edited my previous post. If it derives

Anyway, I don't need to push this any further. That's all I had to suggest.
I added a commit where LIF/LIFRate will fall back on SpikingRectifiedLinear/RectifiedLinear for large
I have only looked at this last commit, out of interest. Manually reassigning the methods seems somewhat brittle to me. Maybe it would be better to use
I did it this way so that the returned neuron type would still look like the same type to the user. E.g. if the user did
Although this did just make me realize that if the user changes
That reminds me that I also wanted to suggest producing a warning in case of this automatic conversion. That might be a good idea with either approach (and it is easily silenced by explicitly switching to the
Yeah, I wouldn't mind that either (the
Actually, I forgot neuron parameters are read-only, so we don't have to worry about
I think I just realized an important difference: the gains on the neurons have to be scaled up to cancel out the

```python
for neuron_type in (nengo.LIF(tau_rc=1e12, tau_ref=0), nengo.RectifiedLinear()):
    with nengo.Network() as model:
        x = nengo.Ensemble(100, 1, neuron_type=neuron_type)
    with nengo.Simulator(model) as sim:
        print(repr(neuron_type), np.mean(sim.data[x].gain))
```
I think this is essentially why the

I think this does turn out to be an important/distinguishable difference at the transition point, especially if users are connecting directly into the neurons or even just looking at
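The gain scaling being described can be checked against the textbook closed-form LIF rate curve (this inversion is purely illustrative; it is not nengo's gain-solving code):

```python
import numpy as np

def lif_rate(J, tau_rc, tau_ref=0.0):
    """Standard closed-form steady-state LIF rate, valid for J > 1."""
    return 1.0 / (tau_ref - tau_rc * np.log1p(-1.0 / J))

def current_for_rate(r, tau_rc, tau_ref=0.0):
    """Inverse of lif_rate: the input current J needed to fire at rate r."""
    return 1.0 / (1.0 - np.exp((tau_ref - 1.0 / r) / tau_rc))

# The current (and hence the solved gain) needed to reach a 100 Hz
# response grows roughly linearly with tau_rc once tau_rc is large,
# since -log1p(-1/J) ~ 1/J for J >> 1, giving r ~ J / tau_rc:
for tau_rc in (0.02, 1.0, 100.0):
    print(tau_rc, current_for_rate(100.0, tau_rc))
```

So at tau_rc = 1e12 the solver must assign enormous gains to hit the target max rates, which is visible in `sim.data[x].gain` and matters to anyone connecting directly into the neurons.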
I just went ahead and reverted that commit, since it seemed to be introducing more issues than it solved 👍 |
This branch has ended up being used a lot in the past few months. If someone else could give this a quick review, I'll clean up the history and merge.
Also note that the Appveyor failure will be fixed once this is rebased to master.
I've used this successfully in a couple of projects now, so it looks good to me (although I did touch some of the code).
Motivation and context:
Adding the common neuron model, integrate and fire.

How has this been tested?
Added `IntegrateAndFire` to the `nl` set in `conftest.py`, and `pytest test_neurons.py` passes all tests.

How long should this take to review?
Quick (70 lines); a modular and simple neuron model.

Where should a reviewer start?
`nengo/neurons.py`

Types of changes:

Checklist:

Still to do: