
Integrate and fire neuron model #1391

Merged
merged 3 commits into master from integrate_and_fire on Mar 2, 2018
Conversation

studywolf
Collaborator

@studywolf studywolf commented Dec 13, 2017

Motivation and context:

Adding the common integrate-and-fire neuron model.

How has this been tested?

Added IntegrateAndFire to the nl set in conftest.py, and pytest test_neurons.py passes all tests.

How long should this take to review?

Quick (70 lines); a simple, modular neuron model.

Where should a reviewer start?

nengo/neurons.py

Types of changes:

  • New feature (non-breaking change which adds functionality)

Checklist:

  • I have read the CONTRIBUTING.rst document.
  • I have updated the documentation accordingly.
  • I have included a changelog entry.
  • I have added tests to cover my changes.
  • I have run the test suite locally and all tests passed.

Still to do:

@drasmuss
Member

Pushed some suggested changes: simplifications to the neuron step_math, some minor documentation updates, and fixes to get all the unit tests passing.

The only substantive change is that I made it so that the input current is rectified, rather than the voltage. I think this is the behaviour we want (basically a ReLU with spikes thrown on the end), and it seems to give more accurate output (based on the unit tests).
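A rough sketch of the two variants being compared, for reference (illustrative only; function signatures and the threshold parameter are assumptions, not the actual step_math in this PR):

import numpy as np

# Illustrative: "rectify the current" vs. "rectify the voltage"
def step_rectified_current(dt, J, spiked, voltage, threshold=1.0):
    voltage += np.maximum(J, 0) * dt           # negative current is ignored (ReLU-like)
    spiked[:] = (voltage > threshold) / dt     # at most one spike per step in this sketch
    voltage[voltage > threshold] -= threshold  # keep the overshoot

def step_rectified_voltage(dt, J, spiked, voltage, threshold=1.0):
    voltage += J * dt                          # negative current can drive the voltage down
    np.clip(voltage, 0, None, out=voltage)     # ...but the voltage is floored at zero
    spiked[:] = (voltage > threshold) / dt
    voltage[voltage > threshold] -= threshold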

@astoeckel astoeckel requested review from hunse and removed request for hunse December 13, 2017 20:33
@astoeckel

astoeckel commented Dec 13, 2017

This might just be me, but I'm a little concerned about the name of this neuron model. In the theoretical neuroscience literature the "L" in "LIF" does not necessarily stand for "Leaky", but may also just mean "Linear". Compare "QIF" (quadratic) or "EIF" (exponential) integrate-and-fire models. All these models are intrinsically leaky, so being "leaky" is often viewed as a given property of these models.

Correspondingly, many people I've interacted with talk about "IF" when they actually mean "LIF" (e.g. also look at the nomenclature in the NEST simulator).

Long story short, I'd propose giving the model a more explicit name, e.g. "RectifiedLinearSpiking" or "NLIF" (non-leaky integrate and fire).

(Also, sorry, accidentally clicked on Eric; blame GitHub for not allowing you to confirm your changes... ;-) )

@studywolf
Collaborator Author

Hmm, I'll be honest, I've never seen the L in LIF stand for linear. Can you link me to some resources? I have totally missed this in my reading...

@astoeckel

astoeckel commented Dec 13, 2017

What I said was mostly influenced by Chapter 5 of Wulfram Gerstner's "Neuronal Dynamics" [1], where he talks about general non-linear integrate and fire models. He doesn't use abbreviations, so the "L" being interpreted as linear may just be me, but given the nomenclature for QIF and EIF this is just really suggestive.

[1] http://neuronaldynamics.epfl.ch/online/Ch5.html

@hunse
Collaborator

hunse commented Dec 13, 2017

I think LIF referring specifically to the linear leaky integrate-and-fire model is pretty standard.

Having integrate-and-fire (IF) refer specifically to the model added here is something that I used to think was the case, but have recently realized is not. IF tends to refer to the whole family of integrate-and-fire models, including the LIF model (e.g. here), though without further qualification I think it's typically assumed to refer to a linear model.

Andreas's suggestion of calling this the non-leaky IF model has some precedent, though I think that's typically abbreviated as NIF, not NLIF. So I'd be fine just calling this the NIF model.

@hunse
Collaborator

hunse commented Dec 13, 2017

Also, I don't agree with @drasmuss's change to rectify the input current. That would mean that a subthreshold input sine wave is going to make the neuron fire, which doesn't make sense to me. I think we want this model to be just like the LIF model, but without the leak, which means we should actually have a min_voltage parameter that controls the floor for the voltage.
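A rough sketch of that leak-free alternative (min_voltage here mirrors LIF's parameter of the same name; the signature and reset behaviour are assumptions, not a committed implementation):

import numpy as np

# Leak-free, LIF-like step with a voltage floor instead of a rectified current
def step_nif(dt, J, spiked, voltage, min_voltage=0.0, threshold=1.0):
    voltage += J * dt                                 # integrate the raw current, no leak
    np.clip(voltage, min_voltage, None, out=voltage)  # floor the voltage, as in LIF
    spiked[:] = (voltage > threshold) / dt
    voltage[spiked > 0] = 0                           # reset after a spike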

Also, I'm not sure if we want to add an NIFRate neuron that is an alias of RectifiedLinear (or perhaps vice-versa, and deprecate RectifiedLinear). Just thinking that it would be nice to have the same naming scheme as for LIF.

@drasmuss
Member

Also, I don't agree with @drasmuss's change to rectify the input current.

Feel free to play around with whatever implementation you'd like; I don't have any theoretical commitment either way, I just want the unit tests to pass.

@hunse
Collaborator

hunse commented Dec 13, 2017

One other thing: I think this model should include an optional refractory period. I'm happy to add this as well as make some of the other changes that have been discussed.

@drasmuss
Member

drasmuss commented Dec 13, 2017

Not trying to dissuade you from making the changes, but just a couple thoughts before you do. I think the goal with this neuron type is not so much to make a non-leaky version of LIF, but to make a spiking version of RectifiedLinear. So, to me, things like the behaviour you describe ("a subthreshold input sine wave, it's going to make the neuron fire") are the expected behaviour, since that is the output you would get from a RectifiedLinear neuron with the same input. Similarly, we did think about adding a refractory period, but decided against it since there isn't a parallel in the RectifiedLinear neuron.

So, not trying to tell you not to work on it, but just didn't want you putting too much work into it without warning about why we made it the way we did 😄.

@hunse
Collaborator

hunse commented Dec 13, 2017

Okay, added my changes. The only thing that's still not working is that test_neurons.test_gain_bias fails when generic=True for the NIF neuron. I'm not sure why.

@drasmuss
Member

Oops, sorry, my comment was too late ⏲

@studywolf
Collaborator Author

Hm, yeah I should have mentioned that, but the goal, like @drasmuss said, is a spiking version of RectifiedLinear. I'm good with having whatever other options, but I would push for default behaviour being a spiking analogue of RectifiedLinear, since that was the original point.

This might also explain my aversion to the name non-leaky integrate and fire, since it suggests a different motive. Tangentially, naming a model by what it doesn't do seems kind of weird, like calling LIF non-adaptive leaky integrate and fire instead.

@hunse
Collaborator

hunse commented Dec 14, 2017

I'm good with having whatever other options, but I would push for default behaviour being a spiking analogue of RectifiedLinear, since that was the original point.

My two reservations with having zero refractory period as the default are that a) it's less biologically plausible (and while I'm fine having less bio-plausible things in Nengo, I think using them should be a conscious decision, not the default), and b) it could allow neurons to fire faster than 1000 Hz, and since we don't allow multiple spikes per timestep, this could lead to a mismatch between the rate model (which can have arbitrarily high rates) and the spiking model (which can't).

@drasmuss
Member

drasmuss commented Dec 14, 2017

a) it's less biologically plausible (and while I'm fine having less bio-plausible things in Nengo, I think using them should be a conscious decision, not the default)

For my use case (in nengo_dl) and Travis' (small embedded platforms), that was the main goal of this neuron type: a minimal, bare-bones spiking neuron that sacrifices biological plausibility for simplicity/performance. So it would be the act of choosing this neuron type that would be users "buying in" to that tradeoff (in the same way they do when choosing LIFRate vs LIF vs RectifiedLinear).

b) it could allow neurons to fire faster than 1000 Hz, and since we don't allow multiple spikes per timestep, this could lead to a mismatch between the rate model (which can have arbitrarily high rates) and the spiking model (which can't).

The original implementation did allow multiple spikes per timestep.
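For reference, a rough sketch of how a step function can emit multiple spikes per timestep (illustrative only; not necessarily the original implementation):

import numpy as np

def step_multi_spike(dt, J, spiked, voltage, threshold=1.0):
    voltage += np.maximum(J, 0) * dt
    n_spikes = np.floor(voltage / threshold)  # can be > 1 for strong inputs
    spiked[:] = n_spikes / dt                 # spike output is scaled by 1 / dt
    voltage -= n_spikes * threshold           # keep only the sub-threshold remainder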

Some general thoughts:

My main question is whether there is a demand for the NIF neuron, versus the simplified implementation we started with (I'll call that RectifiedSpiking). We know there are two use cases for RectifiedSpiking (mine and Travis'), but I've never heard anyone ask for NIF before this. Of course RectifiedSpiking is a subset of NIF, but if we went with the NIF approach I would end up implementing one neuron model for NIF(tau_ref=0) (which would be equivalent to RectifiedSpiking) and another neuron model for general NIF. And I think Travis might only be able to support NIF(tau_ref=0)? Which isn't the end of the world. But if we don't need the extra complexity of NIF, it'd be nice to avoid it.

@hunse
Collaborator

hunse commented Dec 14, 2017

The original implementation did allow multiple spikes per timestep.

Fair enough. I didn't notice this. Does Nengo officially support multiple spikes per timestep? I'm just thinking that on some neuromorphic hardware this might not be feasible.

If we're moving away from a name that's some variation of IF, then I'm more okay with the original proposal. I wanted to add the refractory period for the reasons I mentioned, but also because I used something like the NIF neuron implemented here in my thesis work, and I didn't understand why you would have something more specific when you could have something more general.

So I'm fine with the original implementation. I'm still not sold on rectifying the input signal, that seems really weird to me. Why did you need that to pass some tests?

@tbekolay
Member

Does Nengo officially support multiple spikes per timestep?

Nothing in the reference backend would have a problem with it. Neuromorphic hardware's always going to have some oddities, I think.

@drasmuss
Member

drasmuss commented Dec 14, 2017

So I'm fine with the original implementation. I'm still not sold on rectifying the input signal, that seems really weird to me. Why did you need that to pass some tests?

I didn't look into it in a lot of detail, but my intuition is that it allows the neuron to respond more rapidly to changes in input. E.g. for an input like Piecewise({0: 1, 0.5: -1, 1: 1}), the neuron's spike rate won't be delayed at all in the 3rd stage due to the negative input in the 2nd stage (more closely approximating RectifiedLinear).

Edit: Perhaps a better example is the one you raised, of e.g. a sine wave with zero mean. With a small enough amplitude, that could cause the NIF neuron to never spike, whereas RectifiedLinear (and RectifiedSpiking) would have nonzero output.
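A quick numerical illustration of that difference (a standalone sketch with an assumed input scale, not a Nengo model):

import numpy as np

# Zero-mean sine current: the rectified-current variant accumulates the positive
# half-cycles and spikes, while the raw-current variant (with a voltage floor)
# may never reach threshold.
dt = 0.001
t = np.arange(0, 1, dt)
J = 5 * np.sin(2 * np.pi * 10 * t)  # assumed amplitude and frequency

v_rect = v_raw = 0.0
spikes_rect = spikes_raw = 0
for j in J:
    v_rect += max(j, 0) * dt
    v_raw = max(v_raw + j * dt, 0.0)
    if v_rect > 1:
        spikes_rect += 1
        v_rect -= 1
    if v_raw > 1:
        spikes_raw += 1
        v_raw -= 1

print(spikes_rect, spikes_raw)  # expect something like: 1 0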

@drasmuss
Member

If we're moving away from a name that's some variation of IF

And yeah, I definitely agree that this is a good idea. Based on this discussion, probably something that more clearly ties it to RectifiedLinear? SpikingRectifiedLinear seems a bit long, but perhaps SpikingReLU? (and possibly add/rename RectifiedLinear -> ReLU for consistency?).

@drasmuss drasmuss self-assigned this Jan 31, 2018
@drasmuss drasmuss removed their assignment Jan 31, 2018
@drasmuss
Member

Cleaned up the history based on the above discussion, and added an amplitude parameter to RectifiedLinear (same idea as in LIF).
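A rough sketch of what the amplitude parameter does to the rate response (illustrative; it mirrors the description above, not necessarily the committed code):

import numpy as np

def relu_rates(x, gain, bias, amplitude=1.0):
    # Rectified-linear firing rates, scaled by amplitude (same idea as LIF's amplitude)
    return amplitude * np.maximum(gain * x + bias, 0)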

@arvoelke
Contributor

arvoelke commented Feb 1, 2018

I only just noticed this PR so sorry if it's been addressed above or elsewhere, but what would be the difference between this and, say, nengo.LIF(tau_rc=1e12, tau_ref=0)? It looks like they should be the same except the latter will properly handle the overshoot, while the implementation in this PR appears to assume there's zero overshoot on each spike.

import matplotlib.pyplot as plt
import numpy as np
import nengo

# Compare the rate responses of an effectively leak-free LIF and RectifiedLinear
for neuron_type in (nengo.LIF(tau_rc=1e12, tau_ref=0), nengo.RectifiedLinear()):
    with nengo.Network() as model:
        x = nengo.Ensemble(100, 1, neuron_type=neuron_type)
    with nengo.Simulator(model) as sim:
        u = np.linspace(-1, 1, 1000)
        a = nengo.builder.ensemble.get_activities(sim.data[x], x, u[:, None])
        plt.figure()
        plt.title(repr(neuron_type))
        plt.plot(u, a)
        plt.show()

[Attached plots: tuning curves for LIF(tau_rc=1e12, tau_ref=0) and RectifiedLinear()]

@drasmuss
Member

drasmuss commented Feb 1, 2018

The main difference is the efficiency of the implementation. As mentioned above, the idea here is that this is a very lightweight spiking neuron implementation for cases where computational simplicity is more important than biological detail. Also note that this implementation does take into account overshoot (by not resetting the voltage to 0 after a spike).

@arvoelke
Contributor

arvoelke commented Feb 1, 2018

If computational efficiency is the main difference, perhaps nengo.LIF could change its reference implementation to do this whenever tau_rc is sufficiently large? It currently breaks when it's too large anyway (this may warrant a separate issue).

Then this new neuron model can simply derive from nengo.LIF(tau_rc=np.inf, tau_ref=0). This reduces the surface area for testing/maintenance/other-backends, and directly exposes the relationship to the user.

@drasmuss
Member

drasmuss commented Feb 1, 2018

I tend to dislike implementations where the behaviour qualitatively changes based on quantitative changes in parameter values. Even if it works nicely most of the time, it tends to lead to edge cases where the implementation unexpectedly changes on the user in a non-obvious way. If I have my code and I change nengo.LIF to nengo.SpikingRectifiedLinear, it's clear what I can expect to happen. If I have my code and I change nengo.LIF(tau_rc=1) to nengo.LIF(tau_rc=1e5) to nengo.LIF(tau_rc=1e10) it's not clear what the behaviour I want is, or what behaviour nengo will guarantee me.

nengo.LIF(tau_rc=np.inf) is a bit better, since it's clear that there's a qualitative change there. However, that does make it more difficult for backends to selectively support neuron types. E.g., if I have a backend that supports nengo.SpikingRectifiedLinear but not nengo.LIF, I now need to implement a nengo.LIF model that only supports one parameter setting (tau_rc=np.inf). That will be pretty confusing to the user, if they try to set some other tau_rc value.

@arvoelke
Contributor

arvoelke commented Feb 1, 2018

Even if it works nicely most of the time, it tends to lead to edge cases where the implementation unexpectedly changes on the user in a non-obvious way.

I would argue that this will not happen. If done correctly the transition should be numerically indistinguishable (right around the point that LIF currently breaks).

However, that does make it more difficult for backends to selectively support neuron types.

I shouldn't have used the word alias. I edited my previous post. If it derives from LIF, then it is still an instance of SpikingRectifiedLinear (or whatever it's named), and so the backend may still selectively support this type.

Anyway, I don't need to push this any further. That's all I had to suggest.

@drasmuss
Member

drasmuss commented Feb 1, 2018

I added a commit where LIF/LIFRate will fall back on SpikingRectifiedLinear/RectifiedLinear for large tau_rc. Not exactly what you suggested, but the same behaviour in the end. It just seemed easier to keep the separate implementations in separate classes, rather than putting two implementations in one class and having a separate alias to one of those implementations. Let me know if that looks good to you.
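A library-agnostic sketch of the "fall back by reassigning methods" idea (hypothetical class, cutoff, and method names; not the committed code):

class LIFLike:
    """Minimal stand-in for a leaky integrate-and-fire type (illustrative only)."""

    def __init__(self, tau_rc=0.02):
        self.tau_rc = tau_rc
        if tau_rc > 1e10:  # assumed cutoff for "effectively no leak"
            # Borrow the simpler model's math while keeping the LIF-like type.
            self.step_math = self._step_no_leak

    def step_math(self, dt, J, spiked, voltage):
        voltage += (J - voltage) * (dt / self.tau_rc)  # leaky integration (Euler)
        spiked[:] = (voltage > 1) / dt
        voltage[voltage > 1] = 0

    def _step_no_leak(self, dt, J, spiked, voltage):
        voltage += J * dt                              # no leak term
        spiked[:] = (voltage > 1) / dt
        voltage[voltage > 1] = 0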

@jgosmann
Collaborator

jgosmann commented Feb 1, 2018

I have only looked at this last commit, out of interest. Manually reassigning the methods seems somewhat brittle to me. Maybe it is better to use __new__ to return an actual (Spiking)RectifiedLinear instance for large tau_rc? It seems to me that this might also interact better with backends by turning the neuron type into an actual instance of (Spiking)RectifiedLinear and thus using that code across all backends (if they implement that neuron type).
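A minimal sketch of the __new__ idea (hypothetical class names, not nengo's API):

class SpikingReLULike:
    """Stand-in for the simpler spiking type (illustrative only)."""

class LIFLike:
    def __new__(cls, tau_rc=0.02):
        if tau_rc > 1e10:             # assumed cutoff
            return SpikingReLULike()  # caller gets an instance of the simpler type
        return super().__new__(cls)

    def __init__(self, tau_rc=0.02):
        self.tau_rc = tau_rc          # only runs when __new__ returns a LIFLike

print(type(LIFLike(tau_rc=1e12)).__name__)  # SpikingReLULike
print(type(LIFLike(tau_rc=0.02)).__name__)  # LIFLike

The last two lines show the trade-off discussed below: asking for LIFLike can hand back an instance of a different type.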

@drasmuss
Member

drasmuss commented Feb 1, 2018

I did it this way so that the returned neuron type would still look like the same type to the user. E.g. if the user did a = nengo.Ensemble(..., neuron_type=nengo.LIF(tau_rc=1e20)), I think they would be surprised if a.neuron_type was not nengo.LIF.

@drasmuss
Member

drasmuss commented Feb 1, 2018

Although this did just make me realize that if the user changes tau_rc after the fact, that will have unexpected effects. So I guess we're kind of forced to go with the __new__ approach, or implement some complicated behaviour with properties, which would be pretty messy. Or just not have the fallback behaviour (this would be my preferred solution, since it is the simplest, and it seems like a pretty niche use case).

@jgosmann
Collaborator

jgosmann commented Feb 1, 2018

That reminds me that I also wanted to suggest producing a warning when this automatic conversion happens. That might be a good idea with either approach (and it is easily silenced by explicitly switching to the RectifiedLinear type).

@drasmuss
Member

drasmuss commented Feb 1, 2018

Yeah I wouldn't mind that either (the __new__ approach with an explicit warning to the user about what is going on)

@drasmuss
Member

drasmuss commented Feb 1, 2018

Actually I forgot that neuron parameters are read-only, so we don't have to worry about tau_rc changing. So I guess I'm back to the preference ranking: 1. no fallback behaviour; 2. monkey-patching methods (slightly preferred since it will be less surprising to the user); 3. __new__ method with a warning.

@arvoelke
Contributor

arvoelke commented Feb 2, 2018

I think I just realized an important difference: the gains on the neurons have to be scaled up to cancel out the tau_rc. I mistakenly thought the gains were invariant to tau_rc.

import numpy as np
import nengo

# Compare the solved gains for an effectively leak-free LIF and RectifiedLinear
for neuron_type in (nengo.LIF(tau_rc=1e12, tau_ref=0), nengo.RectifiedLinear()):
    with nengo.Network() as model:
        x = nengo.Ensemble(100, 1, neuron_type=neuron_type)
    with nengo.Simulator(model) as sim:
        print(repr(neuron_type), np.mean(sim.data[x].gain))

# Output:
# 'LIF(tau_rc=1e+12, tau_ref=0)', 1061755228946271.2
# 'RectifiedLinear()', 667.00285850680416

I think this is essentially why the LIF breaks for large enough tau_rc.
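A rough sanity check, assuming tau_ref = 0 and the LIF rate curve used by LIFRate:

a(J) = 1 / (tau_ref + tau_rc * ln(1 + 1 / (J - 1)))  ≈  (J - 1) / tau_rc   for J >> 1

so reaching rates of a few hundred Hz requires J - 1 (and hence the solved gain) to grow roughly in proportion to tau_rc, which is consistent with the ~1e15 mean gain above for tau_rc = 1e12.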

I think this does turn out to be an important/distinguishable difference at the transition point, especially if users are connecting directly into the neurons or even just looking at sim.data[x].gain. I don't know if there's a way to salvage this difference. Sorry if I have wasted your time with this. Evidently I hadn't thought through this connection properly!

@drasmuss
Member

drasmuss commented Feb 2, 2018

I just went ahead and reverted that commit, since it seemed to be introducing more issues than it solved 👍

Member

@tbekolay tbekolay left a comment


This branch has ended up being used a lot in the past few months. If someone else could give this a quick review, I'll clean up the history and merge.

@tbekolay
Member

Also note that the AppVeyor failure will be fixed once this is rebased onto master.

@drasmuss
Member

I've used this successfully in a couple of projects now, so it looks good to me (although I did touch some of the code).

@tbekolay tbekolay force-pushed the integrate_and_fire branch 2 times, most recently from a5671aa to 213d98e Compare March 2, 2018 20:18
@tbekolay tbekolay merged commit 84a7181 into master Mar 2, 2018
@tbekolay tbekolay deleted the integrate_and_fire branch March 2, 2018 20:56