Added amplitude to LIF #1325
Conversation
I usually do this by changing `max_rates`: instead of `ens = nengo.Ensemble(1, 1, max_rates=[100], neuron_type=nengo.LIFRate(amplitude=0.1))`, I'd do `ens = nengo.Ensemble(1, 1, max_rates=[10], neuron_type=nengo.LIFRate())`. Obviously not exactly the same (the former is scaling the output of the nonlinearity, the latter is scaling the input), but it gets you a similar result. Another option is to incorporate the [...]. So even though I would actually find the [...]
That being said, I'm not strongly opposed, so if other people like it I'm totally fine with adding `amplitude`.
Could this be made a parameter / feature of the learning rule(s) instead?
The problem with [...]. Incorporating [...]. This is also kind of the reason that I wouldn't want to have it be part of the learning rule, because whether you want this scaling or not really depends on the neuron model that you're using.
How does this affect different backends?
Like pretty much anything in the Nengo API, they could decide whether to support it or not. For the most part, I don't think it should be too hard to support, because in essence it can be folded into the connection weights when you're actually computing it (so all the computations involved would be things current hardware can do).
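To make the folding argument concrete, here is a minimal NumPy sketch; the array names, shapes, and values are made up for illustration and are not Nengo or backend internals. The amplitude is multiplied into the decoders once at build time, so each simulation step does the same matrix-vector product as before.

```python
import numpy as np

# Illustrative sketch only: fold a neuron-output amplitude into the
# connection's decoders at build time, so the per-step cost is unchanged.
rng = np.random.RandomState(0)
amplitude = 0.1
decoders = rng.randn(50, 2)                    # (n_neurons, dimensions), made up
effective_decoders = amplitude * decoders      # applied once, at build time

spikes = (rng.rand(50) < 0.02).astype(float)   # stand-in spike vector for one step
decoded = spikes.dot(effective_decoders)       # same matrix-vector product as before
```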
Some minor things, otherwise LGTM.
nengo/neurons.py (outdated)

```diff
@@ -231,11 +231,13 @@ class LIFRate(NeuronType):
     tau_rc = NumberParam('tau_rc', low=0, low_open=True)
     tau_ref = NumberParam('tau_ref', low=0)
+    amplitude = NumberParam('amplitude', low=0)
```
I wonder if this should be `low_open=True`? I suppose one could set the amplitude to 0 without crashing Nengo, but it seems like it wouldn't make much sense.
Also, this parameter needs to be added to the docstring.
Done.
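For reference, a hypothetical sketch of what the two requested changes might look like together. The docstring wording below is an assumption, not the actual merged text, and the class body is abbreviated to the parameter declarations.

```python
from nengo.neurons import NeuronType
from nengo.params import NumberParam

class LIFRate(NeuronType):
    """Non-spiking LIF rate model (sketch; docstring wording assumed).

    Parameters
    ----------
    tau_rc : float
        Membrane RC time constant, in seconds.
    tau_ref : float
        Absolute refractory period, in seconds.
    amplitude : float
        Scaling factor on the neuron output; with ``low_open=True`` it must
        be strictly greater than zero.
    """

    tau_rc = NumberParam('tau_rc', low=0, low_open=True)
    tau_ref = NumberParam('tau_ref', low=0)
    amplitude = NumberParam('amplitude', low=0, low_open=True)
```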
nengo/neurons.py (outdated)

```diff
@@ -298,8 +300,9 @@ class LIF(LIFRate):
     min_voltage = NumberParam('min_voltage', high=0)

-    def __init__(self, tau_rc=0.02, tau_ref=0.002, min_voltage=0):
-        super(LIF, self).__init__(tau_rc=tau_rc, tau_ref=tau_ref)
+    def __init__(self, tau_rc=0.02, tau_ref=0.002, amplitude=1, min_voltage=0):
```
Technically this breaks backwards compatibility if anyone provided `min_voltage` as a positional argument. But I'm fine with that, I think `min_voltage` was rarely used if at all.
Yeah, people shouldn't rely on the position of keyword arguments. But there's also no need to have `amplitude` in front of `min_voltage`, so I'll switch them.
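A hypothetical sketch of the reordered constructor being described; this is not the merged code, and it assumes `LIFRate` already accepts `amplitude`.

```python
from nengo.neurons import LIFRate
from nengo.params import NumberParam

class LIF(LIFRate):
    """Spiking LIF model (sketch): amplitude kept after min_voltage so that
    positional uses of min_voltage continue to work."""

    min_voltage = NumberParam('min_voltage', high=0)

    def __init__(self, tau_rc=0.02, tau_ref=0.002, min_voltage=0, amplitude=1):
        super(LIF, self).__init__(
            tau_rc=tau_rc, tau_ref=tau_ref, amplitude=amplitude)
        self.min_voltage = min_voltage
```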
I've implemented @jgosmann's recommended changes.
Thank you, @hunse! :)
Motivation and context:
Allow scaling on the output of LIF neurons. This can be useful when working with learning, to allow the output of the neuron to be in a more normalized range.
How has this been tested?
I added a unit test that checks that the scaled output of a normal neuron (amplitude == 1) matches that of the neuron with amplitude set, both for the static rates and the dynamic spikes.
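A minimal sketch of what the static-rates half of such a check might look like; the test name and parameter values here are hypothetical, not the actual test added in this PR.

```python
import numpy as np
import nengo

def test_lifrate_amplitude_scales_rates():
    # Hypothetical check: with amplitude set, the steady-state rates should
    # equal the amplitude times the rates of an otherwise identical neuron.
    amplitude = 0.1
    x = np.linspace(-1, 1, 51)
    gain = np.ones(51)
    bias = 1.5 * np.ones(51)

    reference = nengo.LIFRate().rates(x, gain, bias)
    scaled = nengo.LIFRate(amplitude=amplitude).rates(x, gain, bias)
    assert np.allclose(scaled, amplitude * reference)
```

The spiking half would make the analogous comparison on spike output collected from a short simulation.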
How long should this take to review?
Types of changes:
Checklist: