Add functions to compute max_rates and intercepts #1334
Conversation
Force-pushed from adbb7bc to 3c78fa3.
nengo/neurons.py (outdated)

```
Parameters
----------
gain : ndarray(dtype=float64)
```
Usually we don't specify the dtype in the documentation, do we? And would this function fail if the data type is different? (It seems like it should work with any float length and might work with an int array too if it gets converted to float.)
We do in gain_bias, which is the parallel to this function. We could remove it from that one as well, but it should probably be the same.
idk, it just caught my attention as being unusual.
I removed the dtype from all the docstrings in this class.
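For illustration, a sketch of the resulting docstring style (the method name comes from this PR, but the parameter descriptions here are mine, not quoted from the diff):

```python
def max_rates_intercepts(self, gain, bias):
    """Compute the max_rates and intercepts given gain and bias.

    Parameters
    ----------
    gain : ndarray
        Gain associated with each neuron.
    bias : ndarray
        Bias current associated with each neuron.

    Returns
    -------
    max_rates : ndarray
        Maximum firing rates of neurons.
    intercepts : ndarray
        X-intercepts of neurons.
    """
```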
nengo/neurons.py (outdated)

```
Returns
-------
max_rates : ndarray(dtype=float64)
    Maximum firing rates of neurons.
```
Might it be better to be specific that it is the maximum firing rate over the interval [-1, 1] (or actually not even that, but the firing rate at 1 given the default implementation; though if a neuron type has a weird tuning curve where the max is at a different spot, it had better override this function)?
I assume we are not really specific in this regard in a lot of places, so this might be something better suited to a different PR.
Yeah, I would say the place to document what we mean by "max_rate" is in the `Ensemble.max_rates` docstring (which does contain a good deal more information). I could copy that docstring to here as well, though.
That sounds like a good place to me.
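To make the "firing rate at 1" point above concrete, here is a standalone sketch (mine, not the PR's code) of the LIF steady-state rate evaluated at x = 1, which is what "max_rates" amounts to under the default implementation:

```python
import numpy as np

# Illustrative only: LIF steady-state firing rate at x = 1.
tau_rc, tau_ref = 0.02, 0.002   # typical LIF time constants (assumed values)
gain, bias = 10.0, 1.5          # example gain/bias; J = gain * x + bias
J = gain * 1.0 + bias           # driving current at x = 1
rate_at_one = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J - 1)))
print(rate_at_one)              # ~261.8 spikes/s for these values
```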
nengo/neurons.py (outdated)

```python
x_range = np.linspace(-1, 1, 101)
rates = np.asarray([self.rates(np.ones_like(gain) * x, gain, bias)
                    for x in x_range])
last_zeros = np.maximum(np.argmax(rates > 1e-16, axis=0) - 1, 0)
```
I wonder if this should use `rates >= np.finfo(rates.dtype).tiny` instead of `rates > 1e-16` (which seems arbitrary)?
`finfo.tiny` seems a bit too small to me (e.g., ~1e-300 for float64). We could just say `rates > 0` if we only want to know if the value is positive; the idea here is that there is some point where the firing rate is "low enough" that we'll call it 0. But yeah, I don't have a good reason for picking 1e-16, other than that it is the cutoff we were already using in `gain_bias`. We could adjust both of those if we think that's too high/low.
I would go with `rates > 0` if there isn't a reason to be concerned about floating-point rounding errors (but I'm not sure about that part).
Changed it to 0 instead of 1e-16, here and in `gain_bias`.
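For concreteness, a standalone illustration (mine, not the diff) of how the revised `rates > 0` cutoff picks out the last zero-rate sample for each neuron:

```python
import numpy as np

# rates has shape (n_x_samples, n_neurons); each column is one neuron's
# response sampled along x. We want the index of the last sample at which
# the rate is still zero (clipped to 0 if the neuron fires everywhere).
rates = np.array([[0.0, 0.0],
                  [0.0, 5.0],
                  [10.0, 20.0]])
last_zeros = np.maximum(np.argmax(rates > 0, axis=0) - 1, 0)
print(last_zeros)  # [1 0]
```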
nengo/neurons.py
Outdated
intercepts = (1 - bias) / gain | ||
max_rates = 1.0 / (self.tau_ref - self.tau_rc * np.log1p( | ||
1.0 / (gain * (intercepts - 1) - 1))) | ||
max_rates = np.nan_to_num(max_rates) |
Under what conditions do the `max_rates` contain non-finite values? And if they do, wouldn't it be better to keep them that way and have the caller deal with it if they want to convert them to finite values?
`max_rates` will be non-finite if `-1 < gain * (intercepts - 1) - 1 < 0`. That can happen if `intercepts >= 1` and `gain` is some smallish value; if intercepts are in the normal (-1, 1) range, it only happens if `gain` is 0. In either of those cases (`intercepts >= 1` or `gain == 0`) the firing rate will probably be zero, so converting `nan` to 0 seemed reasonable.
I probably wouldn't just return `nan` and let the caller deal with it, since it may not be obvious to them why they're getting `nan`s. We could detect those conditions and raise a warning/exception, though; I don't have a strong preference.
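A quick standalone check of the stated condition (illustrative, not part of the PR; the example gain/bias values are mine):

```python
import numpy as np

# max_rates is non-finite exactly when -1 < gain * (intercepts - 1) - 1 < 0,
# since the log1p argument then drops below -1.
tau_rc, tau_ref = 0.02, 0.002
gain = np.array([2.0, 0.5])
bias = np.array([0.5, 0.4])               # second neuron: intercept > 1
intercepts = (1 - bias) / gain            # [0.25, 1.2]
arg = gain * (intercepts - 1) - 1         # [-2.5, -0.9]
with np.errstate(invalid="ignore"):
    max_rates = 1.0 / (tau_ref - tau_rc * np.log1p(1.0 / arg))
print(max_rates)  # first entry finite (~81.9), second entry nan
```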
I have to think more about these things, but at the moment it seems to me:
- Intercepts should never be >= 1 (*), or if they are, it should be treated as an error (not sure if that means an exception or non-finite values). Compare Raise exception on invalid gains. #1248, Degenerate tuning curves when intercept >= 1 #1212, Warn about intercepts out of radius #1231, Raise exception on invalid intercepts. #1243 (especially this comment).
- The gain should never be <= 0, or if it is, it should also be treated as an error. Potentially I could see returning max_rates of 0 for a gain of 0, but keeping the intercepts as `nan` (because there isn't a defined intercept, and returning 0 seems misleading to me, while `nan` at least indicates that something's weird). Or maybe an exception is the better solution.

(*) This only applies to LIF neurons (not sigmoid neurons), but this function only applies to LIF neurons.
Left the values as `nan` and added a warning.
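A minimal sketch of what that resolution might look like (same math as the diff above, but keeping `nan` and warning instead of calling `np.nan_to_num`; the exact warning text is my guess, not the merged code):

```python
import warnings

import numpy as np

# Assumed context: gain, bias, and self (the LIF instance) are in scope,
# as in the method shown in the diff above.
intercepts = (1 - bias) / gain
with np.errstate(divide="ignore", invalid="ignore"):
    max_rates = 1.0 / (self.tau_ref - self.tau_rc * np.log1p(
        1.0 / (gain * (intercepts - 1) - 1)))
if not np.all(np.isfinite(max_rates)):
    warnings.warn("Non-finite values detected in max_rates; this may mean "
                  "that gain is too small or intercepts are >= 1.")
```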
Added some fixups to address Jan's comments.
Made a small fixup for some wording suggestions, including shortening the docstring summary so it fits on one line, as is our convention.
Aside from that, LGTM! In particular, nice test 👍 Will merge shortly, if there are no objections.
Force-pushed from 49162bd to b2b7a90.
Each neuron type has a method that computes gains and biases given max_rates and intercepts. This adds a method to each neuron type that does the inverse, computing max_rates and intercepts from gains and biases.
Force-pushed from ee842cf to 292372c.
Motivation and context:
Each neuron type has a function that computes gains and biases given max_rates and intercepts. This adds a function to each neuron type that does the inverse, computing max_rates and intercepts from gains and biases.
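As a usage sketch (assuming the method name lands as proposed below; the example values and tolerances are illustrative):

```python
import numpy as np
import nengo

lif = nengo.LIF()
max_rates = np.array([200.0, 300.0])
intercepts = np.array([-0.5, 0.2])

gain, bias = lif.gain_bias(max_rates, intercepts)
# The inverse added by this PR:
mr, ic = lif.max_rates_intercepts(gain, bias)

assert np.allclose(mr, max_rates) and np.allclose(ic, intercepts)
```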
I don't love the name `max_rates_intercepts`, given the awkward double `_`, but I went with that for symmetry with `gain_bias`.

How has this been tested?
Added a new unit test; see `test_neurons.py:test_gain_bias`.
How long should this take to review?
Types of changes:
Checklist: