
Add functions to compute max_rates and intercepts #1334

Merged · 2 commits · Jul 19, 2017

Conversation

drasmuss
Member

@drasmuss drasmuss commented Jul 12, 2017

Motivation and context:

Each neuron type has a function that computes gains and biases given max_rates and intercepts. This adds a function to each neuron type that does the inverse, computing max_rates and intercepts from gains and biases.
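The relationship being inverted can be sketched for the LIF case. The following is a simplified standalone sketch, not nengo's actual implementation: the `tau_rc`/`tau_ref` values are assumed typical defaults, and the inverse expression follows the closed form quoted from the diff later in this conversation.

```python
import numpy as np

tau_rc, tau_ref = 0.02, 0.002  # assumed LIF membrane/refractory time constants

def gain_bias(max_rates, intercepts):
    # Input current at x = 1 that produces max_rates (LIF rate equation inverted),
    # then solve for the affine map J = gain * x + bias that crosses threshold
    # (J = 1) at the intercept and reaches that current at x = 1.
    x = 1.0 / (1 - np.exp((tau_ref - 1.0 / max_rates) / tau_rc))
    gain = (1 - x) / (intercepts - 1.0)
    bias = 1 - gain * intercepts
    return gain, bias

def max_rates_intercepts(gain, bias):
    # The inverse: recover the tuning-curve parameters from gain and bias.
    intercepts = (1 - bias) / gain
    max_rates = 1.0 / (tau_ref - tau_rc * np.log1p(
        1.0 / (gain * (intercepts - 1) - 1)))
    return max_rates, intercepts

max_rates = np.array([200.0, 300.0])
intercepts = np.array([-0.5, 0.2])
gain, bias = gain_bias(max_rates, intercepts)
mr, ic = max_rates_intercepts(gain, bias)
assert np.allclose(mr, max_rates) and np.allclose(ic, intercepts)
```

The round trip recovering the original `max_rates` and `intercepts` is essentially what the new unit test described below checks.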

I don't love the name max_rates_intercepts, given the awkward double _, but I went with that for symmetry with gain_bias.

How has this been tested?
Added a new unit test, see test_neurons.py:test_gain_bias

How long should this take to review?

  • Average (neither quick nor lengthy)

Types of changes:

  • New feature (non-breaking change which adds functionality)

Checklist:

  • I have read the CONTRIBUTING.rst document.
  • I have updated the documentation accordingly.
  • I have included a changelog entry.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

nengo/neurons.py Outdated

Parameters
----------
gain : ndarray(dtype=float64)
Collaborator

Usually we don't specify the dtype in the documentation, do we? And would this function fail if the data type is different? (It seems like it should work with any float length and might work with an int array too if it gets converted to float.)

Member Author
@drasmuss drasmuss Jul 14, 2017

We do in gain_bias, which is the parallel to this function. We could remove it from that one as well, but it should probably be the same.

Collaborator

idk, it just caught my attention as being unusual.

Member Author

I removed the dtype from all the docstrings in this class

nengo/neurons.py Outdated
Returns
-------
max_rates : ndarray(dtype=float64)
Maximum firing rates of neurons.
Collaborator

Might it be better to be specific that it is the maximum firing rate over the interval [-1, 1]? (Or actually not even that: with the default implementation it is the firing rate at x = 1, so a neuron type with an unusual tuning curve whose maximum is elsewhere had better override this function.)

I assume we are not really specific in this regard in a lot of places, so this might be something better suited to a different PR.

Member Author

Yeah I would say the place to document what we mean by "max_rate" is in the Ensemble.max_rates docstring (which does contain a good deal more information). I could copy that docstring to here as well though.

Collaborator

That sounds like a good place to me.

nengo/neurons.py Outdated
x_range = np.linspace(-1, 1, 101)
rates = np.asarray([self.rates(np.ones_like(gain) * x, gain, bias)
for x in x_range])
last_zeros = np.maximum(np.argmax(rates > 1e-16, axis=0) - 1, 0)
Collaborator

I wonder if this should use rates >= np.finfo(rates.dtype).tiny instead of rates > 1e-16 (which seems arbitrary)?

Member Author

finfo.tiny seems a bit too small to me (~2.2e-308 for float64). We could just say rates > 0 if we only want to know whether the value is positive; the idea here is that there is some point at which the firing rate is "low enough" that we'll call it 0. But yeah, I don't have a good reason for picking 1e-16, other than that it is the cutoff we were already using in gain_bias. We could adjust both if we think that's too high/low.

Collaborator

I would go with rates > 0 if there isn't a reason to be concerned about floating point rounding errors (but I'm not sure about that part).

Member Author

Changed it to 0 instead of 1e-16 here and in gain_bias
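The argmax trick from the diff above can be illustrated in isolation; this is a toy sketch with made-up rate curves, using the rates > 0 cutoff that was settled on here:

```python
import numpy as np

# Toy tuning curves: rows are increasing x samples, columns are neurons.
rates = np.array([[0.0,  0.0],
                  [0.0,  5.0],
                  [3.0,  9.0],
                  [7.0, 12.0]])

# argmax over a boolean array returns the index of the first True, i.e. the
# first x sample where each neuron fires; stepping back one gives the last
# silent sample. np.maximum clips neurons that already fire at the first
# sample (argmax 0) so the index never goes negative.
last_zeros = np.maximum(np.argmax(rates > 0, axis=0) - 1, 0)
# last_zeros == [1, 0]
```

Note that a neuron that never fires also yields argmax 0 and thus index 0, so callers still need to handle the all-zero column separately if it matters.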

nengo/neurons.py Outdated
intercepts = (1 - bias) / gain
max_rates = 1.0 / (self.tau_ref - self.tau_rc * np.log1p(
1.0 / (gain * (intercepts - 1) - 1)))
max_rates = np.nan_to_num(max_rates)
Collaborator

Under what conditions do the max_rates contain non-finite values? And if they do, wouldn't it be better to keep them that way and have the caller deal with it if they want to convert them to finite values?

Member Author
@drasmuss drasmuss Jul 14, 2017

max_rates will be non-finite if -1 < gain * (intercepts - 1) - 1 < 0. That can happen when intercepts >= 1 and gain is some smallish value; if intercepts are in the normal (-1, 1) range, it only happens when gain is 0. In either case (intercepts >= 1 or gain == 0) the firing rate will probably be zero, so converting nan to 0 seemed reasonable.

I probably wouldn't just return nan and let the caller deal with it, since it may not be obvious to them why they're getting nans. We could detect those conditions and raise a warning/exception though, I don't have a strong preference.
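The degenerate case can be demonstrated with a standalone sketch of the expression from the diff above (the `tau_rc`/`tau_ref` values and the helper name are assumptions, not nengo's code):

```python
import numpy as np

tau_rc, tau_ref = 0.02, 0.002  # assumed LIF time constants

def lif_max_rates(gain, bias):
    intercepts = (1 - bias) / gain
    # Suppress the RuntimeWarning so the nan result can be inspected directly.
    with np.errstate(divide="ignore", invalid="ignore"):
        return 1.0 / (tau_ref - tau_rc * np.log1p(
            1.0 / (gain * (intercepts - 1) - 1)))

# Normal case: intercept -0.5 is inside (-1, 1), so the rate is finite.
ok = lif_max_rates(np.array([4.0]), np.array([3.0]))

# Degenerate case: gain 0.5, bias 0.25 gives intercepts = 1.5 >= 1, so
# gain * (intercepts - 1) - 1 = -0.75 lies in (-1, 0), the log1p argument
# drops below -1, and the result is nan (the neuron never fires).
bad = lif_max_rates(np.array([0.5]), np.array([0.25]))
```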

Collaborator

I have to think more about these things, but at the moment it seems to me: this issue only arises for LIF neurons (not sigmoid neurons), but then this function is specific to LIF neurons anyway.

Member Author

Left the values as nan and added a warning
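The resolution (keep the nan values, warn the caller) might look like the following sketch; the helper name and warning message are hypothetical, not the actual change:

```python
import warnings
import numpy as np

def warn_on_nonfinite(max_rates):
    # Hypothetical helper: keep non-finite values rather than silently
    # converting them to 0 with np.nan_to_num, but tell the caller why
    # they are seeing nans.
    if not np.all(np.isfinite(max_rates)):
        warnings.warn("Non-finite values detected in max_rates; this "
                      "probably means that gain == 0 or intercepts >= 1 "
                      "for some neurons, which never fire.")
    return max_rates
```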

@drasmuss
Member Author

Added some fixups to address Jan's comments.

Member
@tbekolay tbekolay left a comment

Made a small fixup for some wording suggestions, including shortening the docstring summary so it fits on one line, as is our convention.

Aside from that, LGTM! In particular, nice test 👍 Will merge shortly, if there are no objections.

tbekolay and others added 2 commits July 19, 2017 14:18
Each neuron type has a method that computes gains and biases
given max_rates and intercepts. This adds a method to each
neuron type that does the inverse, computing max_rates and
intercepts from gains and biases.
@tbekolay tbekolay merged commit 292372c into master Jul 19, 2017
@tbekolay tbekolay deleted the max_rates_intercepts branch July 19, 2017 19:02