
nengo_dl builders for Loihi neurons #140

Merged
6 commits merged into master from keyword-accuracy on Feb 22, 2019

Conversation

@hunse (Contributor) commented Nov 12, 2018

This adds nengo_dl TensorFlow builders for the LoihiLIF and LoihiSpikingRectifiedLinear neuron types. It lets nengo_dl networks train with tuning curves like those of the chip neurons, enabling more accurate conversion of ANNs for the chip.

Based off #133
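For context, the chip's tuning curves differ from the ideal rate curves because spikes can only occur on discrete timesteps, which turns the smooth curve into a staircase. The sketch below illustrates that effect; it is not the actual nengo_loihi implementation, and the gain/bias values are arbitrary.

```python
import numpy as np
import nengo

def stepped_rates(neuron_type, x, gain, bias, dt=0.001):
    """Approximate chip-like tuning curves by rounding the inter-spike
    interval up to a whole number of timesteps (illustrative sketch only)."""
    smooth = neuron_type.rates(x, gain, bias)                   # ideal steady-state rates (Hz)
    periods = np.ceil(1.0 / (dt * np.maximum(smooth, 1e-10)))   # whole timesteps per spike
    return np.where(smooth > 0, 1.0 / (dt * periods), 0.0)

x = np.linspace(-1, 1, 201)
smooth = nengo.LIF().rates(x, gain=1.5, bias=1.0)
stepped = stepped_rates(nengo.LIF(), x, gain=1.5, bias=1.0)
```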

@hunse hunse force-pushed the keyword-accuracy branch from dfb2364 to f7b1be4 Nov 12, 2018

@drasmuss drasmuss self-requested a review Nov 13, 2018

@drasmuss drasmuss changed the base branch from master to conv2d-prep Nov 13, 2018

Resolved review thread on nengo_loihi/neurons.py (outdated):
loihi_rates = tf.where(J > self.zero, loihi_rates, self.zeros)

# --- compute LIF rates (for backward pass)
if self.config.lif_smoothing:

@drasmuss (Contributor) commented Nov 13, 2018

Just brainstorming for the future, but it might be interesting to try a version of loihi_rates where the steps are smoothed a bit, rather than throwing out the steps altogether. Same general idea as SoftLIF, but applying smoothing to all the discontinuities in the LoihiLIF function instead of just the threshold.

@hunse (Author, Contributor) commented Nov 13, 2018

That's an interesting idea. The main reason for the SoftLIF is to smooth the derivative, and here I'm not using the steps at all in the derivative, so the motivation would be a bit different. In practice, though, I think the steps of the tuning curve would get smoothed somewhat by noise in the network. If you train with noise (which I've been doing for some of the keyword network stuff), then you actually get some smoothing of the steps like you're describing, but it's probabilistic. Doing it deterministically like you suggest would make it more reliable for the network, and therefore let the network take better advantage of it. You could even (theoretically) use a derivative that takes into account the smoothed steps (maybe that's what you were thinking all along).

One downside is that you'd have another hyperparameter to tune (you could try to find a good value by estimating the amount of noise on the signals into each layer, and choosing the amount of smoothing to mimic this). The other problem is that I think a simple analytic equation for it (which I think is what we need for TensorFlow) might be difficult to find, keeping in mind that we need it to be relatively easy to compute on a GPU. The SoftLIF wasn't trivial to come up with, and that is a considerably simpler problem with only one discontinuity. Applying the same type of SoftLIF smoothing to each of the discontinuities in the stepped Loihi tuning curve would result in an overly complex function, I think. Maybe there's a way to do it efficiently, though.

@drasmuss (Contributor) commented Nov 13, 2018

> You could even (theoretically) use a derivative that takes into account the smoothed steps (maybe that's what you were thinking all along).

Yeah that's what I was thinking. The nice thing about this implementation is that we're using Loihi-like neurons on the forwards pass, but it would be nice if we could use them on the backwards pass as well. You're right though that I'm not sure what a compact expression for that would be.
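One way to get stepped rates on the forward pass while borrowing the smooth curve's gradient on the backward pass, without an analytic expression for the smoothed steps, is a stop-gradient construction. The following TensorFlow sketch is illustrative only and is not the builder code in this PR; the LIF parameters and thresholding are simplified.

```python
import tensorflow as tf

def loihi_lif_rates_surrogate(J, dt=0.001, tau_rc=0.02, tau_ref=0.002):
    """Stepped (Loihi-like) rates on the forward pass, smooth LIF gradient on
    the backward pass. Illustrative sketch only."""
    x = tf.maximum(J - 1.0, 1e-10)                               # above-threshold current
    smooth = 1.0 / (tau_ref + tau_rc * tf.math.log1p(1.0 / x))   # smooth LIF rates
    periods = tf.math.ceil(1.0 / (dt * smooth))                  # whole timesteps per spike
    stepped = 1.0 / (dt * periods)
    # forward value comes from `stepped`, gradient comes from `smooth`
    out = smooth + tf.stop_gradient(stepped - smooth)
    return tf.where(J > 1.0, out, tf.zeros_like(out))
```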

Resolved review thread on nengo_loihi/tests/test_neurons.py (outdated)
Resolved review thread on nengo_loihi/tests/test_neurons.py
Resolved review thread on nengo_loihi/neurons.py (outdated)

@tbekolay tbekolay force-pushed the conv2d-prep branch 3 times, most recently from 1c123d7 to 95a97b8 Nov 29, 2018

@tbekolay tbekolay changed the base branch from conv2d-prep to master Dec 5, 2018

@hunse (Author, Contributor) commented Jan 29, 2019

  • Enforce the same noise model for all grouped neuron operators, or group them only if they share a noise model

@hunse hunse force-pushed the keyword-accuracy branch from f7b1be4 to 700301e Jan 29, 2019

@hunse (Author, Contributor) commented Jan 30, 2019

A number of tests have started failing for weird reasons. test_learning.test_multiple_pes fails sporadically; hopefully with #151 it will be more predictable.

The static and docs tests are also failing because of problems with the CI setup. I'm not sure whether something changed with packages on pip, or with how the Travis servers are initialized, or something else.

@hunse (Author, Contributor) commented Jan 30, 2019

Also, the codecov result is bad because the new tests use importorskip('nengo_dl'), and it appears they're not getting run. I thought nengo_dl would be installed for the tests, though. It might have to do with the version (I might be using some newer nengo_dl features).
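For reference, importorskip skips (rather than fails) the tests when the dependency is missing or too old, so coverage silently drops. A minimal sketch; the version string here is illustrative, not the actual requirement:

```python
import pytest

# Skips the whole module if nengo_dl is missing or older than the requested
# version, instead of failing; if CI ends up skipping, coverage drops quietly.
nengo_dl = pytest.importorskip("nengo_dl", minversion="2.1.0")
```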

@hunse hunse force-pushed the keyword-accuracy branch 3 times, most recently from 6ae53b7 to 3a914bb Jan 31, 2019

@drasmuss drasmuss force-pushed the keyword-accuracy branch from 3a914bb to fa90c22 Feb 4, 2019

@drasmuss (Contributor) left a comment

I added a commit removing the intermediate RCNoiseBuilder class. My thinking is that this keeps things a bit simpler, and we can always add more complexity to the class hierarchy if it becomes necessary in the future (although I suspect that this part of the code will be a relatively niche use case, and if it did become significantly more complicated we'd probably want to factor it out into nengo-dl or nengo-extras or something).

Other than that, just two minor comments/questions.

Resolved review thread on nengo_loihi/neurons.py (outdated)
Resolved review thread on nengo_loihi/neurons.py (outdated)
@hunse (Author, Contributor) left a comment

Overall I like the changes in fa90c22. That hasattr call is the one thing that bugs me.

Resolved review thread on nengo_loihi/neurons.py (outdated)
Resolved review thread on nengo_loihi/tests/test_neurons.py (outdated)
@hunse (Author, Contributor) commented Feb 5, 2019

All LGTM now!

@hunse hunse force-pushed the keyword-accuracy branch 2 times, most recently from 6e49c11 to f59ebad Feb 5, 2019


@tbekolay tbekolay force-pushed the keyword-accuracy branch from 676429f to e10d314 Feb 9, 2019

@tbekolay (Member) left a comment

I made some pretty significant changes to how the Nengo DL builders are defined, installed, and tested in 82f389b. With those changes this LGTM; probably best if @hunse or @drasmuss takes a look at that commit before I merge this in.

Resolved review thread on nengo_loihi/tests/test_neurons.py
Resolved review thread on nengo_loihi/neurons.py (outdated)
@@ -245,7 +251,7 @@ def test_nengo_dl_neuron_grads(neuron_type, plt, allclose):
     else:
         # use the derivative of y_med (the smoothed Loihi tuning curve)
         dy_ref = np.zeros_like(j)
-        dy_ref[j > 0] = neuron_type.amplitude / (j[j > 0]*tau_ref1 + 1)**2
+        dy_ref[j > 0] = neuron_type.amplitude / (j[j > 0]*tau_ref1 + 1) ** 2
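As a sanity check, the dy_ref expression above is the exact derivative of y(j) = amplitude * j / (tau_ref1 * j + 1); whether that is exactly the y_med used in the test is an assumption, but the calculus can be verified numerically (parameter values below are illustrative):

```python
import numpy as np

amplitude, tau_ref1 = 1.0, 0.002                     # illustrative values
j = np.linspace(0.1, 10.0, 201)

y_med = amplitude * j / (tau_ref1 * j + 1)           # curve consistent with dy_ref
dy_analytic = amplitude / (tau_ref1 * j + 1) ** 2    # expression from the test
dy_numeric = np.gradient(y_med, j)                   # finite-difference check

assert np.allclose(dy_analytic, dy_numeric, rtol=1e-3)
```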

@hunse (Author, Contributor) commented Feb 11, 2019

PEP8 doesn't require space around all binary operators: https://www.python.org/dev/peps/pep-0008/#other-recommendations

I'm really not sure what PEP8 would recommend in this case. Because ** is a "lower priority" operator here, you could argue that it should have these spaces. However, because I'm used to cases where it's higher priority (PEP8 clearly prefers x**2 + 1 to x ** 2 + 1), I never put spaces around **. The spaced version looks really weird to me.

@hunse (Author, Contributor) commented Feb 11, 2019

Maybe it's partly because ** is already two characters, so there's a bit of "space" there. With the added spaces, it just seems like so much distance, especially because in math a superscript is small and so it feels even closer to whatever it's raising to a power.

@hunse (Author, Contributor) commented Feb 11, 2019

I mean, I also don't agree with PEP8: I prefer (a + b)*(c - d) to (a+b) * (c-d). So maybe it's correct to have the spaces here, but not in the x**2 + 1 case. I guess I just like things to be fairly consistent, so I'm not always thinking "do I need spaces here or not?" That's kind of the point of PEP8, but I feel like here it actually makes these decisions harder.

@tbekolay (Member) commented Feb 12, 2019

Yeah, I prefer putting spaces around all operators so that I don't need to make any decisions about whether some equation will be easier to parse with certain spacing. Since I do that consistently, it reads better to me because I expect spaces around all operators. I would be in favor of enforcing that with flake8; for now the **2 just seemed weird to me, but I can change it back, no biggie.

@drasmuss (Contributor) commented Feb 12, 2019

Yeah I prefer spaces around all operators for the same reason. It just removes an extra decision, and makes it easy to apply automatic formatting.

@hunse (Author, Contributor) commented Feb 11, 2019

Other than those two coverage points, the changes look good to me.

@hunse hunse force-pushed the keyword-accuracy branch from 39b4b0b to fe10c37 Feb 12, 2019

@hunse hunse force-pushed the keyword-accuracy branch from 9f6d3ee to e2496ac Feb 14, 2019

@tbekolay tbekolay force-pushed the keyword-accuracy branch 2 times, most recently from c0f33d6 to 96c6635 Feb 14, 2019

@hunse hunse force-pushed the keyword-accuracy branch 3 times, most recently from 549b29a to 10a09b5 Feb 20, 2019

@tbekolay tbekolay force-pushed the keyword-accuracy branch 4 times, most recently from c604ff4 to 032b616 Feb 21, 2019

hunse and others added some commits Feb 5, 2019

Make learning deterministic
In the emulator, emulator/interface.py and passthrough.py now
use OrderedDicts where needed, since order is not guaranteed in
normal dictionaries.

On the chip, we now set the seed for all random objects so that
we have control over the on-chip randomness. This commit includes
a test to check that the same simulator seed results in the same
learning, and that different simulator seeds result in different learning.

Ensure all objects are labelled if possible
Specifically when removing passthrough nodes and splitting,
and when making probe connections in the builder.

Remove unnecessary object superclasses
Python 3 removed the need for these.

Added nengo_dl builders for Loihi neurons
Specifically, we add NengoDL builders for `LoihiLIF` and
`LoihiSpikingRectifiedLinear`.

These builders also include an optional spike noise model to
improve training.

Co-authored-by: Trevor Bekolay <tbekolay@gmail.com>

Rename ClusterException to ClusterError
This fits better with the existing Nengo naming scheme.

@tbekolay tbekolay force-pushed the keyword-accuracy branch from 032b616 to 025cdf9 Feb 22, 2019

@tbekolay tbekolay merged commit 025cdf9 into master Feb 22, 2019

0 of 2 checks passed

codecov/patch: 98.62% of diff hit (target 100%)
codecov/project: 99.27% (-0.07%) compared to 40981e4

@tbekolay tbekolay deleted the keyword-accuracy branch Feb 22, 2019
