Conversation
force-pushed from dfb2364 to f7b1be4
nengo_loihi/neurons.py (outdated)
```python
loihi_rates = tf.where(J > self.zero, loihi_rates, self.zeros)

# --- compute LIF rates (for backward pass)
if self.config.lif_smoothing:
```
Just brainstorming for the future, but it might be interesting to try a version of loihi_rates where the steps are smoothed a bit, rather than throwing out the steps altogether. Same general idea as SoftLIF, but applying smoothing to all the discontinuities in the LoihiLIF function instead of just the threshold.
That's an interesting idea. The main reason for the SoftLIF is to smooth the derivative, and here I'm not using the steps at all in the derivative, so the motivation would be a bit different. In practice, though, I think the steps of the tuning curve get smoothed somewhat anyway due to noise in the network. If you train with noise (which I've been doing for some of the keyword network stuff), then you'll actually get some smoothing of the steps like you're describing, though it will be probabilistic. Doing it deterministically like you suggest would have the benefit of making it more reliable, and therefore let the network take better advantage of it. You could even (theoretically) use a derivative that takes the smoothed steps into account (maybe that's what you were thinking all along).
One downside is that you'd have another hyperparameter to tune (you could try to find a good value by estimating the amount of noise on the signals into each layer, and choosing the amount of smoothing to mimic this). The other problem is that a simple analytic equation for it (which I think is what we need for TensorFlow) might be difficult to find, keeping in mind that we need it to be relatively cheap to compute on a GPU. The SoftLIF wasn't trivial to come up with, and that is a considerably simpler problem with only one discontinuity. Applying the same type of SoftLIF smoothing to each of the discontinuities in the stepped Loihi tuning curve would result in an overly complex function, I think. Maybe there's a way to do it efficiently, though.
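To make that concrete, here's a rough NumPy sketch of the kind of thing being discussed. The constants, names, and the discretization are all illustrative, not the actual nengo_loihi implementation: the steps come from quantizing the inter-spike interval to whole timesteps, and the SoftLIF-style smoothing only softens the hard threshold with a softplus. Smoothing every step, rather than just the threshold, is the part that doesn't have an obvious compact form.

```python
import numpy as np

# illustrative constants, not taken from the PR
dt = 0.001
tau_rc, tau_ref = 0.02, 0.002
amplitude = 1.0

def lif_rates(j):
    """Standard smooth LIF rates (zero at or below threshold j <= 0)."""
    j = np.asarray(j, dtype=float)
    out = np.zeros_like(j)
    pos = j > 0
    out[pos] = amplitude / (tau_ref + tau_rc * np.log1p(1.0 / j[pos]))
    return out

def stepped_loihi_like_rates(j):
    """Loihi-like rates: quantize the inter-spike interval to whole
    timesteps, which puts steps (discontinuities) in the tuning curve."""
    j = np.asarray(j, dtype=float)
    out = np.zeros_like(j)
    pos = j > 0
    period = tau_ref + tau_rc * np.log1p(1.0 / j[pos])  # smooth ISI
    out[pos] = amplitude / (dt * np.ceil(period / dt))  # quantized ISI
    return out

def softlif_rates(j, sigma=0.02):
    """SoftLIF-style smoothing of the threshold only: replace the hard
    rectification of j with a softplus of width sigma."""
    j = np.asarray(j, dtype=float)
    j_soft = sigma * np.logaddexp(0.0, j / sigma)  # smooth max(j, 0)
    return amplitude / (tau_ref + tau_rc * np.log1p(1.0 / j_soft))

j = np.linspace(-1, 10, 1001)
# stepped_loihi_like_rates(j) shows the discontinuities; the open question
# is whether each of those steps could get its own softplus-like smoothing.
```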
> You could even (theoretically) use a derivative that takes into account the smoothed steps (maybe that's what you were thinking all along).
Yeah that's what I was thinking. The nice thing about this implementation is that we're using Loihi-like neurons on the forwards pass, but it would be nice if we could use them on the backwards pass as well. You're right though that I'm not sure what a compact expression for that would be.
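One way to wire up that forward/backward split in TensorFlow would be something like the sketch below, using tf.custom_gradient: stepped Loihi-like rates on the forward pass, with the derivative of the smooth LIF curve on the backward pass. This is only an illustration with made-up constants, not how the builders in this PR are actually structured, and it still ignores the steps in the gradient.

```python
import tensorflow as tf

# illustrative constants
dt, tau_rc, tau_ref, amplitude = 0.001, 0.02, 0.002, 1.0

@tf.custom_gradient
def loihi_rates_smooth_grad(j):
    """Forward: stepped Loihi-like rates. Backward: gradient of the smooth
    LIF rate curve (the steps are ignored in the derivative)."""
    j_pos = tf.maximum(j, 1e-10)
    period = tau_ref + tau_rc * tf.math.log1p(1.0 / j_pos)  # smooth ISI
    stepped = amplitude / (dt * tf.math.ceil(period / dt))  # quantized ISI
    stepped = tf.where(j > 0, stepped, tf.zeros_like(j))

    def grad(dy):
        # d/dj of amplitude / (tau_ref + tau_rc * log(1 + 1/j))
        d = (amplitude * tau_rc
             / (j_pos * (j_pos + 1.0)
                * (tau_ref + tau_rc * tf.math.log1p(1.0 / j_pos)) ** 2))
        return dy * tf.where(j > 0, d, tf.zeros_like(j))

    return stepped, grad

# usage sketch (TF2 eager):
# with tf.GradientTape() as tape:
#     j = tf.linspace(0.1, 10.0, 100)
#     tape.watch(j)
#     r = loihi_rates_smooth_grad(j)
# dr_dj = tape.gradient(r, j)
```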
force-pushed from 1c123d7 to 95a97b8
force-pushed from f7b1be4 to 700301e
A number of tests have started failing for weird reasons. The static and docs tests are also failing because of problems with the CI setup. Not sure if it's because something changed with packages in pip, or how the Travis servers are initialized, or what.
Also, the codecov is bad because the new tests are
force-pushed from 6ae53b7 to 3a914bb
force-pushed from 3a914bb to fa90c22
drasmuss left a comment
I added a commit removing the intermediate RCNoiseBuilder class. My thinking is that this keeps things a bit simpler, and we can always add more complexity to the class hierarchy if it becomes necessary in the future (although I suspect that this part of the code will be a relatively edge use case, and if it did become significantly more complicated we'd probably want to factor it out into nengo-dl or nengo-extras or something).
Other than that, just two minor comments/questions.
All LGTM now!
force-pushed from 6e49c11 to f59ebad
force-pushed from 676429f to e10d314
nengo_loihi/tests/test_neurons.py (outdated)
```diff
  # use the derivative of y_med (the smoothed Loihi tuning curve)
  dy_ref = np.zeros_like(j)
- dy_ref[j > 0] = neuron_type.amplitude / (j[j > 0]*tau_ref1 + 1)**2
+ dy_ref[j > 0] = neuron_type.amplitude / (j[j > 0]*tau_ref1 + 1) ** 2
```
PEP8 doesn't require space around all binary operators: https://www.python.org/dev/peps/pep-0008/#other-recommendations
I'm really not sure what PEP8 would recommend in this case. Because ** is a "lower priority" operator here, you could argue that it should have these spaces. However, because I'm used to cases where it's higher priority (PEP8 clearly prefers x**2 + 1 to x ** 2 + 1), I never put spaces around **. With them it looks really weird to me.
Maybe it's partly because ** is already two characters, so there's a bit of "space" there. With the added spaces, it just seems like so much distance, especially because in math a superscript is small and so it feels even closer to whatever it's raising to a power.
I mean, I also don't agree with PEP8: I prefer (a + b)*(c - d) to (a+b) * (c-d). So maybe it's correct to have the spaces here, but not in the x**2 + 1 case. I guess I just like things to be fairly consistent, so I'm not always thinking "do I need spaces here or not?" That's kind of the point of PEP8, but I feel like here it actually makes these decisions harder.
Yeah, I prefer putting spaces around all operators so that I don't need to decide whether some equation will be easier to parse with a particular spacing. And since I do that, I expect spaces around all operators, so it reads more naturally to me. I would be in favor of enforcing that with flake8; for now the **2 just seemed weird to me, but I can change it back, no biggie.
Yeah I prefer spaces around all operators for the same reason. It just removes an extra decision, and makes it easy to apply automatic formatting.
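As an aside on the substance of that dy_ref line rather than the spacing: amplitude / (j*tau_ref1 + 1)**2 is exactly the analytic derivative of y(j) = amplitude * j / (tau_ref1 * j + 1), so if that's the form y_med takes in the test, a quick finite-difference check (with illustrative constants) should agree with it:

```python
import numpy as np

amplitude, tau_ref1 = 1.0, 0.0005  # illustrative values

def y(j):
    # smooth curve whose derivative is amplitude / (j*tau_ref1 + 1)**2
    return amplitude * j / (j * tau_ref1 + 1)

j = np.linspace(0.1, 50, 200)
eps = 1e-6
dy_numeric = (y(j + eps) - y(j - eps)) / (2 * eps)
dy_analytic = amplitude / (j * tau_ref1 + 1) ** 2
assert np.allclose(dy_numeric, dy_analytic, rtol=1e-5)
```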
Other than those two coverage points, the changes look good to me.
force-pushed from 39b4b0b to fe10c37
force-pushed from 9f6d3ee to e2496ac
force-pushed from c0f33d6 to 96c6635
force-pushed from 549b29a to 10a09b5
force-pushed from c604ff4 to 032b616
In the emulator, emulator/interface.py and passthrough.py now use OrderedDicts where needed, since order is not guaranteed in normal dictionaries. On the chip, we now set the seed for all random objects so that we have control over the on-chip randomness. This commit includes a test to check that the same simulator seed results in the same learning, and different simulator seeds result in different learning.
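A minimal version of that seed test might look like the sketch below. The network, the run length, and the assumption that nengo_loihi.Simulator takes a seed argument are all illustrative; the actual test in this commit may be structured differently.

```python
import numpy as np
import nengo
import nengo_loihi

def run_learning(sim_seed):
    with nengo.Network(seed=0) as net:
        stim = nengo.Node(0.5)
        pre = nengo.Ensemble(50, 1)
        post = nengo.Ensemble(50, 1)
        error = nengo.Ensemble(50, 1)
        nengo.Connection(stim, pre)
        conn = nengo.Connection(pre, post,
                                learning_rule_type=nengo.PES())
        # error = post - stim, fed back into the PES rule
        nengo.Connection(post, error)
        nengo.Connection(stim, error, transform=-1)
        nengo.Connection(error, conn.learning_rule)
        probe = nengo.Probe(post, synapse=0.02)

    with nengo_loihi.Simulator(net, seed=sim_seed) as sim:
        sim.run(0.5)
    return sim.data[probe]

# same simulator seed -> identical learning; different seeds -> different
assert np.allclose(run_learning(3), run_learning(3))
assert not np.allclose(run_learning(3), run_learning(4))
```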
Specifically when removing passthrough nodes and splitting, and when making probe connections in the builder.
Python 3 removed the need for these.
This fits better with the existing Nengo naming scheme.
Specifically, we add NengoDL builders for `LoihiLIF` and `LoihiSpikingRectifiedLinear`. These builders also include an optional spike noise model to improve training. Co-authored-by: Trevor Bekolay <tbekolay@gmail.com>
force-pushed from 032b616 to 025cdf9
This adds nengo_dl TensorFlow builders for the LoihiLIF and LoihiSpikingRectifiedLinear neuron types. This lets nengo_dl networks train with tuning curves like those of the chip neurons, allowing for more accurate conversion of ANNs for the chip.
Based off #133
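As a rough usage sketch (import path and simulator arguments assumed, and the training call omitted since it depends on the nengo_dl version): a plain nengo model built with LoihiLIF can then be run in nengo_dl, and these builders should give it the chip-like tuning curves on the forward pass.

```python
import numpy as np
import nengo
import nengo_dl
from nengo_loihi.neurons import LoihiLIF  # import path assumed

with nengo.Network(seed=0) as net:
    inp = nengo.Node(np.zeros(2))
    ens = nengo.Ensemble(50, 2, neuron_type=LoihiLIF())
    nengo.Connection(inp, ens)
    probe = nengo.Probe(ens, synapse=0.01)

# With the PR's NengoDL builders registered (via the nengo_loihi import,
# if I'm reading the diff right), this simulation should use the chip-like
# stepped tuning curves, so a trained network ports to nengo_loihi with
# less conversion error.
with nengo_dl.Simulator(net, minibatch_size=1) as sim:
    sim.run_steps(100)
```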