
Added keras-to-loihi IPython notebook example #281

Merged: 2 commits merged into master on Apr 20, 2020

Conversation

@studywolf (Collaborator) commented Apr 2, 2020

Would appreciate comments! At this point I would wait for nengo/nengo-dl#146 to be addressed and then finish up the last block, show it actually running on loihi and call it good. Thoughts?

@hunse (Collaborator) commented Apr 9, 2020

I just took a look. I think this will be a good balance to the new CIFAR-10 notebook, showing another way of getting a deep network on Loihi. Here's some feedback:

  • Now that the CIFAR-10 example is added to this repo (#282), we should probably mention it at the start of this notebook, a) to make readers aware that it exists, and b) to highlight the difference between that approach (entirely in Nengo/NengoDL) and this approach (coming from Keras using the NengoDL converter).
  • I don't think I would include the "Dense" input layer. This isn't a method that I've ever seen used, and I don't think it's practical. Since the input layer is being run entirely off-chip, this results in a lot of off-chip computation.
  • In general, I think the input layers receive a bit too much emphasis, which detracts from some of the more practical considerations when putting a model on Loihi (e.g. checking/tuning firing rates). I'm not sure if you need to compare multiple types of input layers at all, since that's typically not a key design decision (i.e. everybody could use the conv layer method, and it would work fine). If you do want to make the comparison, I would do it in a separate section (probably later in the notebook once you've gone through the model end-to-end, but you could do it earlier if you want); I would not keep that comparison going throughout the whole notebook.
  • When you're getting it running on Loihi, you'll need to set the block_shape for your ensembles to tell NengoLoihi how to split them to fit on Loihi cores. I think it's fine to not go into detail about that, though, since that's covered in more depth in the CIFAR-10 example and in the docs.
  • The "Neural Activities" plots don't really show a whole lot. The main thing they do is show whether some neurons are spiking more than once per timestep. That's important, but I'm wondering if there's a better way to show that as well as other info about firing rates in the model. Maybe something like computing the firing rates of each neuron on each example, and then showing a histogram of those rates.
  • I think it would be good to talk more about how to choose scale_firing_rates. With a better method of measuring/showing firing rates in the model, I think this will be pretty straightforward (i.e. you could compare three situations with it too low, too high, and just right 👱‍♀️).
  • When plotting the Loihi neuron tuning curves, I wouldn't plot so far in the negative direction (i.e. make the min just below 0). I'd also make the max a bit higher, particularly for the ReLU, to show that off Loihi it can increase above 1000 Hz, but on Loihi it can't.
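To illustrate that last point numerically, here's a minimal sketch of why on-chip rates cap at one spike per timestep. This uses a simplified model of Loihi's rate discretization (integer inter-spike intervals), which is an assumption for illustration, not NengoLoihi's actual implementation:

```python
import numpy as np

dt = 0.001  # simulation timestep (s)
x = np.linspace(0, 2000, 201)  # input values
relu_rates = np.maximum(x, 0)  # ideal ReLU firing rates off-chip

# simplified on-chip model: a neuron fires at most once per timestep,
# and the inter-spike interval is an integer number of timesteps
periods = np.ceil(1.0 / (dt * np.maximum(relu_rates, 1e-10)))
loihi_rates = np.where(relu_rates > 0, 1.0 / (dt * periods), 0.0)

# off-chip rates grow without bound, while on-chip rates saturate at 1/dt
print(relu_rates.max())   # 2000 Hz
print(loihi_rates.max())  # 1000 Hz for dt = 1 ms
```

Plotting `loihi_rates` against `relu_rates` over this range would show exactly the divergence above 1000 Hz described in the bullet.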

@studywolf (Collaborator, Author) commented Apr 13, 2020

Thanks for the great feedback!

  • Now that the CIFAR-10 example is added to this repo (#282), we should probably mention it at the start of this notebook, a) to make readers aware that it exists, and b) to highlight the difference between that approach (entirely in Nengo/NengoDL) and this approach (coming from Keras using the NengoDL converter).

done!

  • I don't think I would include the "Dense" input layer. This isn't a method that I've ever seen used, and I don't think it's practical. Since the input layer is being run entirely off-chip, this results in a lot of off-chip computation.

removed!

  • In general, I think the input layers receive a bit too much emphasis, which detracts from some of the more practical considerations when putting a model on Loihi (e.g. checking/tuning firing rates). I'm not sure if you need to compare multiple types of input layers at all, since that's typically not a key design decision (i.e. everybody could use the conv layer method, and it would work fine). If you do want to make the comparison, I would do it in a separate section (probably later in the notebook once you've gone through the model end-to-end, but you could do it earlier if you want); I would not keep that comparison going throughout the whole notebook.

removed the other input layers, added more discussion about tuning firing rates 👍
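For reference, the conversion with firing-rate scaling might look roughly like the following sketch. The tiny placeholder model and the parameter values here are assumptions for illustration, not the notebook's actual ones:

```python
import nengo
import nengo_dl
import tensorflow as tf

# a tiny placeholder Keras model standing in for the notebook's network
inp = tf.keras.Input(shape=(28, 28, 1))
conv = tf.keras.layers.Conv2D(4, 3, activation=tf.nn.relu)(inp)
out = tf.keras.layers.Dense(10)(tf.keras.layers.Flatten()(conv))
model = tf.keras.Model(inputs=inp, outputs=out)

converter = nengo_dl.Converter(
    model,
    # replace rate ReLUs with spiking neurons for deployment
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=100,  # boost rates; outputs are rescaled to compensate
    synapse=0.005,           # filter the spike trains between layers
)
```

Since the firing-rate scaling divides the neuron amplitudes by the same factor, the network's function is unchanged in the rate approximation; only the spike statistics improve.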

  • When you're getting it running on Loihi, you'll need to set the block_shape for your ensembles to tell NengoLoihi how to split them to fit on Loihi cores. I think it's fine to not go into detail about that, though, since that's covered in more depth in the CIFAR-10 example and in the docs.

Still have to do this; I was blocked before I got to actually running it on Loihi.
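For when that's unblocked, a rough sketch of what setting block_shape might look like. The ensemble and the shapes are placeholders, not the notebook's actual layers:

```python
import nengo
import nengo_loihi

with nengo.Network() as net:
    # placeholder standing in for one of the converted conv-layer ensembles
    ens = nengo.Ensemble(32 * 32 * 8, 1)

nengo_loihi.add_params(net)  # adds the block_shape config attribute
# split the (32, 32, 8) layer into (16, 16, 4) chunks, one per Loihi block,
# so each piece fits within a single core's compartment limit
net.config[ens].block_shape = nengo_loihi.BlockShape((16, 16, 4), (32, 32, 8))
```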

  • The "Neural Activities" plots don't really show a whole lot. The main thing they do is show whether some neurons are spiking more than once per timestep. That's important, but I'm wondering if there's a better way to show that as well as other info about firing rates in the model. Maybe something like computing the firing rates of each neuron on each example, and then showing a histogram of those rates.

A histogram of the firing rates is a good idea! Not added in today's update but will do.
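A quick sketch of what that histogram computation could look like, using random spike data in place of the probe output (the shapes and spiking probability here are made up):

```python
import numpy as np

dt = 0.001  # simulation timestep (s)
n_steps, n_neurons = 1000, 64

# toy spike data with shape (n_steps, n_neurons); in the notebook this
# would come from a probe on a layer's neurons instead
rng = np.random.RandomState(0)
spikes = (rng.rand(n_steps, n_neurons) < 0.05).astype(float) / dt

# firing rate of each neuron = number of spikes / simulation time
rates = (spikes > 0).sum(axis=0) / (n_steps * dt)

# histogram of per-neuron firing rates (would be passed to plt.bar/plt.hist)
counts, bin_edges = np.histogram(rates, bins=20)
```

This collapses the hard-to-read spike rasters into one picture of whether the layer's rates sit in a healthy range.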

  • I think it would be good to talk more about how to choose scale_firing_rates. With a better method of measuring/showing firing rates in the model, I think this will be pretty straightforward (i.e. you could compare three situations with it too low, too high, and just right 👱‍♀️).

Added a bit more discussion, this is also discussed in the Keras->SNN notebook.

  • When plotting the Loihi neuron tuning curves, I wouldn't plot so far in the negative direction (i.e. make the min just below 0). I'd also make the max a bit higher, particularly for the ReLU, to show that off Loihi it can increase above 1000 Hz, but on Loihi it can't.

changed!

The rework adds discussion of tuning different layers with different scale_firing_rates values. I was blocked, though, on converting to Loihi, since sparse connections from off-chip to on-chip are not supported.

@studywolf changed the title from "WIP: Added keras-to-loihi IPython notebook example" to "Added keras-to-loihi IPython notebook example" on Apr 15, 2020
@studywolf (Collaborator, Author) commented Apr 15, 2020

Reworked the example with Eric's comments; it's longer, but goes through all of the steps. I think we probably can't address the block_shape stuff too many times; it's mostly references to other material, but there's a quick discussion of it in there.

I left the spike plots as they were, because I feel it's useful to see them plotted against time, and it conveys the information pretty well, if not ideally.

@hunse force-pushed the keras-to-loihi_example branch 2 times, most recently from 4b4cbb1 to 73af096, on Apr 15, 2020
@hunse (Collaborator) commented Apr 16, 2020

I've made some changes, just code cleanup and wording changes for the most part. I also made a couple of performance changes (smaller batch size and fewer test examples) just to make it run a bit more quickly, particularly on weaker machines.

One section I'm a bit worried about is the "Training with Loihi Neurons" section. I recommend training with firing rate regularization when training with Loihi neurons, since it keeps the optimization honest. Otherwise, the optimization could hypothetically shift to having low firing rates for all layers, since this would avoid the Loihi neuron discretization error, but wouldn't work well on the chip because of too few spikes. In practice, I have no idea how likely this is to happen, or what might affect it. Obviously in your case, it's not happening, and the final firing rates are still OK. Maybe we should just mention that full firing rate regularization is possible/recommended, and that the CIFAR-10 notebook goes into this in more detail.
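As a sketch of the idea (not nengo_dl's actual training API), the regularization just adds a penalty term pulling probed firing rates toward a target, alongside the task loss. The target rate and weight below are assumed values for illustration:

```python
import numpy as np

target_rate = 250.0  # Hz, an assumed target firing rate
reg_weight = 1e-3    # assumed weight on the regularization term

def regularized_loss(task_loss, rates):
    """Add a firing-rate penalty to the task loss.

    rates: array of per-neuron firing rates (e.g. from a neuron probe).
    """
    rate_penalty = np.mean((rates - target_rate) ** 2)
    return task_loss + reg_weight * rate_penalty

# rates near the target add almost no penalty...
low = regularized_loss(1.0, np.full(10, 249.0))
# ...while rates collapsed toward zero are penalized heavily, so the
# optimizer can't "cheat" by silencing the network
high = regularized_loss(1.0, np.full(10, 5.0))
```

In NengoDL this is implemented by probing a layer's neurons and including that probe in the training loss, as the CIFAR-10 example shows.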

The only other thing I want to check is running on an actual Loihi. Have you tried this? Looking at the code, I think that having probes on all the hidden layers will be problematic on the actual Loihi. It might be possible to do it with precompute=False, but really I'm not sure if the output plots for the hidden layers are terribly valuable there (we don't discuss them at all, and they're all just hard-to-read lines between 0 and 1). So I think we should just get rid of them. I currently can't log in to the INRC server to test, though.

@studywolf (Collaborator, Author) commented Apr 17, 2020

I like the idea of mentioning that regularization is the better way to go about it and directing people to the CIFAR-10 notebook, since this shows people another way to go about things with slightly less overhead. Although I'd also be fine changing over to the regularization.

I have not tried running it on the actual Loihi! I agree the probes for the hidden layers are not helpful in the final plots, so we can just take those out. I've added a commit that does these things! How's that look?

@studywolf (Collaborator, Author) commented Apr 17, 2020

testing on INRC now

@studywolf (Collaborator, Author) commented Apr 17, 2020

Hmm, I'm getting:

Exception: All the Spike Counters are being Used : Cannot create a new one. Spike Counters are limited by the hardware. Use activity counters instead if more compartments need to be probed for spikes. See tutorial_22_activity_probe.

Can you take a look when you get a minute?

@studywolf (Collaborator, Author) commented Apr 17, 2020

Tested and running on the Loihi using INRC! I did have to add this line:

os.environ["SLURM"] = "1"

Not sure if that should be included, or what the practice is for these notebooks.

I also updated the text, added a couple of lines about the additional changes to the network and explaining them.

@hunse force-pushed the keras-to-loihi_example branch from db18a92 to 497910b on Apr 17, 2020
@hunse (Collaborator) commented Apr 17, 2020

Looks good! I made a few more minor changes to wording, and I think it's good to go! I also successfully ran it on Loihi.

I don't think we should put the SLURM setting in the notebook, since that's quite platform-dependent. Those who have their own board might not use SLURM, and those who use the INRC server should know how to get things going on SLURM.

@studywolf (Collaborator, Author) commented Apr 17, 2020

Awesome! 👍 👍

@hunse force-pushed the keras-to-loihi_example branch 2 times, most recently from 0ac3714 to 8d0022d, on Apr 20, 2020
@hunse (Collaborator) approved these changes on Apr 20, 2020, leaving a comment:

I added an additional commit to make sure the nahuku32 board doesn't hang when we use it in tests. It appears that the problem is with one of the two nahuku32 boards only, so our tests were sometimes hanging depending on which board we got.

(Review comment on nengo_loihi/tests/test_conv.py, marked outdated and resolved)
@hunse hunse force-pushed the keras-to-loihi_example branch from 8d0022d to e046da9 Compare Apr 20, 2020
studywolf and others added 2 commits on Apr 20, 2020:
  • Co-authored-by: Eric Hunsberger <eric.hunsberger@appliedbrainresearch.com>
  • A current problem with the nahuku32 boards requires the `--skip-power=1` option for them to boot successfully.
@hunse force-pushed the keras-to-loihi_example branch from e046da9 to 47a0633 on Apr 20, 2020
@hunse merged commit 47a0633 into master on Apr 20, 2020 (2 of 3 checks passed)
@hunse deleted the keras-to-loihi_example branch on Apr 20, 2020