
Configure defaults for optimal Loihi performance #126

Merged

@tbekolay merged 3 commits into master from defaults on Jun 11, 2019

Conversation

@hunse (Collaborator) commented on Oct 15, 2018

In #73, we started talking about whether the current defaults are actually the best. I chose them pretty arbitrarily, so it's unlikely that they are.

This PR is a place for us to collect changes to the defaults that improve performance across a variety of models. With many of the Nengo benchmarks now becoming available, those could be a good way to test potential changes.

To start, I'm correcting something that was a mistake on my part. Since intercepts happen after encoders, we actually just need to lower the upper bound on intercepts (away from one) to avoid high gains. The lower bound can (and I think should) stay at -1. I'd like to test this change on some benchmarks, though, to see if it's actually better.

EDIT: I just went and rebased this to the integrator-accuracy branch (#124), since that makes some pretty significant changes to how we choose weights that could have an effect on accuracy measurements.
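
As a concrete sketch of the intercept change (plain Nengo, illustrative only; the 0.5 upper bound is a placeholder, not the default chosen in this PR):

```python
import nengo

with nengo.Network():
    # Only the upper bound is pulled away from 1 to avoid very high gains;
    # the lower bound stays at -1. The 0.5 value here is illustrative only.
    ens = nengo.Ensemble(
        n_neurons=100,
        dimensions=1,
        intercepts=nengo.dists.Uniform(-1.0, 0.5),
    )
```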

@hunse force-pushed the defaults branch from d54169f to 46b2ebe on Oct 15, 2018

@hunse (Collaborator, Author) commented on Oct 15, 2018

This is also related to #83, since how we scale weights will determine how sensitive we are to large weights/gains, which is the reason for limiting intercepts.

@tbekolay (Member) commented on May 22, 2019

After rebasing, the only commit not yet incorporated into master is bd5df97, which I believe is an improvement over the current state of master.

@hunse force-pushed the defaults branch from bd5df97 to 6079eeb on May 22, 2019

@drasmuss force-pushed the defaults branch from 6079eeb to e9ec8b6 on Jun 3, 2019

@drasmuss (Member) commented on Jun 3, 2019

This causes one of the nengo core tests to fail (see https://travis-ci.com/nengo/nengo-loihi/jobs/205047436#L3674).

@hunse (Collaborator, Author) commented on Jun 3, 2019

Hmm, that's strange. That test is making sure that inhibition works correctly to silence neurons.

In core Nengo, you get the output you'd expect. The bottom plot shows neuron voltages.
test_connection.test_node_to_neurons.pdf

Even on master of this repository, some neurons are surprisingly firing, though not enough to break the tolerances and fail the test.
test_connection.test_node_to_neurons.pdf

This branch seems to make things worse, with more neurons firing when they should be silent.
test_connection.test_node_to_neurons.pdf

So something is definitely going on here that shouldn't be, but the problem seems to go beyond this branch.

@hunse (Collaborator, Author) commented on Jun 3, 2019

Figured it out. When we do node->neuron connections, we do the transform on the host (before we turn the value into spikes). This works fine for transforms that keep the output in the range [-1, 1], but for ones like in this test (the input is 1 and the transform is -5), it doesn't work so well. When neurons have larger biases (as in this PR), there's more bias that the negative input has to overcome, so it does worse.

Not sure what the best fix is.
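
For reference, a minimal reconstruction of the failing scenario (a sketch, not the test's exact code): a host node driving on-chip neurons through an inhibitory transform whose output is well outside [-1, 1].

```python
import numpy as np
import nengo

with nengo.Network():
    inhibit = nengo.Node(1.0)
    ens = nengo.Ensemble(n_neurons=30, dimensions=1)
    # Node-to-neurons connection: the -5 transform pushes the signal far
    # outside [-1, 1], so applying it on the host before spike encoding
    # leaves it unable to overcome larger neuron biases.
    nengo.Connection(
        inhibit, ens.neurons, transform=-5 * np.ones((ens.n_neurons, 1))
    )
```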

@hunse (Collaborator, Author) commented on Jun 5, 2019

Ok, we're now putting weights on the chip when doing any connection into neurons. I think this has a number of advantages (see the commit message for 32a5873). It does make the mapping behaviour slightly more complicated, since there are now differences depending on both the pre and post objects.
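
To illustrate that mapping behaviour, here is a hypothetical sketch of the decision only; transform_location is an invented helper for illustration, not the nengo-loihi builder code.

```python
def transform_location(pre_on_chip, post_is_neurons):
    """Illustrative only: where a connection's transform/weights are applied."""
    if post_is_neurons:
        # Any connection into neurons now realizes its transform as
        # on-chip synaptic weights, regardless of where `pre` lives.
        return "chip"
    # For other connections the placement still depends on the pre object;
    # shown only to illustrate that the decision involves both pre and post.
    return "chip" if pre_on_chip else "host"
```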

@tbekolay force-pushed the defaults branch from 4f26653 to a400d03 on Jun 10, 2019

@tbekolay (Member) left a comment:

LGTM, merging once CI finishes.

@tbekolay force-pushed the defaults branch 4 times, most recently from 23d7dd5 to 96f056f on Jun 10, 2019

hunse and others added some commits Oct 15, 2018

Decrease lower bound on default intercept
Since intercepts are applied after the encoders, it's only the upper
bound that needs to be less than one to avoid high gains. The lower
bound should go to -1.

This changed the intercepts for test_nengo_comm_channel_compare,
causing it to fail. The test was fragile because it did not apply
sufficient filtering to the output. The new test filters the outputs
more and can thus have tighter tolerances (passing on 10 seeds).
Apply neuron input transform on-chip
The transform is now applied on chip when doing a connection from
a host object to on-chip neurons. This helps avoid scaling issues
(e.g. if the transform has output outside [-1, 1]) and keeps large
transforms off the host (transforms into neurons are often large).
Mark sparse tests unsupported
And raise decent errors as soon as possible.

Co-authored-by: Trevor Bekolay <tbekolay@gmail.com>

@tbekolay force-pushed the defaults branch from 96f056f to bc2da42 on Jun 11, 2019

@tbekolay merged commit bc2da42 into master on Jun 11, 2019

3 checks passed

Travis CI - Branch: Build Passed
codecov/patch: 100% of diff hit (target 100%)
codecov/project: 99.6% (+<.01%) compared to 598e0f5

@tbekolay deleted the defaults branch on Jun 11, 2019

@arvoelke referenced this pull request on Aug 7, 2019 (5 of 6 tasks complete)