Do Node filtering before interneurons #95
Conversation
Thanks for the quick fix! I'll try this branch out when I've got the chance.
Could you elaborate? Is this the additional filtering from interneurons? Where is this configured? If using full weights, is this still a problem?
Confirmed that this branch fixes the example in #94. However, if I then try to get rid of the interneurons on the recurrent connection by changing the solver to `LstsqL2(weights=True)`, it breaks:

```python
import nengo
import nengo_loihi as loihi  # alias assumed from the `loihi.Simulator` usage below

tau = 0.1

with nengo.Network() as model:
    u = nengo.Node(0)
    x = nengo.Ensemble(100, 1)
    nengo.Connection(u, x, synapse=tau)
    nengo.Connection(x, x, synapse=tau,
                     solver=nengo.solvers.LstsqL2(weights=True))

with loihi.Simulator(model) as sim:
    pass
```

And FYI, @xchoo's example in #90 breaks on this branch for the same reason.
Yeah, so you're running into one of the constraints of the board. One of the reasons we use interneurons is that they allow individual filtering for each connection. You can play around with interneurons by setting the parameters on the builder model:

```python
loihi_model = nengo_loihi.builder.Model(dt=0.001)
loihi_model.inter_tau = 0.0  # remove interneuron filter
sim = nengo_loihi.Simulator(network, model=loihi_model)
```

(I haven't tested that, so there could be a typo.) However, that still won't let you use recurrent weight solvers, unless you set [...].

You could also do something like this:

```python
with nengo.Network() as model:
    u = nengo.Node(0)
    a = nengo.Ensemble(100, 1)
    x = nengo.Ensemble(100, 1)
    nengo.Connection(u, a, synapse=None)
    nengo.Connection(a, x, synapse=tau,
                     solver=nengo.solvers.LstsqL2(weights=True))
    nengo.Connection(x, x, synapse=tau,
                     solver=nengo.solvers.LstsqL2(weights=True))
```

Ultimately, we could set up the [...]
I don't believe it's a constraint of the board -- it's a constraint of our current implementation. This could be implemented on the board (I believe) by having a two-compartment neuron, each compartment getting a different tau, and just having pure addition between the two compartments.
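To make that idea concrete, here is a minimal discrete-time sketch of the two-compartment scheme (plain NumPy, not the Loihi API; the filter form, rates, and spike inputs are all illustrative assumptions): each compartment lowpass-filters its own input with its own tau, and the soma input is just the sum of the two.

```python
import numpy as np

dt = 0.001
tau_a, tau_b = 0.005, 0.1  # a different time constant per compartment
decay_a = np.exp(-dt / tau_a)
decay_b = np.exp(-dt / tau_b)

rng = np.random.default_rng(0)
n_steps = 1000
spikes_a = rng.random(n_steps) < 0.1  # hypothetical input spike trains,
spikes_b = rng.random(n_steps) < 0.1  # one per compartment

u_a = u_b = 0.0
soma_input = np.zeros(n_steps)
for i in range(n_steps):
    # Each compartment applies its own lowpass filter to its own input...
    u_a = decay_a * u_a + (1 - decay_a) * (spikes_a[i] / dt)
    u_b = decay_b * u_b + (1 - decay_b) * (spikes_b[i] / dt)
    # ...and the combination between compartments is pure addition.
    soma_input[i] = u_a + u_b
```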
Could the example in my previous post somehow be mapped like:

[diagram omitted]

Excuse my weird drawing. But basically, put the taus on the outgoing interneuron filters, and set their incoming filters to zero? Moreover, we could then use [...]
Pushed a commit with some changes, with which this LGTM. Will merge on your OK, @hunse!
This change impacts learning networks; [...]
@arvoelke Could you make a new issue for the [...]?
Also, in case anyone's trying to debug the learning differences introduced in this branch, for some reason performance is worse for larger numbers of neurons.
(force-pushed from 9e788bd to f007270)
In addition to the learning changes, this also breaks `test_node_no_synapse_warning`, because now, I guess, we are always adding a synapse on connections from nodes? So we instead need to look at a different connection (I guess the one from `c.pre` to `ens` in the diff) to see if a synapse has been added? I am using question marks because I don't quite understand, so I think I am not going to merge this right now and do the 0.3.0 release without it. Once we get this figured out we can do a quick 0.3.1 or whatever.
Is there any possibility that we could rush out a quick 0.3.1 with this fix today? Without it, I can't do many demos that use non-default synapses... I can look into the [...]
I rebased this. Will do a temporary release ([...]). Note that since [...], you can install it with `pip install --pre nengo-loihi`, and if you already have it installed, you can add the `--upgrade` flag.
So something like that would be possible before this PR if you use a weight solver (though of course it wouldn't have the interneurons in the feedback connection). The problem with doing things that way in general is that it then requires all connections to an ensemble to have the same `synapse`, so this won't work:

```python
with nengo.Network():
    u = nengo.Node([0])
    e = nengo.Ensemble(10, 1)
    nengo.Connection(u, e)
    nengo.Connection(e, e, synapse=0.1)
```

unless you set the synapse on the first connection to 0.1 as well (which we don't typically do).
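For reference, the variant described above (setting the synapse on the first connection to 0.1 as well, so both connections share one synapse) would look like this sketch:

```python
import nengo

with nengo.Network():
    u = nengo.Node([0])
    e = nengo.Ensemble(10, 1)
    nengo.Connection(u, e, synapse=0.1)  # matched to the recurrent synapse
    nengo.Connection(e, e, synapse=0.1)
```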
@tbekolay asked me to post this here:

[script not preserved]
I asked Xuan to post it so we could add it as a test. Note that it only hangs if [...].
OK, I've fixed up the learning stuff. The main issue was that the synapses in the test were different for different probes, such that the pre and post populations were being delayed much more than the stimulus. With that fixed, I could actually reduce the tolerances significantly. I just removed the warning and the test in question, since with this new setup it's impossible to trip it. (I haven't added the Xuan test yet.)
So I just tried Xuan's test. I found that it only hangs for me if I use a longer simulation time (if I run for 1s it hangs, but not for 0.1s). Personally, I think it is something that would be good to look more into. But I don't think it's a good test. We don't really have any idea what it's testing, or if this PR actually fixes it or just shifts the problem (e.g. maybe even longer simulation times still make it hang, or more neurons, or something). I'd prefer to make it an issue and merge this as-is.
What does that mean? Setting [...]?
Instead of doing node->no synapse->interneurons->synapse->ensemble, we now do node->synapse->interneurons->`inter_tau`->ensemble. Since only the second part of that pipeline is on the chip, and thus being built by the builder, with this PR all it ever sees is `inter_tau` synapses coming from Nodes, so they can never be `None`. The synapse is still done as the user would expect.
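As a rough sketch of that new pipeline in plain Nengo terms (the `inter` ensemble and the values here are stand-ins for what the splitter builds, not the actual nengo_loihi code):

```python
import nengo

tau = 0.005        # the user's synapse on the node connection
inter_tau = 0.005  # interneuron filter time constant (stand-in value)

with nengo.Network() as net:
    node = nengo.Node(0)
    inter = nengo.Ensemble(100, 1)  # stands in for the interneurons
    ens = nengo.Ensemble(100, 1)

    # The user's synapse is now applied off-chip, before the interneurons,
    # so the on-chip builder only ever sees the inter_tau synapse below.
    nengo.Connection(node, inter, synapse=tau)       # off chip: user's synapse
    nengo.Connection(inter, ens, synapse=inter_tau)  # on chip: always inter_tau
```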
This makes Nodes like Ensembles in this respect, and should avoid problems with different taus on multiple inputs to an ensemble. The warning for having no synapse on a connection from a Node is now impossible to trip since Nodes are always off the chip and the splitter will always use an `inter_tau` synapse. So I removed the warning and the associated test. Fixes #94.
The other upside to this is that we should avoid the warning that we added for not having a filter on e.g. an input node.
The downside is that it gives the user less control over the filtering on nodes going into ensembles (there's always that 0.005 filter, though the length of that filter is configurable now if you know where to look). However, this same problem has always existed for ensemble-to-ensemble connections.
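Concretely, that 0.005 filter appears to correspond to the `inter_tau` parameter from the earlier comment; a sketch of changing it (the same untested builder-model API as above):

```python
import nengo_loihi

loihi_model = nengo_loihi.builder.Model(dt=0.001)
loihi_model.inter_tau = 0.01  # lengthen the default node filter
```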
I've based this on #55, since it gives easier access to `inter_tau`.