Do Node filtering before interneurons #95

Merged
merged 1 commit on Oct 9, 2018

Conversation

@hunse (Collaborator) commented Sep 27, 2018

This makes Nodes like Ensembles in this respect, and should avoid
problems with different taus on multiple inputs to an ensemble.

Fixes #94.

The other upside to this is that we should avoid the warning that we added for not having a filter on e.g. an input node.

The downside is that it gives the user less control over the filtering on nodes going into ensembles (there's always that 0.005 filter, though the length of that filter is configurable now if you know where to look). However, this same problem has always happened with ensembles to ensembles.

I've based this on #55, since it gives easier access to inter_tau.

@arvoelke (Contributor)

Thanks for the quick fix! I'll try this branch out when I've got the chance.

there's always that 0.005 filter, though the length of that filter is configurable now if you know where to look. However, this same problem has always happened with ensembles to ensembles.

Could you elaborate? Is this the additional filtering from interneurons? Where is this configured? If using full weights, is this still a problem?

@arvoelke (Contributor) commented Sep 28, 2018

Confirmed that this branch fixes the example in #94. However, if I then try to get rid of the interneurons on the recurrent connection by changing the solver to weights=True, the same exception comes up again:

import nengo
import nengo_loihi as loihi

tau = 0.1

with nengo.Network() as model:
    u = nengo.Node(0)
    x = nengo.Ensemble(100, 1)
    nengo.Connection(u, x, synapse=tau)
    nengo.Connection(x, x, synapse=tau,
                     solver=nengo.solvers.LstsqL2(weights=True))

with loihi.Simulator(model) as sim:
    pass
~/CTN/nengo-loihi/nengo_loihi/simulator.py in __init__(self, network, dt, seed, model, precompute, target)
    182 
    183             # Build the network into the model
--> 184             self.model.build(network)
    185 
    186         self._probe_outputs = self.model.params

~/CTN/nengo-loihi/nengo_loihi/builder.py in build(self, obj, *args, **kwargs)
    129 
    130     def build(self, obj, *args, **kwargs):
--> 131         built = self.builder.build(self, obj, *args, **kwargs)
    132         if self.build_callback is not None:
    133             self.build_callback(obj)

~/CTN/nengo-loihi/nengo_loihi/builder.py in build(cls, model, obj, *args, **kwargs)
    156                 "Cannot build object of type %r" % type(obj).__name__)
    157 
--> 158         return cls.builders[obj_cls](model, obj, *args, **kwargs)
    159 
    160     @classmethod

~/CTN/nengo-loihi/nengo_loihi/builder.py in build_network(model, network)
    206     logger.debug("Network step 3: Building connections")
    207     for conn in network.connections:
--> 208         model.build(conn)
    209 
    210     logger.debug("Network step 4: Building probes")

~/CTN/nengo-loihi/nengo_loihi/builder.py in build(self, obj, *args, **kwargs)
    129 
    130     def build(self, obj, *args, **kwargs):
--> 131         built = self.builder.build(self, obj, *args, **kwargs)
    132         if self.build_callback is not None:
    133             self.build_callback(obj)

~/CTN/nengo-loihi/nengo_loihi/builder.py in build(cls, model, obj, *args, **kwargs)
    156                 "Cannot build object of type %r" % type(obj).__name__)
    157 
--> 158         return cls.builders[obj_cls](model, obj, *args, **kwargs)
    159 
    160     @classmethod

~/CTN/nengo-loihi/nengo_loihi/builder.py in build_connection(model, conn)
    706         mid_cx.add_axons(ax)
    707 
--> 708         post_cx.configure_filter(post_tau, dt=model.dt)
    709 
    710         if conn.learning_rule_type is not None:

~/CTN/nengo-loihi/nengo_loihi/loihi_cx.py in configure_filter(self, tau_s, dt, default)
    182         if self.decayU_set and not np.allclose(decayU, self.decayU):
    183             raise BuildError(
--> 184                 "Cannot change tau_s on already configured neurons")
    185 
    186         self.decayU[:] = decayU

BuildError: Cannot change tau_s on already configured neurons

And FYI, @xchoo's example in #90 breaks on this branch for the same reason.

@hunse (Collaborator, Author) commented Sep 28, 2018

Yeah, so you're running into one of the constraints of the board. One of the reasons we use interneurons is that they allow individual filtering for each connection.

You can play around with interneurons by setting the parameters in builder.py:Model:

import nengo_loihi

loihi_model = nengo_loihi.builder.Model(dt=0.001)
loihi_model.inter_tau = 0.0  # remove interneuron filter
sim = nengo_loihi.Simulator(network, model=loihi_model)  # `network` is your nengo.Network

(I haven't tested that, so there could be a typo)

However, that still won't let you use recurrent weight solvers, unless you set inter_tau to be the filter on your recurrent connection.
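
For example, putting that together with your weights=True example from above, matching inter_tau to the recurrent synapse would look something like this (also untested, so treat it as a sketch):

import nengo
import nengo_loihi

tau = 0.1

with nengo.Network() as net:
    u = nengo.Node(0)
    x = nengo.Ensemble(100, 1)
    nengo.Connection(u, x, synapse=tau)
    nengo.Connection(x, x, synapse=tau,
                     solver=nengo.solvers.LstsqL2(weights=True))

# Make the interneuron filter match the recurrent synapse
loihi_model = nengo_loihi.builder.Model(dt=0.001)
loihi_model.inter_tau = tau

with nengo_loihi.Simulator(net, model=loihi_model) as sim:
    sim.run(1.0)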

You could also do something like this:

with nengo.Network() as model:
    u = nengo.Node(0)
    a = nengo.Ensemble(100, 1)
    x = nengo.Ensemble(100, 1)
    nengo.Connection(u, a, synapse=None)
    nengo.Connection(a, x, synapse=tau, 
                     solver=nengo.solvers.LstsqL2(weights=True))
    nengo.Connection(x, x, synapse=tau,
                     solver=nengo.solvers.LstsqL2(weights=True))

Ultimately, we could set up the inter_tau filter in your weights example so that it's just a default and can be overridden. In that case, though, the input would end up getting filtered twice (on the connection from the node to the interneurons, and from the interneurons to the ensemble), which might be surprising to users.

@tcstewar (Collaborator)

Yeah, so you're running into one of the constraints of the board.

I don't believe it's a constraint of the board -- it's a constraint of our current implementation. This could be implemented on the board (I believe) by having a two-compartment neuron, each compartment getting a different tau, and just having pure addition between the two compartments.

@hunse mentioned this pull request Sep 28, 2018
@arvoelke (Contributor) commented Sep 28, 2018

Could the example in my previous post somehow be mapped like:

u -> H(0) -> inter -> H(tau) -> x -> H(0) -----------.
                                ^                    |
                                '- H(tau) <- inter <-'

excuse my weird drawing. But, basically put the taus on the outgoing interneuron filters, and set their incoming filters to zero? Moreover, could then use weights=True on the recurrent connection with the same filter. I think this is what you were saying above? Happy to discuss why I'd like to do this offline, and just consider this kind of customization a feature request from a user. :)

@tbekolay (Member) left a comment

Pushed a commit with some changes, with which this LGTM. Will merge on your OK @hunse!

@tbekolay (Member)

This change impacts learning networks; test_pes_comm_channel in the 1-dimensional case started to fail on the emulator and hardware, but it only failed by a small margin, so in the interest of getting the release out the door I'm going to adjust the tolerances and merge. However, if anyone notices weird learning behavior, we should take a closer look at how this PR changes those learning networks.

@tbekolay (Member)

@arvoelke Could you make a new issue for the weights=True case? Including your lovely drawing.

@tbekolay (Member)

Also, in case anyone's trying to debug the learning differences introduced in this branch, for some reason performance is worse for larger numbers of neurons.

@tbekolay (Member) left a comment

In addition to the learning changes, this also breaks test_node_no_synapse_warning because now, I guess, we are always adding a synapse on connections from nodes? So we instead need to look at a different connection (I guess the one from c.pre to ens in the diff) to see if a synapse has been added? I am using question marks because I don't quite understand, so I think I am not going to merge this right now and do the 0.3.0 release without it. Once we get this figured out we can do a quick 0.3.1 or whatever.

@tcstewar (Collaborator)

Once we get this figured out we can do a quick 0.3.1 or whatever.

Is there any possibility that we could rush out a quick 0.3.1 with this fix today? Without it, I can't do many demos that use non-default synapses... I can look into the test_pes_comm_channel and test_node_no_synapse_warning test failures today.

@tbekolay (Member) commented Sep 30, 2018

I rebased this. Will do a temporary release (v0.4.0.dev0) which I will delete once we merge this and do a real release.

Note that since v0.4.0.dev0 is a "pre-release", to install it you'll need to do

pip install --pre nengo-loihi

and if you already have it installed, you can add the --upgrade --no-deps flags also.
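
That is, the full command for upgrading an existing install would be:

pip install --pre --upgrade --no-deps nengo-loihi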

@hunse (Collaborator, Author) commented Oct 1, 2018

Could the example in my previous post somehow be mapped like:

u -> H(0) -> inter -> H(tau) -> x -> H(0) -----------.
                                ^                    |
                                '- H(tau) <- inter <-'

excuse my weird drawing. But, basically put the taus on the outgoing interneuron filters, and set their incoming filters to zero? Moreover, could then use weights=True on the recurrent connection with the same filter. I think this is what you were saying above?

So something like that would be possible before this PR if you use a weight solver (though of course it wouldn't have the interneurons in the feedback connection).

The problem with doing things that way in general is that it then requires all connections to an ensemble to have the same tau. For example, you couldn't do a normal integrator network like this:

with nengo.Network():
    u = nengo.Node([0])
    e = nengo.Ensemble(10, 1)
    nengo.Connection(u, e)
    nengo.Connection(e, e, synapse=0.1)

unless you set the synapse on the first connection to 0.1 as well (which we don't typically do).
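
That is, the integrator would have to be written with the input filtering forced to match the recurrent filter:

import nengo

with nengo.Network():
    u = nengo.Node([0])
    e = nengo.Ensemble(10, 1)
    nengo.Connection(u, e, synapse=0.1)  # forced to match the recurrent tau
    nengo.Connection(e, e, synapse=0.1)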

@xchoo (Member) commented Oct 2, 2018

@tbekolay asked me to post this here:
If you try to run the following network without #95, any jobs you run on the loihi hardware will hang at the lakemont_driver call.

import nengo

with nengo.Network() as model:
    stim = nengo.Node(1)
    ens = nengo.Ensemble(50, 1)

    # Comment out lines below to make it run, leave uncommented to make it hang
    stim_inh = nengo.Node(lambda t: t)
    nengo.Connection(stim, ens.neurons, transform=[[-2]] * ens.n_neurons)

@tbekolay (Member) commented Oct 2, 2018

I asked Xuan to post it so we could add it as a test. Note that it only hangs if precompute=True.
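
That is, the hanging case is roughly the following (a sketch; `model` here is the network from Xuan's comment above):

import nengo_loihi

# Reported to hang on the Loihi hardware with precompute=True (and to run otherwise)
with nengo_loihi.Simulator(model, precompute=True) as sim:
    sim.run(1.0)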

@hunse (Collaborator, Author) commented Oct 2, 2018

Ok, I've fixed up the learning stuff. The main issue was that the synapses in the test were different for different probes, such that the pre and post populations were being delayed much more than the stimulus. With that fixed, I could actually reduce the tolerances significantly.

I just removed the warning and the test in question, since with this new setup it's impossible to trip it.

(I haven't added the Xuan test yet.)

@hunse (Collaborator, Author) commented Oct 2, 2018

So I just tried Xuan's test. I found that it only hangs for me if I use a longer simulation time (if I run for 1s it hangs, but not for 0.1s).

Personally, I think it is something that would be good to look more into. But I don't think it's a good test. We don't really have any idea what it's testing, or if this PR actually fixes it or just shifts the problem (e.g. maybe even longer simulation times still make it hang, or more neurons, or something). I'd prefer to make it an issue and merge this as-is.

@tbekolay (Member) commented Oct 2, 2018

I just removed the warning and the test in question, since with this new setup it's impossible to trip it.

What does that mean? Setting synapse=None on that connection has no effect? I think that would be surprising to users, but I guess the node outputs are converted to spikes anyway which is a surprise. But yeah, what changed here such that this warning can no longer occur?

@hunse (Collaborator, Author) commented Oct 2, 2018

Instead of doing node->no synapse->interneurons->synapse->ensemble, we now do node->synapse->interneurons->inter_tau->ensemble. Since only the second part of that pipeline is on the chip, and thus being built by the builder, with this PR all it ever sees is inter_tau synapses coming from Nodes. So they can never be None. The synapse is still applied as the user would expect.
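
Or, in the arrow notation from earlier in the thread:

before:  node --(no synapse)--> inter --(synapse)----> ensemble
now:     node --(synapse)-----> inter --(inter_tau)--> ensemble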

@hunse assigned tbekolay and unassigned hunse and tcstewar Oct 2, 2018
@tbekolay removed their assignment Oct 9, 2018
Merged commit message:

This makes Nodes like Ensembles in this respect, and should avoid
problems with different taus on multiple inputs to an ensemble.

The warning for having no synapse on a connection from a Node is
now impossible to trip since Nodes are always off the chip and
the splitter will always use an `inter_tau` synapse. So I removed
the warning and the associated test.

Fixes #94.
@arvoelke (Contributor)

Note that the above code examples involving setting inter_tau no longer do the correct thing as of #132. Related discussion is in #97.
