Interneuron improvements #132

Merged: 1 commit merged into master from add-inputs on Jan 20, 2019

Conversation

@hunse (Contributor) commented on Oct 25, 2018

This helps to address #129.

Rather than using homogeneous interneurons with noise, this uses heterogeneous interneurons. However, they are designed so that they all use the same decoder, which means that the size of encoders in target populations can remain small (otherwise it would scale with the number of interneurons).

Based off #124.
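(For intuition, a hedged toy illustration of the encoder-size point above; the shapes and names are placeholders, not the actual nengo_loihi data structures.)

```python
import numpy as np

# Toy shapes only. Because the heterogeneous interneurons all share one
# decoder, their output can be collapsed back to the represented dimensions
# (assumed here to be one on/off channel pair per dimension), so the target
# population's encoder-like weights stay small. If each interneuron carried
# its own decoder, the target's input weights would instead scale with the
# number of interneurons.
d, n_inter, n_target = 1, 200, 100
shared_decoder_weights = np.zeros((n_target, 2 * d))     # on/off channels per dim
per_interneuron_weights = np.zeros((n_target, n_inter))  # what this avoids
print(shared_decoder_weights.shape, per_interneuron_weights.shape)
```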

@hunse referenced this pull request on Nov 1, 2018: Integrator accuracy #124 (merged)

@tbekolay force-pushed the master branch from 701f997 to 1fe237c on Nov 11, 2018

@hunse force-pushed the add-inputs branch from 4a538fa to ea4fa48 on Dec 6, 2018

@hunse referenced this pull request on Dec 21, 2018: Refactor files, emulator, api #159 (merged; 7 of 7 tasks complete)

@hunse referenced this pull request on Jan 13, 2019: Revisit xfailed tests #166 (open)

@tbekolay (Member) commented on Jan 14, 2019

I don't want this to derail the PR, but I did want to note that the first time someone mentioned "interneurons" to me, I was somewhat confused: the name doesn't make it clear that they are used for NEF decoding, so I assumed they were inhibitory interneurons as we've used in some other contexts (e.g., respecting Dale's principle). Yes, I know that not all biological interneurons are inhibitory, but I also haven't seen the term "interneuron" used before in the sense of decoding values. Since this PR adds several interneuron-related elements to the API, it might be a good time to decide explicitly whether we're OK with the name "interneuron" or whether we can come up with something clearer. A few alternative names come to mind:

  • Decode neurons / interneurons
  • NEF neurons / interneurons
  • Relay neurons / interneurons (this is also a neuroscience term)

My vote would be for decode neurons as it's more clear what their purpose is (though obviously it's never going to be perfectly clear with one word), and isn't a term already used in neuroscience.

@hunse force-pushed the add-inputs branch from 7a3b89d to 232890f on Jan 14, 2019

@tcstewar (Contributor) commented on Jan 14, 2019

> My vote would be for decode neurons as it's more clear what their purpose is (though obviously it's never going to be perfectly clear with one word), and isn't a term already used in neuroscience.

+1 to that suggestion!

@tbekolay (Member) left a comment

I made a bunch of inline comments, mostly pretty nitpicky, but some have more substance. It's a fair bit, so I'd be happy to make the changes myself while it's fresh in my mind if you have other things you're working on, @hunse.

Also, we pretty much have consensus on switching to "decode neurons" (though we should make sure there are no dissenting opinions), so I would do that in this PR. I also think we should remove the #148 and #151 commits to see if that causes test failures (and prioritize them if it does).

Review thread on nengo_loihi/builder.py (outdated, resolved):
self._inter_rate = None

# number of inter neurons
self.n = n

@tbekolay (Member) commented on Jan 15, 2019

Personally I would prefer this be n_neurons; it's impossible to grep through files for n if I want to know where this is used. It also means you wouldn't need an explanatory comment, as it would be clear from the name.

@hunse (Author, Contributor) commented on Jan 15, 2019

So n isn't actually the total number of neurons. It's the number of neurons per dimension per on/off pair (at least in all the interneuron types we have so far). This is because all our interneurons use a system of on/off pairs and separate interneurons for each dimension. Theoretically, you could have interneurons where it's just one big group of neurons, and each responds to multiple dimensions (just like an ensemble with multiple dimensions).

So we could call it neurons_per_dimension, but even that wouldn't be quite accurate because the actual neurons per dimension would be twice that (because we always have on/off pairs).
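(A back-of-the-envelope sketch of the counting described above; the function and argument names are placeholders, not the PR's API.)

```python
# Each represented dimension gets `n` on/off pairs of interneurons, so the
# total count is 2 * n * dimensions; `n` itself is neither the total nor
# the per-dimension neuron count.
def total_interneurons(n_pairs_per_dim, dimensions):
    return 2 * n_pairs_per_dim * dimensions

assert total_interneurons(10, 3) == 60  # 10 on/off pairs per dim, 3 dims
```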

@tbekolay (Member) commented on Jan 16, 2019

Maybe n_pairs or pairs_per_dimension? I'm not okay with the one-character n.

@tbekolay (Member) commented on Jan 16, 2019

I'd also be okay with size, I think.

Three resolved review threads on nengo_loihi/builder.py (outdated).
@@ -364,16 +585,18 @@ def build_ensemble(model, ens):
bias=bias)


def build_interencoders(model, ens):
"""Build encoders accepting on/off interneuron input."""
def build_inter_encoders(model, ens, kind='inter_encoders'):

@tbekolay (Member) commented on Jan 15, 2019

Why not pass in the Interneurons instance here rather than a string? With the class hierarchy, it should be fine to pass in any instance.

@hunse (Author, Contributor) commented on Jan 15, 2019

Because the string also gets used for naming the synapse, which is then used to find the synapse by name when we're making connections.
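(A self-contained toy sketch of the naming pattern described above; none of these classes are the actual nengo_loihi builder objects.)

```python
class ToySynapse:
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights

class ToyCompartmentGroup:
    def __init__(self):
        self.named_synapses = {}

    def add_synapse(self, synapse):
        self.named_synapses[synapse.name] = synapse

def build_inter_encoders(cx, encoders, kind='inter_encoders'):
    # The `kind` string both labels the synapse and becomes the key that
    # later connection-building code uses to find it by name.
    cx.add_synapse(ToySynapse(name=kind, weights=encoders))

cx = ToyCompartmentGroup()
build_inter_encoders(cx, encoders=[[1.0], [-1.0]])
assert 'inter_encoders' in cx.named_synapses  # connections look it up by name
```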

Resolved review thread on nengo_loihi/splitter.py (outdated).
Resolved review thread on nengo_loihi/tests/test_interneurons.py (outdated).
target = sim.data[p_stim_a] + sim.data[p_stim_b]

error = np.abs(sim.data[p_c][tmask] - target[tmask]).mean()
print(error)

@tbekolay (Member) commented on Jan 15, 2019

I'm guessing this was here to get good tolerance values initially? I would probably remove it in favor of something like plt.title("error=%g" % error).

@hunse (Author, Contributor) commented on Jan 15, 2019

Partly to get good initial values, but also because this is somewhat a benchmark in addition to a test. I use this test to make changes to different interneuron classes and see how it affects the performance.

My plan was to use the logger fixture once the PR where I add that gets merged, but I still have some problems with the logger (i.e. it would be nice to be able to just print everything that it logs to the screen, with a command line option like --log-to-screen).

@tbekolay (Member) commented on Jan 16, 2019

Yeah the plan is to get rid of logger because pytest has its own logging stuff that works better than what we had (minus logging to individual files, but we rarely used that).
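(For reference, a minimal sketch of pytest's built-in log capture, which covers the print-to-screen use case: records captured this way are shown live when running pytest with --log-cli-level=INFO. The test name and error value below are hypothetical, not from this PR.)

```python
import logging

logger = logging.getLogger(__name__)

def test_interneuron_error(caplog):
    error = 0.05  # hypothetical value computed by the test
    with caplog.at_level(logging.INFO):
        logger.info("error=%0.3g", error)
    # Records are available to the test via caplog, and printed to the
    # terminal when pytest is run with --log-cli-level=INFO.
    assert "error=" in caplog.text
```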

nengo.Connection(b, c)

out_synapse = nengo.Alpha(0.03)
stim_synapse = out_synapse.combine(

@tbekolay (Member) commented on Jan 15, 2019

This feels weird. Why is stim_synapse like this?

@hunse (Author, Contributor) commented on Jan 15, 2019

Each time there's a connection in the network, there's two Lowpass(0.005) filters applied (one from the ensemble and one from the interneurons). Two identical lowpass filters make an alpha filter, and I combine one of them for each connection. This replicates the filtering and delay that happens in the network, so that the stim signal and recorded output signal are comparable.
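(A minimal sketch of that filtering argument, assuming the same LinearFilter.combine API used in the snippet above; the exact filters in the test may differ.)

```python
import nengo

# Each hop applies a Lowpass(0.005): once at the pre ensemble and once at
# the interneurons. Two identical lowpass filters compose into an alpha
# filter, so filtering the stimulus the same way (plus the probe's own
# Alpha(0.03)) makes it directly comparable to the probed output.
out_synapse = nengo.Alpha(0.03)
stim_synapse = out_synapse.combine(nengo.Lowpass(0.005)).combine(nengo.Lowpass(0.005))
```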

@hunse force-pushed the add-inputs branch from 232890f to fb08518 on Jan 15, 2019

@hunse (Author, Contributor) commented on Jan 15, 2019

Ok, I've made almost all the requested changes (except one, where removing a function precluded doing it).

Two things left to do are make the name change to DecoderNeurons, and test this on the chip (some of the tolerances on the new tests might have to be adjusted).

@hunse (Author, Contributor) commented on Jan 16, 2019

There are two changed lines that aren't currently covered by tests.

One is in the builder function for probes: if isinstance(probe.target, Node). I don't think this can ever be true, because all nodes are off-chip and so whenever we probe them the probe will also be off-chip.

The other is a BuildError in the splitter, basically checking that the node_neurons variable is not None. I think it makes sense to allow node_neurons to be None in the splitter in general, since if you were using the splitter to split something that doesn't have nodes, it wouldn't need this. However, we're always passing something for node_neurons when we actually use the splitter in Simulator, so this line will never get hit.

@tbekolay (Member) commented on Jan 16, 2019

I pushed a fixup (8acfa65) to change the class hierarchy around a little bit (simplified to remove EqualDecoderDecodeNeurons, as it seemed to be there only so that the Preset classes would not have a get_ensemble method). Can you take a look and let me know if I missed or misinterpreted anything, @hunse? Tests are all still passing on the emulator. I'll add tests for the lines not currently covered later today.

@hunse (Author, Contributor) commented on Jan 16, 2019

The problem with basing the Preset classes off of OnOffDecodeNeurons is that now they have a get_ensemble method that won't work for them. So they'd have to re-implement get_ensemble to raise a NotImplementedError, which is a bit strange to me. (Actually, I guess in the case of those specific Preset classes, you would get a NotImplementedError anyway, because get_ensemble checks that n == 1. So maybe it's okay for now, as long as nobody creates a new Preset class with n == 1 and tries to use the get_ensemble method.)

I think it would be possible to re-write all of them to use the same get_ensemble and get_cx methods, but I'm not sure if we want to worry about that at this point.
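(For orientation, a hedged sketch of the class relationships being discussed, pieced together from this thread and the final commit message; the method signatures and bodies here are placeholders, not the actual implementation.)

```python
class DecodeNeurons:
    """On-chip neurons used to decode and re-encode NEF-style values."""
    def get_cx(self, weights):
        raise NotImplementedError()

    def get_ensemble(self, dim):
        raise NotImplementedError()

class OnOffDecodeNeurons(DecodeNeurons):
    """Identical on/off pairs per dimension; default for node input spikes."""

class NoisyDecodeNeurons(OnOffDecodeNeurons):
    """On/off pairs with injected noise; the old ensemble->ensemble default."""

class Preset5DecodeNeurons(OnOffDecodeNeurons):
    """Five heterogeneous on/off pairs per dimension with preset values."""

class Preset10DecodeNeurons(OnOffDecodeNeurons):
    """Ten heterogeneous on/off pairs per dimension; the new ensemble->ensemble default."""
```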

@hunse (Author, Contributor) commented on Jan 16, 2019

Ok, it was actually pretty easy to have them use the same get_cx function. I haven't gotten the get_ensemble to work yet; I'm going to look for a few more minutes and see.

@hunse (Author, Contributor) commented on Jan 16, 2019

I've fixed up get_ensemble, too. It still doesn't work for pairs_per_dim > 1, for the reason noted in the code, but if we resolve that issue then I think it should work.

I'm happy with this, now!

@tbekolay (Member) commented on Jan 16, 2019

> I'm happy with this, now!

Ditto, the classes look great. I'll get started testing the diff lines not yet covered.

@tbekolay (Member) commented on Jan 17, 2019

Added in 7d8e879. Will rebase once all CI passes and merge if there are no issues in the rebase.

@tbekolay (Member) commented on Jan 17, 2019

> I think I would prefer %0.3g or %0.2e

Sure, I'll change to %0.3g

@tbekolay force-pushed the add-inputs branch 3 times, most recently from 0f71682 to 06d7b96 on Jan 17, 2019

@tbekolay (Member) commented on Jan 17, 2019

The rebase was not difficult, but did require some changes, and I wrote a changelog entry and a consolidated commit message. Will wait for @hunse to sign off on the changelog entry and commit message before merging.

@hunse (Author, Contributor) commented on Jan 17, 2019

The changelog entry looks good. The only error in the commit message is that NoisyDecodeNeurons was actually the default behaviour before this commit for Ensemble->Ensemble connections. The OnOffDecodeNeurons are only used to generate input spikes, both before and after this commit.

@hunse (Author, Contributor) commented on Jan 17, 2019

I added a docstring for noise_exp so that it's clear that it's a base-10 float.

@hunse (Author, Contributor) commented on Jan 17, 2019

There's one test that I've found failing on the chip so far, which is test_communication.test_neurons2node. The idea of the test is that you've got a sine wave input to an ensemble, whose neurons then connect out to a node which records the neural data. It's basically to test that a neuron->node connection works fine.

What's actually asserted, though, is that the neurons with positive encoders only fire for the positive part of the sine wave, and neurons with negative encoders only fire for the negative part. It seems like in this branch, on the chip, there are two stray spikes that happen in positive-encoder neurons during the negative part of the sine wave. Otherwise, the behaviour is identical between emulator and chip, and exactly as expected.

Really, I don't think this is a big cause for concern, and we could just have slightly looser tolerances on the test, since everything is behaving pretty much as it should. But I am annoyed that the emulator is not matching the chip perfectly here, and I'd like to look into that a bit more.

@hunse (Author, Contributor) commented on Jan 17, 2019

Ok, so what seems to be happening here is that when we do spike probes with precompute=False (i.e. in snips), we probe the voltage and look for voltages that equal the refractory period * 128 (since Loihi uses the voltage to count the refractory period after a spike). In this test, with the new interneurons, it seems like two voltage values have just happened to hit this magic number (384 in this case) and we interpret that as a spike in our probe. Thus, it looks like the neuron has spiked, even though it hasn't.

If I run the test with precompute=True then it passes, because the Loihi spike probes do not have this problem.

So this is definitely unrelated to this PR, and I don't think we should try to fix it here. I would suggest adjusting the tolerances of the test to allow these couple of stray spikes, and make an issue for this. Or we could keep the test the same and just switch to precompute=True; maybe that's better, and it would speed up our test suite.
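(A hedged sketch of the failure mode described above, with made-up numbers; this is not the actual probe code.)

```python
import numpy as np

# With precompute=False, spikes are inferred from probed voltages by looking
# for the value Loihi uses while counting down the refractory period
# (refractory setting * 128). A voltage that merely happens to equal that
# value is then misread as a spike.
refract_setting = 3
spike_marker = refract_setting * 128        # 384, the "magic number" in this test
voltages = np.array([120, 384, 260])        # made-up trace; 384 is a coincidence
inferred_spikes = voltages == spike_marker  # -> [False, True, False]
print(inferred_spikes)
```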

@hunse (Author, Contributor) commented on Jan 18, 2019

I made issue #170 to track this problem, and changed the test to use precompute=True for now. Hopefully everything runs on the chip now!

@hunse (Author, Contributor) commented on Jan 18, 2019

I ran all the nengo_loihi tests on the chip, and they all pass for me.

I also ran the nengo tests on the chip, and they also appear to pass. I just want to double-check that it's only 70 tests that we expect to pass. (I got 70 passed, 508 skipped, 14 deselected, 198 xfailed on our server, and 70 passed, 645 skipped, 14 deselected, 198 xfailed on the Intel server, with no failures on either. I'm not sure why there's the discrepancy in numbers skipped; they're both using the same Nengo version, though one does look like it's from pip and the other from git.)

@tbekolay (Member) commented on Jan 20, 2019

> I'm not sure why there's the discrepancy in numbers skipped; they're both using the same Nengo version, though one does look like it's from pip and the other from git.

I think if you install from pip, you don't get the examples, so that might explain why it has fewer tests. 70 passing tests sounds about right.

All else sounds good, I'll do a quick double check of this then merge.

@tbekolay force-pushed the add-inputs branch from df2196c to eecc944 on Jan 20, 2019

Refactor interneurons, now called decode neurons
The biggest change is to be able to customize how many decode
neurons are used and how they are organized.
Additionally, the name "interneurons" was changed to "decode
neurons" to avoid confusion with the concept of inhibitory
interneurons in neuroscience.

This commit also removes sharing encoders between node inputs
and ensemble inputs. This greatly simplifies things, because these
no longer have to be identical.

Several decode neuron implementations are provided:
- OnOffDecodeNeurons uses identical pairs of on/off neurons for
  each dimension. This is the default for generating input spikes
  (for nodes).
- NoisyDecodeNeurons uses pairs of on/off neurons for each dimension
  with injected noise so that each on/off pair is different.
  This was the default behavior prior to this commit for decoded
  connections between two ensembles.
- Preset5DecodeNeurons uses five heterogeneous on/off pairs with
  pre-set values for each dimension.
- Preset10DecodeNeurons uses ten heterogeneous on/off pairs with
  pre-set values for each dimension. This is now the default behavior
  for decoded connections between two ensembles.

This commit uncovered a bug in test_communication.test_neurons2node
in which a magic number on the chip was being treated as a spike.
For now, we set `precompute=True` to avoid the bug.

Partially addresses #129.

@tbekolay force-pushed the add-inputs branch from eecc944 to 6e50632 on Jan 20, 2019

@tbekolay merged commit 6e50632 into master on Jan 20, 2019

@tbekolay deleted the add-inputs branch on Jan 20, 2019
