PES learning rule on decoders #202
Conversation
with OpenCL objects - Also renamed James's opencl stuff
Running nosetests in the project should now:
* complete successfully
* pass >= three tests
* skip the rest of them
Moved Sim* classes to simulator.py, but kept math in nonlinear.py
Mainly this is work in the Ensemble-creation logic.
New `make_input` using Direct mode makes the decoded signal show up a timestep later than before. This is tested directly in test_simulator, so I removed the assert from test_old_api.
Recent refactoring means that the old seed is interpreted differently, resulting in a slightly less-accurate fast probe. The other probes are as accurate as before.
Initial commit of neuron connection class.
Making tests into unit tests makes it easier to import them in nengo_ocl, swap the `Simulator` class attribute, and re-run all the tests with a different simulator. I'd like to do more tests this way, rather than having them be just `test_foo` functions at the module level of test files.
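For illustration, that pattern might look like the sketch below; the class name and the value of the `Simulator` attribute are assumptions for illustration, not the test suite's actual code.

```python
import unittest
import nengo

class SimulatorTestCase(unittest.TestCase):
    # Backends (e.g. nengo_ocl) can subclass this and override the
    # Simulator class attribute to re-run the same tests against a
    # different simulator implementation.
    Simulator = nengo.Simulator

    def test_runs(self):
        # Build a model here, then simulate it with self.Simulator
        # so that subclasses automatically exercise their own backend.
        pass
```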
It doesn't yet assert the correctness of the output signal, but it builds the right graph and provides an option to show the converged signals.
This turned out to be relatively easy, since the simulator was effectively allocating signal-like buffers for the input, output, and bias terms of nonlinearities anyway. This change also had the unintended but nice consequence of removing the need for separate neuron connections. These are now just encoders whose signal is the `output_signal` of some nonlinearity.
More to come; this should probably be included in the simulator_objects constructors, to help with debugging in general.
Filling in support for rate mode; trying to track down a bug in the handling of the LIF bias.
There is a bug in the current handling of the neuron bias; trying to find it.
Double-storage of bias_signal.value and bias caused incorrect simulation.
It is now an error to make a filter or transform whose output signal is a constant. model.filter() and model.transform() check this condition; it might be more correct to move this check to the constructors of the respective objects.
Encoder, Decoder, Transform and Filter coefficients have shapes, can potentially change over time (plasticity / adaptation / learning), and are indexed the same way as other signals in the OpenCL codebase. This change standardizes the handling of numbers within the simulator by making all these constants into signals.
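A minimal sketch of the idea; the `Signal` class below is a stand-in for the one in simulator_objects, and its constructor and attributes are assumptions:

```python
import numpy as np

class Signal:
    """Stand-in for the simulator's Signal class (assumed interface)."""
    def __init__(self, value):
        self.value = np.asarray(value, dtype=float)
        self.shape = self.value.shape

# Decoder coefficients wrapped as a Signal are indexed like any other
# signal and are mutable in place, so a plasticity rule can update
# them over time instead of treating them as fixed constants.
decoders = Signal(np.zeros((100, 2)))
```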
Given the just-recently-completed discussion on changing the syntax of the API, what are you thinking of for this syntax? Will it be:

```python
with model:
    # Create a modulated connection between the 'pre' and 'post' ensembles
    nengo.Connection(pre, post, function=lambda x: -1 * np.ones(x.shape),
                     learning_rule=nengo.PES(error))
```

or will it be:

```python
with model:
    # Create a modulated connection between the 'pre' and 'post' ensembles
    nengo.LearningConnection(pre, post, function=lambda x: -1 * np.ones(x.shape),
                             learning_rule=nengo.PES(error))
```

or even:

```python
with model:
    # Create a modulated connection between the 'pre' and 'post' ensembles
    nengo.PESConnection(pre, post, error, function=lambda x: -1 * np.ones(x.shape))
```

I think I'd lean towards the middle option. I also don't quite understand the ...

How do people specify an initial weight matrix, or an initial random distribution of weights? (Basically, I'm trying to think of the common use cases people want when they add in a learning rule.)
```python
# Create ensembles
model.make_ensemble('Pre', nengo.LIF(N * D), dimensions=D)
model.make_ensemble('Post', nengo.LIF(N * D), dimensions=D)
error = model.make_ensemble('Error', nengo.LIF(N * D), dimensions=D)
```
Mixing objects and strings... my eyes, my eyes!!!
I was just initializing it to something weird, because otherwise it already does a communication channel. This is just the decoder-level learning; in the weight learning case, it will be however we specify weight matrices for non-learning neuron-to-neuron connections (hence @hunse's question about making sure those work).

Right now, I'm proposing the top option. I did consider the two possibilities you presented, and in fact originally used something like the middle one (but not quite). The reason I went with the one I went with is to A) make the use of neurons and learning rules essentially the same, where neurons apply to ensembles and learning rules apply to connections, and B) reduce duplication of connection code, which should be unchanged except for the additional learning op that modifies the decoder / weight signals.

A few more possibilities:

```python
with model:
    conn = nengo.Connection(pre, post, function=lambda x: -1 * np.ones(x.shape))
    pes = nengo.PES(conn, error)
```

```python
with model:
    conn = nengo.Connection(pre, post, function=lambda x: -1 * np.ones(x.shape))
    pes = nengo.LearningRule(conn, nengo.PES(error))
```

The advantage of these two is that they are much cleaner when applying multiple learning rules to the same connection. This is one place where the analogy "neuron type is to ensembles as learning rule is to connections" breaks down, because you can only have one neuron type, but possibly multiple learning rules. We could do this by accepting a list for `learning_rule`.
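For concreteness, stacking multiple rules on one connection under the first of those proposals might look like the sketch below; `nengo.BCM` is a hypothetical second (unsupervised) rule used only for illustration, not part of this PR:

```python
with model:
    conn = nengo.Connection(pre, post)
    nengo.PES(conn, error)  # supervised rule, modulated by the error ensemble
    nengo.BCM(conn)         # hypothetical unsupervised rule on the same connection
```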
Aren't there a significant variety of proposed learning rules? I think the suggestion with `nengo.PES(conn, error)` is the right one.
I like what @jaberg is suggesting here. It may be very premature to talk about a generic LearningRule class when we've only got one example of it, and we know that different learning rules require very different backend implementations. I also really like the idea of initially making a connection exactly how you would normally, then adding a learning rule to it. That could work very well in teaching situations.

```python
with model:
    conn = nengo.Connection(pre, post, function=lambda x: -1 * np.ones(x.shape))
    pes = nengo.PES(conn, error)
```

This is also the first time I've seen a use for keeping the Connection object around for later.
Ooh, let me update my suggestion a bit:

```python
with model:
    conn = nengo.Connection(pre, post, function=lambda x: -1 * np.ones(x.shape))
    nengo.PESLearning(conn, error)
```
@tcstewar: just what I was thinking 👍
(calling it PESLearning I think helps make it semi-self-documenting)
Sorry, I didn't mean to introduce a generic `LearningRule` class.

```python
with model:
    conn = nengo.Connection(pre, post, function=lambda x: -1 * np.ones(x.shape))
    model.learn(conn, nengo.PES(error))
```

or

```python
conn = model.connect(pre, post, function=lambda x: -1 * np.ones(x.shape))
model.learn(conn, nengo.PES(error))
```
@tbekolay: we had dreamed at one point about being able to write a learning rule as a Python function in a script, so that the user can easily play around with different learning rules. Do you think this will be possible with the way you have things set up? Could it be done through a ...
Nah, this is a WIP, so we can talk about learning rules in general here. The problem with that is the need to have a common interface for all learning rules, which is, like, impossible. In Java, I split it into learning rules without error signals (unsupervised) and those with error signals (supervised), which seemed like a general split, and you could essentially ask people to provide the ...
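A rough sketch of what a user-provided supervised rule could look like under that unsupervised/supervised split; the function name, signature, and shapes here are hypothetical, not part of this PR:

```python
import numpy as np

def pes_delta(activities, error, learning_rate=1e-4):
    """Hypothetical user-supplied supervised rule.

    activities: (n_neurons,) filtered presynaptic activities
    error: (dims,) error signal, here taken as target minus actual
    Returns the (n_neurons, dims) update to apply to the decoders.
    """
    return learning_rate * np.outer(activities, error)
```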
Hey guys, I'd like to revive this issue. I wrote a learning rule for learning a classifier. I wrote it to do some comparisons with less-neural machine learning algorithms, but I wonder if it might be interesting as a learning rule for the BG? Anyway, I think I'd like to add it to this PR so that, if others agree it would be useful in Nengo, we can coordinate the two of them so they match. First item of business though: rebasing this on master. Has anyone (@tbekolay) done this? I'll start with that.
I haven't rebased to master yet, so go nuts! Should it be part of the same PR though? Perhaps we could make it another PR but base that one on the PES branch? Or is that too hard to coordinate?
@tbekolay Sounds good. I'll make the new rule a PR against the PES branch.
What's the status with learning now? I saw the PR in #232; any movement on this since then?
I haven't worked on it, but @e2crawfo has, and has it working with Nengo OCL. #232 introduced some things in addition to the learning that make it incompatible with the current codebase. I suspect that learning rules are going to need something similar to what's being discussed in #285, so perhaps I'll give this a rewrite in that style.
I've been using the PES rule from the branch cleanup_learning; it seems to work fine.
Gahh, that branch isn't up to date with the current syntax though.
See #303.
Now superseded by other learning PRs, so closing.
This is a first proof of concept of decoder learning in new Nengo. It just uses the PES rule to learn better decoders, but the way it's implemented is how all the other learning rules will have to be implemented, so I wanted to get feedback on the syntax before I make this work on weights and implement hPES, etc.
First off, here it is learning a communication channel super fast. It starts off with decoders that solve the function f(x) = -1, but does the communication channel after that.
Obviously a trivial example, but it's the same math as before so whatev's.
Check out learn_communicationchannel.py and see what you think of the syntax. It's just a new kwarg to the connect call (and therefore to the Connection classes).
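For reference, the kwarg usage from the snippet discussed above (this assumes `model`, `pre`, `post`, and `error` are set up as in that script):

```python
import numpy as np
import nengo

with model:
    # Start with decoders that solve f(x) = -1; the PES rule then
    # learns the communication channel online from the error ensemble.
    nengo.Connection(pre, post, function=lambda x: -1 * np.ones(x.shape),
                     learning_rule=nengo.PES(error))
```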
Things I would appreciate feedback on:
* The build process. `build_pes` is very easy, because `build_connect` doesn't know anything about learning rules. But we have to enforce that learning rules build after all connections. Perhaps the same could happen with neural nonlinearities, which would simplify the `build_ensemble` method?
* The simulation. It may be possible to use a `DotInc` or something rather than making `SimPES`. Is that true? It might involve reshaping some signals to do it, though; is it worth it?
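On that last point, here is a sketch of the PES decoder update in its usual outer-product form; the function, names, and shapes are illustrative assumptions, not this PR's actual signal layout:

```python
import numpy as np

def pes_update(decoders, activities, error, kappa=1e-4):
    """PES decoder update: delta_d = kappa * outer(a, e).

    decoders:   (n_neurons, dims) decoder matrix, updated in place
    activities: (n_neurons,) filtered presynaptic activities
    error:      (dims,) decoded error signal (target minus actual)
    """
    # Because the update is a scaled outer product, it is plausible
    # that a DotInc-style operator could express it instead of a
    # dedicated SimPES operator, at the cost of some reshaping.
    decoders += kappa * np.outer(activities, error)
    return decoders
```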