# Error connection in learning can't use slicing on post #632
This problem comes down to the fact that we do output slicing within the transform. So in the builder, the output of the [...] That said, since [...]
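What "output slicing within the transform" means can be illustrated with a plain-Python sketch (a hypothetical helper for illustration, not actual Nengo builder code): connecting to a slice of `post` is implemented by scattering the connection's output into a subset of the post population's dimensions, so the sliced target is baked into a transform rather than being a first-class object the builder can hand to a learning rule.

```python
# Hypothetical illustration (not Nengo builder code): connecting to
# post[1:3] of a 4-D population is implemented as a transform that
# scatters the connection's output into a subset of post's dimensions.
def scatter(values, post_slice, post_dims):
    """Place `values` into the dimensions selected by `post_slice`."""
    out = [0.0] * post_dims
    for i, v in zip(range(post_dims)[post_slice], values):
        out[i] = v
    return out

print(scatter([1.0, 2.0], slice(1, 3), 4))  # [0.0, 1.0, 2.0, 0.0]
```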
I missed the discussion where a modulatory connection needed to be created for learning connections; it's super confusing. Before I updated in the last week, I used to be able to directly pass in functions as the error signal. If we could go back to doing that too, it would solve this and be way easier to understand!
You could pass in functions as the error signal? I'm not sure that was supported behaviour. Also, wouldn't that computation be done in Python, not in neurons? Finally, it doesn't solve this problem, since this should still be do-able; it's just a workaround. The [...]
That's what I end up doing most of the time, but it's definitely a bit ugly. We've also talked about having the error connection target the learned connection, rather than the pre/post population. If we did that, we'd have access to the slice in the post object, and could make things work that way. I think conceptually that also makes a bit more sense. The downside is that it introduces a new possible object type into the connection logic, forcing more `if isinstance(post, ...)` code, which no one likes.
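The "more `isinstance`" worry can be sketched in plain Python (with hypothetical stand-in classes, not Nengo's real ones): every newly allowable `post` type adds another branch to connection-handling code.

```python
# Hypothetical stand-in classes for illustration; not Nengo's real ones.
class Ensemble: pass
class Node: pass
class Connection: pass

def describe_post(post):
    # Each newly-allowed post type adds another isinstance branch like this.
    if isinstance(post, (Ensemble, Node)):
        return "standard connection"
    elif isinstance(post, Connection):
        return "error connection targeting a learned connection"
    raise TypeError("invalid post object: %r" % (post,))

print(describe_post(Connection()))  # error connection targeting a learned connection
```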
Yeah, that's the workaround I ended up at too. :( It's also confusing when explaining it that the modulatory connection even [...]

(replying to Daniel Rasmussen, Mon, Jan 26, 2015, 9:37 PM)
Yeah, it's definitely weird and unexplainable. In terms of other options, there are a number, and I think pretty much any one would be better than how we do it now. But we had this discussion a while ago, and there wasn't any consensus (I don't think), so nothing ever got changed.
Ah OK, I actually meant to ask where that discussion was in my last post.

(replying to Eric Hunsberger, Mon, Jan 26, 2015, 9:45 PM)
#344 I think. Though it seems like that was settled and implemented. I'm not sure if we ever actually addressed the [...]
Also, that discussion ended with "@drasmuss gets his way again", so I'm blaming Dan for all this!
The connect-to-connections issue is #366, for context. Seems like a good thing to chat about at the dev meeting today at noon. If you have any suggestions that you want discussed at the meeting, proposing them this morning would be easiest so we can think about them before the meeting.

### Option 1: Modulatory connections

This is what we have now.

```python
with model:
    err_conn = nengo.Connection(err, post)
    learning_rule = nengo.PES(err_conn)
    learned_conn = nengo.Connection(pre, post, learning_rule_type=learning_rule)
```

### Option 2: Connect to connection, LR on learned connection

The connection to the connection provides the error signal, and the learning rule stays on the learned connection.

```python
with model:
    learned_conn = nengo.Connection(pre, post, learning_rule_type=nengo.PES())
    err_conn = nengo.Connection(err, learned_conn)
    # Will have to do validation at build-time
    # to make sure a connection to learned_conn exists
```

OR

```python
with model:
    learned_conn = nengo.Connection(pre, post)
    err_conn = nengo.Connection(err, learned_conn)
    learned_conn.learning_rule_type = nengo.PES(err_conn)
    # Can do validation at construction-time, but requires this two-step process
    # since err_conn needs learned_conn to exist,
    # and therefore the learning rule needs both to exist
```

### Option 3: Connect to connection, LR on error connection

The connection to the connection provides the error signal, and has the learning rule applied to it, even though it's the learned connection that is modified.

```python
with model:
    learned_conn = nengo.Connection(pre, post)
    err_conn = nengo.Connection(err, learned_conn, learning_rule_type=nengo.PES())
    # Validation can be done at construction-time;
    # connections to a connection _must_ have a learning rule,
    # and that learning rule _must_ be set up to modify what it's connected to
    # (unless learning_rate is 0, etc.)
```

If you can think of any more options, please add a comment!
Option 2 makes sense to me. Something that I usually do is specify a function on the error connection and also specify a function to initially approximate for the learned connection, so assuming that could be done like [...] that makes a lot of sense to me!
I think I like Option 2 the best as well. Option 3 seems nicer from an implementation perspective, but I think it's less clear semantically (it's not clear which connection is going to be the one that's learning).
Yep, that'd be correct usage. One issue about Option 2 that I just noticed from Travis's example is that it's not clear how you'd weight multiple error signal sources in this case. So, let me go through the options again in that situation (and, as an additional rub, I'll use the BCM rule on the learned connection also).

### Option 1: Modulatory connections

```python
with model:
    err_conn1 = nengo.Connection(err1, post, modulatory=True)
    err_conn2 = nengo.Connection(err2, post, modulatory=True)
    learned_conn = nengo.Connection(pre, post, learning_rule_type=[
        nengo.PES(err_conn1), nengo.PES(err_conn2), nengo.BCM()
    ])
```

### Option 2a: Connect to connection, LR on learned connection, w/ magic

```python
with model:
    learned_conn = nengo.Connection(pre, post, learning_rule_type=[
        nengo.PES(), nengo.BCM()
    ])
    err_conn1 = nengo.Connection(err1, learned_conn)
    err_conn2 = nengo.Connection(err2, learned_conn)
    # Note that the learning rule for PES will be set; therefore, to weight err1
    # and err2 differently, you would need to set a transform on that connection
```

### Option 2b: Connect to connection, LR on learned connection, less magic

```python
with model:
    learned_conn = nengo.Connection(pre, post)
    err_conn1 = nengo.Connection(err1, learned_conn)
    err_conn2 = nengo.Connection(err2, learned_conn)
    learned_conn.learning_rule_type = [
        nengo.PES(err_conn1), nengo.PES(err_conn2), nengo.BCM()
    ]
    # Here, there are two PES rules which can have different learning rates
```

### Option 3: Connect to connection, LR on error connection

```python
with model:
    learned_conn = nengo.Connection(pre, post, learning_rule_type=nengo.BCM())
    err_conn1 = nengo.Connection(err1, learned_conn, learning_rule_type=nengo.PES())
    err_conn2 = nengo.Connection(err2, learned_conn, learning_rule_type=nengo.PES())
```
### Option 1b: Implicit modulatory connections

```python
with model:
    err_conn1 = nengo.Connection(err1)  # `post` defaults to None, making connection modulatory
    err_conn2 = nengo.Connection(err2, function=np.sin)  # can still compute a function
    learned_conn = nengo.Connection(pre, post, learning_rule_type=[
        nengo.PES(err_conn1), nengo.PES(err_conn2), nengo.BCM()
    ])
```

### Option 4: Connect to [...]
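The body of Option 4 was lost in extraction, but later comments in this thread (e.g. `nengo.Connection(error_1, conn_1.learning_rule['my_pes'], ...)`) show its shape: the error signal connects directly to the learning rule object hanging off the learned connection. A toy mock of that object model (illustration only, with simplified stand-in classes, not real Nengo code):

```python
# Toy mock of Option 4's object model (illustration only, not Nengo code):
# a Connection exposes a learning_rule object (or dict of them), and error
# connections target that object directly instead of the post population.
class LearningRule:
    def __init__(self, rule_type, connection):
        self.rule_type = rule_type    # e.g. "PES"
        self.connection = connection  # the learned connection being modified

class Connection:
    def __init__(self, pre, post, learning_rule_type=None):
        self.pre, self.post = pre, post
        if isinstance(learning_rule_type, dict):
            self.learning_rule = {name: LearningRule(t, self)
                                  for name, t in learning_rule_type.items()}
        elif learning_rule_type is not None:
            self.learning_rule = LearningRule(learning_rule_type, self)

# Error connections target the learning rule, which knows its connection:
learned = Connection("pre", "post", learning_rule_type={"my_pes": "PES"})
err_conn = Connection("err", learned.learning_rule["my_pes"])
assert err_conn.post.connection is learned
```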
Option 4! I like it a lot 👍 Nice symmetry with [...]
I like 4 as well; seems like that syntax most closely matches what we [...]

(replying to Trevor Bekolay, 27 January 2015, 11:40)
A few notes from the dev meeting: [...]
TODO (as discussed in #632): cannot build the error connection until the post LearningRule has been built, but cannot build the LearningRule until its target connection has been built.
Connect the error directly into a learning rule (as discussed in #632), instead of making a modulatory connection. This is cleaner and clearer. Learning rules are now built by their parent connections, and since a connection to a learning rule must be added to a network after the connection containing the learning rule, the learning rule will always be built when the builder for the error connection into said learning rule is called. This fixes #632 (slicing `post` in learned connections). The test `test_learning_rules.py:test_pes_decoders_multidimensional` has been updated to test this feature.

TODO: currently cannot change the learning rule type after accessing `Connection.learning_rule`, since a handle to the old learning rule could be out there somewhere. But this means the user can't build the model and then change the learning rule type to try a different one (like in `examples/learn_unsupervised.ipynb`).
I think:

```python
error_conn = nengo.Connection(error, post, modulatory=True)
conn.learning_rule_type = [nengo.PES(error_conn), nengo.BCM()]
```
Maybe a dictionary works too. Compare: [...]
Jan, I think modulatory connections are no longer supported. The method that you have indicated used to be correct, but the interface has now been changed to allow connections to the learning rule object instead of using modulatory connections (see option 4 in the above discussion).
Yeah, a dictionary works. I tried this out @s72sue; I think you're right that it has something to do with the learning rate for BCM. I used a learning rate of 10% of the default, and it worked fine. I.e., I added the line:

```python
conn.learning_rule_type['my_bcm'].learning_rate *= 0.1
```

And it seems to work. Probably this means we should lower the default BCM learning rate.
@tbekolay, agreed that the default BCM learning rate should be lowered. Also, what's the best way to connect an error (with size_out=1) to a learning rule object with size_in=3? It seems like I need to specify a (3x1) transform.
Hmm, that transform should work, I think? Maybe it's transposed; does `[[1, 1, 1]]` work?

Another way that should work is a quick function:

```python
error_conn1 = nengo.Connection(error_1, conn_1.learning_rule['my_pes'], function=lambda x: [x, x, x])
```
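Why the transform has to be (3x1) can be shown with a plain-Python sketch (no Nengo, hypothetical helper): the transform is a matrix mapping the error connection's 1-D output onto the learning rule's 3-D input, which is the same effect as the `lambda x: [x, x, x]` function.

```python
# Plain-Python sketch (not Nengo): a (3 x 1) transform maps a 1-D error
# signal into the 3-D input of a learning rule, with the same effect as
# the function lambda x: [x, x, x].
def apply_transform(transform, x):
    """Matrix-vector product with plain lists: each row dotted with x."""
    return [sum(t * xi for t, xi in zip(row, x)) for row in transform]

transform = [[1.0], [1.0], [1.0]]         # shape (3, 1): size_in=1 -> size_out=3
print(apply_transform(transform, [0.5]))  # [0.5, 0.5, 0.5]
```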
No, `[[1, 1, 1]]` doesn't even compile. Using the function also gives the same error.

(replying to Trevor Bekolay, Tue, Jul 7, 2015, 4:12 PM)
This is for sure a bug then. Can you open a separate issue for it? Thanks :D
Sure, I will create a new issue.

(replying to Trevor Bekolay, Tue, Jul 7, 2015, 4:22 PM)
So when you have an error connection for learning, if you try to slice into the post population, an error gets thrown. Pre-slicing is OK; post-slicing is not.

You can get around it by creating a dummy population of the appropriate size to connect to, but that makes setting up learning connections even more of a pain. Here's the code that triggers it: [...]

Note: this also gets thrown when the post-population is the right size but you slice into it on the error connection: [...]