
Error connection in learning can't use slicing on post #632

Closed
studywolf opened this issue Jan 26, 2015 · 27 comments

@studywolf (Collaborator)

When you have an error connection for learning and you try to slice into the post population,

File "/home/tdewolf/Dropbox/code/nengo/nengo/builder/operator.py", line 234, in make_step
    assert da in [1, dy] and dx in [1, dy] and max(da, dx) == dy
AssertionError

gets thrown. Pre-slicing is OK; post-slicing is not.
You can get around it by creating a dummy population of the appropriate size to connect to, but that makes setting up learning connections even more of a pain. Here's the code that triggers it:

import numpy as np
import nengo

nengo.log()

model = nengo.Network()

with model:
    dim = 6
    state = nengo.Node(output=[0] * dim)
    learning_signal = nengo.Node(output=[1] * (dim // 2))
    population1 = nengo.Ensemble(100, dim)
    population2 = nengo.Ensemble(100, dim)

    nengo.Connection(state, population1)
    # pre-slicing population1 on the error connection works fine
    err = nengo.Connection(learning_signal, population1[:dim // 2],
                           modulatory=True)
    # slicing the post population here triggers the AssertionError
    nengo.Connection(population1, population2[:dim // 2],
                     function=lambda x: np.zeros(dim // 2),
                     learning_rule_type=nengo.PES(err))

s = nengo.Simulator(model)
s.run(1)

Note: this also gets thrown when the post-population is the right size but you slice into it on the error connection:

import numpy as np
import nengo

nengo.log()

model = nengo.Network()

with model:
    dim = 6
    state = nengo.Node(output=[0] * dim)
    learning_signal = nengo.Node(output=[1] * (dim // 2))
    population1 = nengo.Ensemble(100, dim)
    population2 = nengo.Ensemble(100, dim // 2)

    nengo.Connection(state, population1)
    # slicing population1 on the error connection triggers the error,
    # even though population2 is already the right size
    err = nengo.Connection(learning_signal, population1[:dim // 2],
                           modulatory=True)
    nengo.Connection(population1, population2,
                     function=lambda x: np.zeros(dim // 2),
                     learning_rule_type=nengo.PES(err))

s = nengo.Simulator(model)
s.run(1)
@studywolf studywolf added the bug label Jan 26, 2015
@hunse (Collaborator) commented Jan 27, 2015

This problem comes down to the fact that we do output slicing within the transform. So in the builder, the output of the err connection is 6-D, since the output of the transform is 6-D and includes the slice mapping. I think the best solution to this is to take the slice out of the transform, and add a CopySlice operator that copies the output of the transform in err to the correct dimensions of the target population.

That said, since err is modulatory, we never actually need to do the CopySlice. So we could put in some special case code for modulatory connections where the post-population slice is not built into the transform. This would make the build_connection function more complicated, but would let us get by without adding a CopySlice operator, at least for now.
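To make "output slicing within the transform" concrete, here is a minimal NumPy sketch (illustrative only, not Nengo's actual builder code; the function name is made up): a connection into `post[:3]` of a 6-D population gets built as a single 6-column matrix whose rows outside the slice are zero, so the built connection always outputs a full 6-D signal.

```python
import numpy as np

def slice_into_transform(post_size, post_slice, transform):
    """Fold a post-slice into the transform matrix (illustrative only).

    Rows of the result outside the slice are zero, so the built
    connection outputs a full post_size-dimensional signal.
    """
    transform = np.atleast_2d(transform)
    full = np.zeros((post_size, transform.shape[1]))
    full[post_slice] = transform
    return full

# A 3-D error signal aimed at post[:3] of a 6-D population:
T = slice_into_transform(6, slice(0, 3), np.eye(3))
err = np.array([0.5, -0.2, 0.1])
out = T.dot(err)
print(out.shape)  # (6,) -- the built output is 6-D, not 3-D
```

A separate `CopySlice` operator would instead keep `T` at 3x3 and copy the 3-D result into the right rows of the target signal afterwards.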

@studywolf (Collaborator, Author)

I missed the discussion where it was decided that a modulatory connection needed to be created for learning connections; it's super confusing. Before I updated in the last week, I used to be able to directly pass in functions as the error signal. If we could go back to doing that too, it would solve this and be way easier to understand!

@hunse (Collaborator) commented Jan 27, 2015

You could pass in functions as the error signal? I'm not sure that was supported behaviour. Also, wouldn't that computation be done in Python, not in neurons? Finally, it doesn't solve this problem, since slicing should still be doable; passing in a function is just a workaround.

The modulatory argument basically says "the target population is a dummy; don't actually modify it". So it lets you create a connection that computes a function, but doesn't actually send that computed value anywhere. When you then create the learning connection, it can take the output of that connection (which isn't going anywhere) and use it as the error signal. It's kind of a strange way of doing things, as it means the modulatory connection doesn't really need a post. In fact, that suggests another workaround: in the example above, make some dim / 2-dimensional object (e.g. a Node), and connect the modulatory connection to that instead.

@drasmuss (Member)

In fact, that suggests another workaround: in the example above, make some dim / 2-dimensional object (e.g. a Node), and connect the modulatory connection to that instead.

That's what I end up doing most of the time, but it's definitely a bit ugly.

We've also talked about having the error connection target the learned connection, rather than the pre/post population. If we did that, we'd have access to the slice in the post object, and could make things work that way. I think that also makes a bit more sense conceptually. The downside is that it introduces a new possible object type into the connection logic, forcing more "if isinstance(post, ...)" code, which no one likes.

@studywolf (Collaborator, Author)

Yeah, that's the workaround I ended up at too. :(

It's also confusing when explaining it that the modulatory connection even needs a post-synaptic population, since its output doesn't affect whatever it's connecting to.


@hunse (Collaborator) commented Jan 27, 2015

Yeah, it's definitely weird and un-explainable.

In terms of other options, there are a number, and I think pretty much any one would be better than how we do it now. But we had this discussion a while ago, and there wasn't any consensus (I don't think), so nothing ever got changed.

@studywolf (Collaborator, Author)

Ah OK, I actually meant to ask where that discussion was in my last post; I'll do a search and see where it left off!


@hunse (Collaborator) commented Jan 27, 2015

#344 I think. Though it seems like that was settled and implemented. I'm not sure if we ever actually addressed the modulatory issue there.

@hunse (Collaborator) commented Jan 27, 2015

Also, that discussion ended with "@drasmuss gets his way again", so I'm blaming Dan for all this!

@tbekolay (Member)

The connect-to-connections issue is #366. For context, modulatory=True comes from Nengo 1.4, and I kept it around primarily because we don't want to lose the biological plausibility of our learning rules. Of course, the fact that the modulatory connection doesn't have to be anywhere near the actual post population means we've already lost that, so we should figure out a better way to do this. It would also be good because right now we have an annoying disparity between unsupervised learning rules (which are straightforward: you just add a learning rule to the connection) and supervised learning rules (which are weird: you have to make this modulatory connection, then a separate connection which is the one that has the learning rule).

Seems like a good thing to chat about at the dev meeting today at noon. If you have any suggestions that you want discussed at the meeting, proposing them this morning would be easiest so we can think about them before the meeting.


Option 1: Modulatory connections

This is what we have now.

with model:
    err_conn = nengo.Connection(err, post, modulatory=True)
    learning_rule = nengo.PES(err_conn)
    learned_conn = nengo.Connection(pre, post, learning_rule_type=learning_rule)

Option 2: Connect to connection, LR on learned connection

The connection to the connection provides the error signal, and the learning rule stays on the learned connection.

with model:
    learned_conn = nengo.Connection(pre, post, learning_rule_type=nengo.PES())
    err_conn = nengo.Connection(err, learned_conn)
# Will have to do validation at build-time
# to make sure a connection to learned_conn exists

OR

with model:
    learned_conn = nengo.Connection(pre, post)
    err_conn = nengo.Connection(err, learned_conn)
    learned_conn.learning_rule_type = nengo.PES(err_conn)
# Can do validation at construction-time, but requires this two-step process
# since err_conn needs learned_conn to exist,
# and therefore the learning rule needs both to exist

Option 3: Connect to connection, LR on error connection

The connection to the connection provides the error signal, and has the learning rule applied to it, even though it's the learned connection that is modified.

with model:
    learned_conn = nengo.Connection(pre, post)
    err_conn = nengo.Connection(err, learned_conn, learning_rule_type=nengo.PES())
# Validation can be done at construction-time;
# connections to a connection _must_ have a learning rule,
# and that learning rule _must_ be set up to modify what it's connected to
# (unless learning_rate is 0, etc.)

If you can think of any more options, please add a comment!
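For context on what these options are all routing, here is a minimal NumPy sketch of the PES decoder update that consumes the error signal. This is a simplification; the function name and scaling are illustrative, not Nengo's exact implementation.

```python
import numpy as np

def pes_update(decoders, error, activities, learning_rate=1e-4, dt=0.001):
    """One simplified PES step: nudge the decoders along -error x activities.

    decoders:   (dims, n_neurons) decoding weights being learned
    error:      (dims,) error signal delivered by the error connection
    activities: (n_neurons,) presynaptic firing rates
    """
    n = activities.size
    delta = -learning_rate * dt / n * np.outer(error, activities)
    return decoders + delta

rng = np.random.RandomState(0)
decoders = np.zeros((2, 50))
activities = rng.rand(50)          # all non-negative rates
error = np.array([1.0, -0.5])
new_decoders = pes_update(decoders, error, activities,
                          learning_rate=1.0, dt=1.0)
# Positive error in a dimension pushes its decoders down, and vice versa:
print(np.all(new_decoders[0] <= 0), np.all(new_decoders[1] >= 0))  # True True
```

Whichever API wins, all the rule ultimately needs is a correctly sized `error` vector delivered to it each timestep.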

@studywolf (Collaborator, Author)

Option 2 makes sense to me. Something I usually do is specify a function on the error connection and also specify a function for the learned connection to initially approximate, so assuming that could be done like

    learned_conn = nengo.Connection(pre, post, function=lambda x: initial_approx,
                                    learning_rule_type=nengo.PES())
    err_conn = nengo.Connection(err, learned_conn, function=lambda x: training_signal)

that makes a lot of sense to me!

@drasmuss (Member)

I think I like Option 2 the best as well. Option 3 seems nicer from an implementation perspective, but I think it's less clear semantically (it's not clear which connection is going to be the one that's learning).

@tbekolay (Member)

Yep, that'd be correct usage.

One issue about Option 2 that I just noticed from Travis's example is that it's not clear how you'd weight multiple error signal sources in this case. So, let me go through the options again in that situation (and, as an additional rub, I'll use the BCM rule on the learned connection also).


Option 1: Modulatory connections

with model:
    err_conn1 = nengo.Connection(err1, post, modulatory=True)
    err_conn2 = nengo.Connection(err2, post, modulatory=True)
    learned_conn = nengo.Connection(pre, post, learning_rule_type=[
        nengo.PES(err_conn1), nengo.PES(err_conn2), nengo.BCM()
    ]) 

Option 2a: Connect to connection, LR on learned connection, w/ magic

with model:
    learned_conn = nengo.Connection(pre, post, learning_rule_type=[
        nengo.PES(), nengo.BCM()
    ])
    err_conn1 = nengo.Connection(err1, learned_conn)
    err_conn2 = nengo.Connection(err2, learned_conn)
# Note that the learning rule for PES will be set; therefore, to weight err1
# and err2 differently, you would need to set a transform on that connection
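A small NumPy sketch of that last point (illustrative of the signal flow, not Nengo internals): multiple connections terminating on one learning-rule input simply sum, so each connection's transform acts as a weight on that error source.

```python
import numpy as np

def summed_error(signals, transforms):
    """Combine error sources the way several connections terminating on one
    learning-rule input would: apply each connection's transform, then sum."""
    return sum(np.dot(np.atleast_2d(t), s) for t, s in zip(transforms, signals))

err1 = np.array([1.0, 0.0])
err2 = np.array([0.0, 1.0])
# Weight err1 four times as strongly as err2 via the transforms:
total = summed_error([err1, err2], [0.8 * np.eye(2), 0.2 * np.eye(2)])
print(total)  # [0.8 0.2]
```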

Option 2b: Connect to connection, LR on learned connection, less magic

with model:
    learned_conn = nengo.Connection(pre, post)
    err_conn1 = nengo.Connection(err1, learned_conn)
    err_conn2 = nengo.Connection(err2, learned_conn)
    learned_conn.learning_rule_type = [
        nengo.PES(err_conn1), nengo.PES(err_conn2), nengo.BCM()
    ]
# Here, there are two PES rules which can have different learning rates

Option 3: Connect to connection, LR on error connection

with model:
    learned_conn = nengo.Connection(pre, post, learning_rule_type=nengo.BCM())
    err_conn1 = nengo.Connection(err1, learned_conn, learning_rule_type=nengo.PES())
    err_conn2 = nengo.Connection(err2, learned_conn, learning_rule_type=nengo.PES())

@hunse (Collaborator) commented Jan 27, 2015

Option 1b: Implicit modulatory connections

with model:
    err_conn1 = nengo.Connection(err1)  # `post` defaults to None, making connection modulatory
    err_conn2 = nengo.Connection(err2, function=np.sin)  # can still compute a function
    learned_conn = nengo.Connection(pre, post, learning_rule_type=[
        nengo.PES(err_conn1), nengo.PES(err_conn2), nengo.BCM()
    ]) 

Option 4: Connect to LearningRule objects on connection

with model:
    learned_conn = nengo.Connection(pre, post, learning_rule_type=[
        nengo.PES(), nengo.BCM()
    ])
    err_conn1 = nengo.Connection(err1, learned_conn.learning_rule[0])  # connect to PES
    err_conn2 = nengo.Connection(err2, learned_conn.learning_rule[1])  # connect to BCM 
        # (imagine that BCM needed an error)

or with a dictionary

with model:
    learned_conn = nengo.Connection(pre, post, learning_rule_type={
        'my_pes': nengo.PES(), 'my_bcm': nengo.BCM()
    })
    err_conn1 = nengo.Connection(err1, learned_conn.learning_rule['my_pes'])
    err_conn2 = nengo.Connection(err2, learned_conn.learning_rule['my_bcm'])
        # (imagine that BCM needed an error)

@tbekolay (Member)

Option 4! I like it a lot 👍 Nice symmetry with ens.neurons too.

@drasmuss (Member)

I like 4 as well; it seems like that syntax most closely matches what we actually think is going on in the model. It also makes it clear that the only reason you would make one of these connections is to provide an error signal. I was wondering how we would handle the case where someone connected to a non-learning Connection, which I think would end up being a pretty arbitrary exception where we just say "you can't do that". But this avoids that whole problem, so 👍.


@tbekolay (Member)

A few notes from the dev meeting:

  • Option 4 is definitely the way to go
  • The learning rule should have a size_in, size_out like neurons do (and can use the connection to get info, like neurons do for n_neurons)
    • If size_in == 0 for rules like BCM, then we ensure that you can't provide input to those learning rule types
  • If a learning rule expects input but has no incoming connections, we should warn that no connection has been set (we should make this a general network-wide thing: scan for any object that has size_in > 0 but no corresponding inputs)
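The network-wide scan proposed in the last bullet could look roughly like this (pure Python; the `Obj` class and the tuple-based connection list are simplified stand-ins for Nengo's object model):

```python
class Obj:
    """Stand-in for any Nengo object or learning rule with an input port."""
    def __init__(self, name, size_in):
        self.name = name
        self.size_in = size_in

def unconnected_inputs(objects, connections):
    """Return names of objects that expect input (size_in > 0) but have
    no incoming connection -- the proposed warning pass."""
    targets = {post for (pre, post) in connections}
    return [o.name for o in objects if o.size_in > 0 and o not in targets]

pes = Obj("pes_rule", size_in=2)    # supervised: expects an error signal
bcm = Obj("bcm_rule", size_in=0)    # unsupervised: takes no input
ens = Obj("ensemble", size_in=3)
conns = [(ens, pes)]                # only the PES rule receives input

print(unconnected_inputs([pes, bcm, ens], conns))  # ['ensemble']
```

With `size_in == 0` on rules like BCM, the same machinery that validates connection shapes would reject any attempt to connect into them.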

hunse added a commit that referenced this issue Feb 2, 2015
- as discussed in #632
- TODO: cannot build error connection until post LearningRule has
  been built, but cannot build LearningRule until target connection
  has been built.
hunse added a commit that referenced this issue Feb 3, 2015
Connect the error directly into a learning rule (as discussed in #632),
instead of making a modulatory connection. This is cleaner and clearer.

Learning rules are now built by their parent connections, and since
a connection to a learning rule must be added to a network after
the connection containing the learning rule, the learning rule
will always be built when the builder for the error connection into
said learning rule is called.
hunse added a commit that referenced this issue Feb 3, 2015
hunse added a commit that referenced this issue Feb 3, 2015
hunse added a commit that referenced this issue Feb 3, 2015

TODO: currently cannot change the learning rule type after accessing
`Connection.learning_rule`, since a handle to the old learning rule
could be out there somewhere. But this means the user can't build
the model and then change the learning rule type to try a different one
(like in `examples/learn_unsupervised.ipynb`).
hunse added a commit that referenced this issue Feb 4, 2015

This fixes #632 (slicing `post` in learned connections). The test
`test_learning_rules.py:test_pes_decoders_multidimensional` has been
updated to test this feature.
hunse added a commit that referenced this issue Feb 5, 2015
hunse added a commit that referenced this issue Feb 18, 2015
hunse added a commit that referenced this issue Feb 18, 2015
hunse added a commit that referenced this issue Feb 18, 2015
@tbekolay tbekolay added this to the 2.1.0 release milestone Mar 3, 2015
hunse added a commit that referenced this issue May 23, 2015
hunse added a commit that referenced this issue Jun 12, 2015
hunse added a commit that referenced this issue Jun 12, 2015
hunse added a commit that referenced this issue Jun 12, 2015
hunse added a commit that referenced this issue Jun 12, 2015
@hunse hunse closed this as completed in 610c124 Jun 19, 2015
@s72sue (Contributor) commented Jul 7, 2015

I am trying to use this new interface to implement hPES (i.e., using PES and BCM together), but it doesn't seem to work at all. Following is the code I am using.

[image: test1]

Am I missing something?
Using PES alone works fine. Is this related to the learning rate scaling for BCM?

@jgosmann (Collaborator) commented Jul 7, 2015

I think learning_rule_type takes a list, not a dictionary? Also, you have to initialize PES with the error connection, which has to be created slightly differently:

error_conn = nengo.Connection(error, post, modulatory=True)
conn.learning_rule_type = [nengo.PES(error_conn), nengo.BCM()]

@jgosmann (Collaborator) commented Jul 7, 2015

Maybe a dictionary works too. Compare.

@s72sue (Contributor) commented Jul 7, 2015

Jan, I think modulatory connections are no longer supported. The method you have indicated used to be correct, but the interface has now been changed to allow connections to the learning rule object instead of using modulatory connections (see Option 4 in the discussion above).

@tbekolay (Member) commented Jul 7, 2015

Yeah, a dictionary works. I tried this out @s72sue; I think you're right that it has something to do with the learning rate for BCM. I used a learning rate of 10% of the default, and it worked fine. I.e., I added the line:

conn.learning_rule_type['my_bcm'].learning_rate *= 0.1

And it seems to work. Probably this means we should lower the default BCM learning rate.

@s72sue (Contributor) commented Jul 7, 2015

@tbekolay, agreed that the default BCM learning rate should be lowered.

Also, what's the best way to connect an error (with size_out=1) to a learning rule object with size_in=3? It seems like I need to specify a (3x1) transform.

[image]

But this gives a runtime error in PES:

[image]
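On the shapes alone, a quick NumPy check (outside Nengo) suggests a (3x1) transform is indeed the right shape to broadcast a 1-D error to a 3-D rule input, so the runtime failure reported here may be a separate bug:

```python
import numpy as np

# A (3 x 1) transform maps a size_out=1 error signal to a size_in=3 rule input.
transform = np.array([[1.0], [1.0], [1.0]])
error = np.array([0.25])
rule_input = transform.dot(error)
print(rule_input)  # [0.25 0.25 0.25]
```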

@tbekolay (Member) commented Jul 7, 2015

Hmm, that transform should work, I think? Maybe it's transposed; does [[1, 1, 1]] work?

Another way that should work is a quick function:

error_conn1 = nengo.Connection(error_1, conn_1.learning_rule['my_pes'], function=lambda x: [x, x, x])

@s72sue (Contributor) commented Jul 7, 2015

No, [[1, 1, 1]] doesn't even compile. Using the function also gives the same PES error at run time as using [[1], [1], [1]].


@tbekolay (Member) commented Jul 7, 2015

This is for sure a bug then. Can you open a separate issue for it? Thanks :D

@s72sue (Contributor) commented Jul 7, 2015

Sure, I will create a new issue.

