Provide SP expressions as keys to the SPA associative memory #982
Conversation
Seems like it could be useful. Can you add unit tests for this?
```python
# If output vocabulary is not specified, use input vocabulary
# (i.e. autoassociative memory)
if output_vocab is None:
    output_vocab = input_vocab
    output_vectors = input_vectors
    output_keys = input_keys
```
This line seems unnecessary?
Don't think so...
It only gets used in the `else` block, which initializes it itself if necessary.
And it wasn't there before...
Yeah, looking at it now, I think that line shouldn't be there. Unless @xchoo wants to share a bit more of his thoughts :)
I added it because I thought the keys would be used as class attributes or passed to the `network.AssociativeMemory`, but that doesn't seem to be the case. It seems those keys are only used to create the input and output vectors, if I'm not mistaken...
👍
+1 for this PR -- I ran into a similar scenario when I put together my CogSci submission a while ago.
Would this be the right place to add tests: https://github.com/nengo/nengo/blob/master/nengo/spa/tests/test_assoc_mem.py?
Exactly that place. 🎯
I've added a few tests, but I would like to discuss how useful they are, since the associative memory seems to have a lot of noise when doing this sort of mapping. So I allowed a pretty big error margin -- does this make sense at all? @xchoo @jgosmann Also, did I just mess up this branch by merging with master? 😕
It's the right idea, but the error margin is too large. The proper fix would probably be to improve the associative memory to produce a more precise output instead of overshooting, but I'm not sure how easily that can be achieved. We could also say that we don't support exact outputs. There are a few assertions that I don't understand, but I've mostly skimmed it so far.
Yes (but it's fixable). Never do merges in the Nengo development model; always do a rebase instead. (It might also make sense to configure `git pull` to rebase by default.)
Hmmmm. I think we might want to document that the associative memory works by computing the dot product between the input vector and the output vector (i.e., it expects the input vector magnitudes to be nominally 1). There might be some confusion if this is not stated, because a user might put in, as an input SP phrase, something whose magnitude is not close to 1.
```python
vec_sim = sim.data[prob]

final_vecA = vec_sim[100:490].mean(axis=0)
simA = np.dot(voc2.vectors, final_vecA)
```
I think in this case, we want to use the vector norm of the vector difference (or the cosine angle) as a comparison here. The dot product is not accurate when the input vectors do not have a magnitude of 1 (`-C+D` and `0.3*E` are not guaranteed to have a magnitude of 1).
What do you mean by "compare the relative strengths of the vectors"?
Normalizing the vectors to length 1 for comparison (i.e., computing the cosine angle; cf. @xchoo's comment).
I've updated the tests to use cosine similarity and simplified them by presenting just one input.
I agree with Xuan that it might not be clear what the expected outputs are, so I think documenting how the AM works would fit well in the docstring for the AM module (which is currently a one-liner, and this would contribute to closing #885), but I think it'd make sense to make another PR for the documentation.
What would be the use case of providing an expression like that? Also, you're relying on the order of the pointers in the vocab; it would be nicer (for readability) to access them by name instead. For selecting the probe values, you could use boolean masks on `sim.trange()`.
I guess it would have helped, because there is now an additional merge commit which looks exactly like it was produced by a `git pull` doing a merge. To change that setting, add

```
[pull]
    rebase = true
```

to your Git configuration.
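The same setting can also be written from the command line instead of editing the configuration file by hand (drop `--global` to set it for a single repository only):

```shell
# Make `git pull` rebase instead of merge for all repositories;
# equivalent to adding "[pull] rebase = true" to the global config.
git config --global pull.rebase true

# Read the setting back to verify it
git config --global pull.rebase
```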
That was pretty easy to fix, actually. Just a rebase.
I've updated the tests. 😄
Thanks for the update @xchoo, looks good to me! Btw., is there a way to get plots when only running these tests? Regarding use cases: I had an example where I needed exactly these kinds of keys.
To get plots, there is a command line option for it; it will save the plots in a separate directory.
Does this only depend on the relative scaling of A, B, and C? Or is the actual length of the vector important? |
Yeah, it was just the scaling. |
```python
# Specify t ranges
t = sim.trange()
t_item1 = (t > 0.075) & (t < 0.1)
```
I didn't know that you can use the `&` operator; I always used `np.logical_and`. This is so much more convenient! :)
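For reference, the two spellings are equivalent on boolean arrays; the one caveat is that `&` binds more tightly than the comparison operators, so the comparisons need parentheses. A minimal check (toy time vector, not the actual test data):

```python
import numpy as np

t = np.linspace(0, 0.2, 5)  # toy time vector

# Parentheses are required: `&` binds tighter than `>` and `<`.
mask_op = (t > 0.075) & (t < 0.1)
mask_fn = np.logical_and(t > 0.075, t < 0.1)

assert np.array_equal(mask_op, mask_fn)
```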
In that case I would argue that it is different. Made a few comments in the code, but apart from that it LGTM. 🍰
It would be great to get this into master :) |
I also ran into a situation today where this would have been handy.... So I'm kicking myself for not reviewing this earlier.... |
LGTM.
I'll add the changelog entry and clean up the history.
Looks even better to me. |
This allows using Semantic Pointer expressions like '0.5*A + 0.3*B' as input_keys and output_keys to the AssociativeMemory. Before, it was only possible to give specific Semantic Pointers like 'A'.

Co-authored-by: Xuan Choo <xchoo.mainframe@gmail.com>
Add changelog entry.
I cleaned up the history, added the changelog entry, and added another fixup commit. If I get an OK on this, I can merge it.
I am now utterly convinced that this is the best this commit can possibly be. |
I needed this scenario for my model, so I decided to submit a PR in case it might be of interest to others. Currently, only single keys can be passed as the input and output keys to the associative memory module. This PR slightly changes the existing code so that it also supports expressions in the input and output keys. Then something like this can be done:
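To illustrate what an expression key denotes without depending on the nengo API itself, here is a hedged NumPy sketch (all names are illustrative): a key such as `'0.5*A + 0.3*B'` corresponds to a linear combination of the vocabulary's base vectors, and such a combined key can still be matched against the vocabulary via cosine similarity.

```python
import numpy as np

rng = np.random.RandomState(1)
dim = 64

# A toy "vocabulary": named random unit vectors standing in for
# the Semantic Pointers A, B, and C.
vocab = {name: rng.randn(dim) for name in "ABC"}
for name in vocab:
    vocab[name] = vocab[name] / np.linalg.norm(vocab[name])

# An expression key such as '0.5*A + 0.3*B' denotes a linear
# combination of vocabulary vectors.
key_vector = 0.5 * vocab["A"] + 0.3 * vocab["B"]

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# The expression vector is closest (in angle) to its dominant term A,
# and nearly orthogonal to the unrelated pointer C.
sims = {name: cosine(key_vector, v) for name, v in vocab.items()}
```

This is only a sketch of the idea; in the PR itself the expressions are parsed by the vocabulary, and the resulting vectors are used to build the associative memory's input and output mappings.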