Added wrapper around bind.py #849
Conversation
self.output = nengo.Node(size_in=1, label='output')
self.inputs = dict(A=(self.cc.A, vocab), B=(self.cc.B, vocab))
There's no guarantee that the vocab for `A`, for `B`, and for `output` will be the same.
So, should I enable arguments for both? What about the output? Should that have its own vocab as well?
Delete? Um. I think having different vocabs would be a better idea. 😄
See updated note. I commented on the wrong line.
I would have vocabs for each of the inputs and outputs. Although, I can see the argument for having the same vocab for both inputs, and a different vocab for the output.
Hmm... I'm not sure there should be an
I'm also not sure there should be the ability to set different Vocabularies. What should the system do if you use different Vocabularies at A, B, and the output? Should it do a conversion between them? If not, what's the use case you're envisioning? If so, what conversion? I'd just have a single Vocab that can be set for all three....
I left
I don't think I understand what the purpose of a Vocab is anymore... The documentation doesn't really cover translation between Vocabs and the other use cases you folks seem to be talking about. Who should I talk to about this confusion?
Okay, I talked to Jan about this and now I understand why translation between a lower dimensional Vocab to a higher dimensional Vocab would be desirable. However, I'm uncertain about translation between two Vocabs of the same dimensionality. Can someone give me a scenario where this is desirable and why this translation should happen on a
Hmm... I see what you mean. I guess it's not too bad to have it in there, although I really can't think of any situation where I'd want to use the parameter, and I don't find the docstring all that helpful (in both Compare and Bind). So I'd lean to get rid of them both, but I'm on a bit of a parameter minimization kick lately. ;) So, I'm fine either way with having input_magnitude here, but at some point I'd think about a separate PR to get rid of them (but that'd be a longer discussion, I think).
Given that Terry's already looked over this, I've gone and marked it as "needs merge".
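For readers following the discussion: binding in the SPA is circular convolution, and unbinding uses an approximate inverse (the involution, which turns convolution into circular correlation). A minimal numpy sketch of the math that `spa.Bind` implements with neurons (this is the idealized computation, not Nengo's neural implementation):

```python
import numpy as np

def circ_conv(a, b):
    # Circular convolution computed via FFT; this is the binding
    # operation that spa.Bind approximates with neurons.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(v):
    # Approximate inverse for unbinding: keep the first element and
    # reverse the rest. Binding with involution(b) performs circular
    # correlation, which is what the invert_b flag enables.
    return np.concatenate(([v[0]], v[1:][::-1]))

rng = np.random.RandomState(0)
d = 128
a = rng.randn(d) / np.sqrt(d)  # roughly unit-length random vectors,
b = rng.randn(d) / np.sqrt(d)  # like randomly generated semantic pointers

bound = circ_conv(a, b)
recovered = circ_conv(bound, involution(b))

# `recovered` is a noisy version of `a`; the similarity improves as
# the dimensionality d grows.
similarity = np.dot(recovered, a) / (
    np.linalg.norm(recovered) * np.linalg.norm(a))
```

This also shows why the unit tests below compare similarities rather than expecting exact recovery: unbinding is inherently noisy.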
correlation instead of circular convolution.
input_magnitude : float
    Effective input magnitude for the multiplication.
    The actual input magnitude will be this value times sqrt(2)
It's not clear to me what this means, especially given the new product system. What do you mean by "actual input magnitude"? What multiplication? What is this for? What use case would there be for this parameter?
One possible solution here is just to remove the second line, since as far as I can tell it's both unclear and false. As I said elsewhere, I'd also be supportive of just removing this parameter altogether, since I can't think of a spa use case for it. But if the parameter is supposed to be here then it should be clear what it means.
> I'd also be supportive of just removing this parameter altogether, since I can't think of a spa use case for it.
What's the use case outside of SPA?
As for the general comment, I made the mistake of copying the doc string from product when I should have copied it from CircularConvolution. Will fix.
@Seanny123, you might want to address Terry's comments. It's not very clear what the actual magnitude is.
fixed
I'd looked at it briefly, but not given it a full review. So I changed the marking back. ;)
def test_run(Simulator, seed):
    vocab = spa.Vocabulary(16)
You have to pass a seed or rng to vocabulary to make the semantic pointers reproducible.
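The actual fix is along the lines of `spa.Vocabulary(16, rng=np.random.RandomState(seed))`. The reproducibility point, sketched with plain numpy (the pointer-generation function here is a stand-in for illustration, not Nengo's implementation):

```python
import numpy as np

def make_pointers(dimensions, n, rng):
    # Stand-in for a Vocabulary's random semantic pointer generation:
    # unit-length vectors drawn from the supplied rng.
    v = rng.randn(n, dimensions)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Without a fixed rng every run yields different pointers, making the
# test flaky; seeding the rng makes the pointers reproducible.
first = make_pointers(16, 3, np.random.RandomState(42))
second = make_pointers(16, 3, np.random.RandomState(42))
```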
You might not need to specify a vocab at all. This seems like you could use the default vocabs provided by SPA. (You're overwriting the `vocab` variable in line 37 anyways.)
@s72sue this pull request is important for our tutorial, would you mind taking a shot at reviewing it? If you have any questions about how it works or how SPA works, I would be happy to answer them. For reference, a similar pull request is #850. Note that the unit tests are failing for this pull request, however this is due to #875 and thus can be safely ignored.
Force-pushed from d93b8e5 to 77176be
@s72sue I fixed the tests now, so it's super ready for review now! 😄
Okay sounds good, I will review it today.
super(Bind, self).__init__(label, seed, add_to_container)
if vocab is None:
    # use the default vocab for this number of dimensions
    vocab = dimensions
I am not sure why you are putting the dimensions in vocab. Shouldn't you use the dimensions to get the default vocab ("vocab = default_vocab" and not "vocab = dimensions")? I am assuming that vocab is supposed to be a Vocabulary object.
Also, what happens if the vocab is not None, and the dimensions of the vocab are not equal to the dimensions passed in as the argument? Shouldn't we raise an error in that case?
The `vocab = dimensions` thing is mostly my fault, and is done in all the other `spa.Module`s. Basically, if you set the vocab to be an integer, then the `spa.SPA` system kicks in and automatically looks up the default vocab for that dimensionality.
So, it's ugly and badly documented, but it's consistent with the other modules. It might make sense to make a separate issue to take another look at this and either document it better or replace it with something a bit more sane. Fortunately, it's not a user-facing issue....
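The convention being described can be sketched roughly as follows (the function and variable names here are hypothetical illustrations, not Nengo's actual API):

```python
def resolve_vocab(vocab, dimensions, default_vocabs):
    """Sketch of the spa.SPA convention: an integer stored in `vocab`
    is a request for the default vocabulary of that dimensionality."""
    if vocab is None:
        # The module left vocab unset, so defer to the default
        # vocabulary for the requested number of dimensions.
        vocab = dimensions
    if isinstance(vocab, int):
        # spa.SPA resolves the integer to a shared default vocabulary,
        # creating it on first use (placeholder object here).
        if vocab not in default_vocabs:
            default_vocabs[vocab] = "Vocabulary(%d)" % vocab
        vocab = default_vocabs[vocab]
    return vocab
```

The key property is that all modules asking for the same dimensionality share one default vocabulary, while an explicitly passed vocab is used as-is.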
I see, I just saw the logic in `__setattr__` of the `spa.SPA` system where it looks for the default vocab for the specified dimensionality. It's not very obvious and I agree that it needs to be documented better.
I made an issue to discuss this #886
@Seanny123, just realized you didn't address my comment above regarding what happens if the vocab is not None, and the dimensions of the vocab are not equal to the dimensions passed in as the argument. Shouldn't we raise an error in that case?
If you pass in a vocabulary that doesn't match the dimensions you're using, the code throws an error about a bad connection. This isn't ideal, but since this issue doesn't just affect `bind.py`, but all of the spa modules, I moved it to another issue.
I think it makes sense to throw that error in bind.py as well, just as in all other modules. I don't think it's too vague, it's just missing from bind.py. What do you think would be more ideal?
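The validation being asked for amounts to something like the sketch below (simplified; the real Nengo code raises its own exception type, and the message wording here is illustrative):

```python
def check_vocab_dimensions(vocab_dimensions, dimensions):
    # Reject a vocabulary whose dimensionality disagrees with the
    # module's `dimensions` argument up front, instead of failing
    # later with an opaque bad-connection error.
    if vocab_dimensions != dimensions:
        raise ValueError(
            "Dimensionality of given vocabulary (%d) does not match "
            "dimensionality of module (%d)"
            % (vocab_dimensions, dimensions))
```

Checking at construction time gives the user an error that names the actual mistake, which is the point of the review comment.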
Aw jeez, I thought it was missing from the other modules, but it's totally not. I'll fix it now.
Fixed
assert res[100] < 0.45
# significantly lower threshold than usual because of the noise
# involved in circular convolution
assert res[199] > 0.6
I am wondering where the numbers 0.45 and 0.6 come from and whether these conditions are guaranteed?
If not, we could even use `res[100] < res[199]`.
Aw jeez, you're totally right. Those numbers were picked randomly and aren't informative of the network. I should be using the same test as the circular convolution network. I'll make the switch.
Fixed. Anything else that needs to be changed?
LGTM
This looks good to me too. There will have to be changes when we make our SPA I/O stuff a bit more consistent and sane, but this looks right to me now, and useful to have.
@tbekolay I think this is ready for merging into master (and I've forgotten the right way to do that). Should there be a changelog entry, as this adds a new feature (a new SPA module)?
Yep, definitely add a changelog entry (under "Improvements")
Added changelog entry
Called `spa.Bind`.
Resolves #847 and #452.