
Added wrapper around bind.py #849

Merged
merged 1 commit into from Oct 29, 2015

Conversation

Seanny123
Contributor

Resolves #847 and #452.

@Seanny123 Seanny123 added this to the 2.1.0 release milestone Sep 18, 2015

self.output = nengo.Node(size_in=1, label='output')

self.inputs = dict(A=(self.cc.A, vocab), B=(self.cc.B, vocab))
Member

There's no guarantee that the vocabs for A, B, and the output will be the same.

Contributor Author

So, should I enable arguments for both? What about the output? Should that have its own vocab as well?

Member

Delete? Um. I think having different vocabs would be a better idea. 😄

Contributor Author

See updated note. I commented on the wrong line.

Member

I would have vocabs for each of the inputs and outputs. Although, I can see the argument for having the same vocab for both inputs, and a different vocab for the output.

@tcstewar
Contributor

Hmm... I'm not sure there should be an input_magnitude parameter. spa modules are meant to be used with semantic pointers (thus the name), and so all of the currently existing modules assume the input_magnitude is 1.

@tcstewar
Contributor

I'm also not sure there should be the ability to set different Vocabularies. What should the system do if you use different Vocabularies at A, B, and the output? Should it do a conversion between them? If not, what's the use case you're envisioning? If so, what conversion?

I'd just have a single Vocab that can be set for all three....

@Seanny123
Contributor Author

I'm not sure there should be an input_magnitude parameter

I left input_magnitude because it exists on the spa.Compare network; should it be removed from spa.Compare as well?

I'd just have a single Vocab that can be set for all three....

I don't think I understand what the purpose of a Vocab is anymore... The documentation doesn't really cover translation between Vocabs and the other use cases you folks seem to be talking about. Who should I talk to about this confusion?

@Seanny123
Contributor Author

Okay, I talked to Jan about this and now I understand why translation from a lower-dimensional Vocab to a higher-dimensional Vocab would be desirable. However, I'm uncertain about translation between two Vocabs of the same dimensionality. Can someone give me a scenario where this is desirable, and why this translation should happen in a spa.Bind() network instead of on its connection out to another spa module, which is what happens with every other module I've seen?
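For context, one way to realize the kind of translation between vocabularies being discussed is a linear map built from outer products of corresponding pointers. This is purely an illustrative numpy sketch, not the nengo API (all names here are hypothetical):

```python
import numpy as np

def translation_matrix(src_pointers, dst_pointers):
    """Build a linear map from a source vocab to a destination vocab.

    src_pointers / dst_pointers: dicts mapping key -> unit vector.
    Summing outer products of corresponding pointers gives a transform
    that maps each source pointer (approximately) onto the matching
    destination pointer, even across dimensionalities.
    """
    dst_dim = len(next(iter(dst_pointers.values())))
    src_dim = len(next(iter(src_pointers.values())))
    T = np.zeros((dst_dim, src_dim))
    for key, sv in src_pointers.items():
        T += np.outer(dst_pointers[key], sv)
    return T

rng = np.random.RandomState(0)

def unit(d):
    v = rng.randn(d)
    return v / np.linalg.norm(v)

# A 16-dimensional source vocab and a 64-dimensional destination vocab
src = {k: unit(16) for k in "AB"}
dst = {k: unit(64) for k in "AB"}
T = translation_matrix(src, dst)

# The image of src["A"] should be closer to dst["A"] than to dst["B"]
sims = [np.dot(T @ src["A"], dst[k]) for k in "AB"]
```

Whether such a transform belongs inside a module or on the connection out of it is exactly the design question raised above.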

@tcstewar
Contributor

I left input_magnitude because it exists on the spa.Compare network, should it be removed from spa.Compare as well?

Hmm... I see what you mean. I guess it's not too bad to have it in there, although I really can't think of any situation where I'd want to use the parameter, and I don't find the docstring all that helpful (in both Compare and Bind). So I'd lean to get rid of them both, but I'm on a bit of a parameter minimization kick lately. ;) So, I'm fine either way with having input_magnitude here, but at some point I'd think about a separate PR to get rid of them (but that'd be a longer discussion, I think).

@Seanny123
Contributor Author

Given that Terry's already looked over this, I've gone and marked it as "needs merge".

correlation instead of circular convolution.
input_magnitude : float
Effective input magnitude for the multiplication.
The actual input magnitude will be this value times sqrt(2)
Contributor

It's not clear to me what this means, especially given the new product system. What do you mean by "actual input magnitude"? What multiplication? What is this for? What use case would there be for this parameter?

One possible solution here is just to remove the second line, since as far as I can tell it's both unclear and false. As I said elsewhere, I'd also be supportive of just removing this parameter altogether, since I can't think of a spa use case for it. But if the parameter is supposed to be here then it should be clear what it means.

Contributor Author

I'd also be supportive of just removing this parameter altogether, since I can't think of a spa use case for it.

What's the use case outside of SPA?

As for the general comment, I made the mistake of copying the docstring from Product when I should have copied it from CircularConvolution. Will fix.

Contributor

@Seanny123, you might want to address Terry's comments. It's not very clear what the actual magnitude is.

Contributor Author

Fixed.

@tcstewar
Contributor

Given that Terry's already looked over this, I've gone and marked it as "needs merge".

I'd looked at it briefly, but not given it a full review. So I changed the marking back. ;)



def test_run(Simulator, seed):
vocab = spa.Vocabulary(16)
Collaborator

You have to pass a seed or rng to vocabulary to make the semantic pointers reproducible.

Collaborator

You might not need to specify a vocab at all. This seems like you could use the default vocabs provided by SPA. (You're overwriting the vocab variable in line 37 anyways.)

@Seanny123
Contributor Author

@s72sue this pull request is important for our tutorial, would you mind taking a shot at reviewing it? If you have any questions about how it works or how SPA works, I would be happy to answer them. For reference, a similar pull request is #850.

Note that the unit tests are failing for this pull request; however, this is due to #875 and thus can be safely ignored.

@Seanny123 Seanny123 force-pushed the spa-bind branch 5 times, most recently from d93b8e5 to 77176be on October 25, 2015 01:16
@Seanny123
Contributor Author

@s72sue I fixed the tests now, so it's super ready for review now! 😄

@s72sue
Contributor

s72sue commented Oct 26, 2015

Okay sounds good, I will review it today.


super(Bind, self).__init__(label, seed, add_to_container)
if vocab is None:
# use the default vocab for this number of dimensions
vocab = dimensions
Contributor

I am not sure why you are putting the dimensions in vocab. Shouldn't you use the dimensions to get the default vocab ("vocab = default_vocab" and not "vocab = dimensions")? I am assuming that vocab is supposed to be a Vocabulary object.

Contributor

Also, what happens if the vocab is not None and the dimensions of the vocab are not equal to the dimensions passed in as the argument? Shouldn't we raise an error in that case?

Contributor

The vocab = dimensions thing is mostly my fault, and is done in all the other spa.Modules. Basically, if you set the vocab to be an integer, then the spa.SPA system kicks in and automatically looks up the default vocab for that dimensionality.

So, it's ugly and badly documented, but it's consistent with the other modules. It might make sense to make a separate issue to take another look at this and either document it better or replace it with something a bit more sane. Fortunately, it's not a user-facing issue....
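A minimal sketch of the convention described here, with hypothetical names (the real lookup lives inside spa.SPA, not in a free function):

```python
# Shared defaults: one Vocabulary per dimensionality, created lazily.
_default_vocabs = {}

class Vocab:
    """Stand-in for spa.Vocabulary; only tracks its dimensionality."""
    def __init__(self, dimensions):
        self.dimensions = dimensions

def resolve_vocab(vocab):
    """If given an int, return the shared default Vocab for that
    dimensionality, creating it on first use; otherwise pass the
    Vocab object through unchanged."""
    if isinstance(vocab, int):
        if vocab not in _default_vocabs:
            _default_vocabs[vocab] = Vocab(vocab)
        return _default_vocabs[vocab]
    return vocab

v16a = resolve_vocab(16)
v16b = resolve_vocab(16)  # same object: defaults are shared per dimension
```

The point is that `vocab = dimensions` is a sentinel: the integer is later swapped for the shared default Vocabulary of that size.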

Contributor

I see, I just saw the logic in __setattr__ of the spa.SPA system where it looks for the default vocab for the specified dimensionality. It's not very obvious and I agree that it needs to be documented better.


Contributor Author

I made an issue to discuss this #886

Contributor

@Seanny123, just realized you didn't address my comment above regarding what happens if the vocab is not None and the dimensions of the vocab are not equal to the dimensions passed in as the argument. Shouldn't we raise an error in that case?

Contributor Author

If you pass in a vocabulary that doesn't match the dimensions you're using the code throws an error about a bad connection. This isn't ideal, but since this issue doesn't just affect bind.py, but all of the spa modules, I moved it to another issue.

Contributor

I think it makes sense to throw that error in bind.py as well, just as in all the other modules. I don't think it's too vague; it's just missing from bind.py. What do you think would be more ideal?
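The check being discussed could look something like this sketch (hypothetical names; the actual nengo modules raise their own validation errors):

```python
class Vocab:
    """Stand-in for spa.Vocabulary; only tracks its dimensionality."""
    def __init__(self, dimensions):
        self.dimensions = dimensions

def check_vocab(vocab, dimensions):
    """Fail early with a clear message, instead of failing later
    on a mismatched Connection deep inside the network build."""
    if vocab.dimensions != dimensions:
        raise ValueError(
            "Vocabulary has %d dimensions, but the module was given "
            "dimensions=%d" % (vocab.dimensions, dimensions))

check_vocab(Vocab(16), 16)  # matching dimensions: no error
```

The advantage over the status quo is that the user sees a message about the vocab/dimensions mismatch rather than a generic bad-connection error.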

Contributor Author

Aw jeez, I thought it was missing from the other modules, but it's totally not. I'll fix it now.

Contributor Author

Fixed

assert res[100] < 0.45
# significantly lower threshold than usual because of the noise
# involved in circular convolution
assert res[199] > 0.6
Contributor

I am wondering where the numbers 0.45 and 0.6 come from and whether these conditions are guaranteed?
If not, we could even use res[100] < res[199]

Contributor Author

Aw jeez, you're totally right. Those numbers were picked randomly and aren't informative of the network. I should be using the same test as the circular convolution network. I'll make the switch.
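The circular-convolution-style test alluded to here can be sketched in plain numpy (illustrative only; the real test drives a nengo network and probes its output):

```python
import numpy as np

def circconv(a, b):
    """Circular convolution (the binding operation) via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.RandomState(1)
d = 64
a = rng.randn(d)
a /= np.linalg.norm(a)
b = rng.randn(d)
b /= np.linalg.norm(b)

ideal = circconv(a, b)
noisy = ideal + 0.1 * rng.randn(d)  # stand-in for a noisy network output

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

sim_target = cosine(noisy, ideal)  # should be high
sim_input = cosine(noisy, a)       # should be near zero
```

Comparing the output's similarity to the analytic convolution against its similarity to the raw input is more informative than hand-picked absolute thresholds like 0.45 and 0.6.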

Contributor Author

Fixed. Anything else that needs to be changed?

@s72sue
Contributor

s72sue commented Oct 26, 2015

LGTM

@tcstewar
Contributor

This looks good to me too. There will have to be changes when we make our SPA I/O stuff a bit more consistent and sane, but this looks right to me now, and useful to have.

@tcstewar
Contributor

@tbekolay I think this is ready for merging into master (and I've forgotten the right way to do that). Should there be a changelog entry, as this adds a new feature (a new SPA module)?

@tbekolay
Member

Yep, definitely add a changelog entry (under "Improvements")

@Seanny123
Contributor Author

Added changelog entry

@hunse hunse merged commit a3200de into master Oct 29, 2015
@hunse hunse deleted the spa-bind branch October 29, 2015 20:03