Support any synapse for learning rules #1095

Merged
merged 1 commit into master from pes-synapse on Apr 13, 2018

Conversation

@arvoelke (Contributor) commented Jun 15, 2016

Description:

  • Replaces pre_tau, post_tau, and theta_tau with pre_synapse, post_synapse, and theta_synapse respectively on all learning rules. This supports the use of arbitrary synapse models on the activities used for learning.
  • Makes the learning_rate parameter always first in the list for consistency.
  • Fixes theta_tau to only do one level of filtering (used to be filtered twice).

Motivation and context:
We might not always want to filter the activities with a Lowpass. This allows arbitrary filters for learning (e.g. Alpha). I found this useful for RL in a dynamic context (the filter defines a kernel, which determines the state to update with respect to the reward). This also incidentally fixes a bug (no issue was filed) where we couldn't pass pre_tau=None or pre_tau=0 as a parameter.
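As a minimal sketch of what the new parameters allow (the values here are illustrative; the number and None shorthands follow from the repr output and the bug fix mentioned above, not from code in this description):

import nengo

# Any Synapse instance can now shape the activity trace used for learning:
pes_alpha = nengo.PES(learning_rate=1e-4, pre_synapse=nengo.Alpha(0.01))

# A number is still shorthand for Lowpass(tau=number) ...
pes_lowpass = nengo.PES(pre_synapse=0.005)

# ... and None should disable filtering of the presynaptic activities
# (the pre_tau=None case that previously raised an error).
pes_unfiltered = nengo.PES(pre_synapse=None)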

How has this been tested?
A test was added for PES using pre_synapse=Alpha(...), which checks that the probed activities from the learning rule are identical to the alpha-filtered neural activities. The other learning rules use the same code paths (i.e. build_or_passthrough), which are also tested with Lowpass synapses.
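A rough sketch of that kind of test (the network layout, seed, and the one-timestep alignment are assumptions here, not the PR's exact code):

import numpy as np
import nengo

synapse = nengo.Alpha(0.005)
with nengo.Network(seed=0) as model:
    stim = nengo.Node(lambda t: np.sin(8 * t))
    ens = nengo.Ensemble(50, 1)
    nengo.Connection(stim, ens, synapse=None)
    conn = nengo.Connection(ens, ens, learning_rule_type=nengo.PES(pre_synapse=synapse))
    p_neurons = nengo.Probe(ens.neurons, synapse=synapse)   # alpha-filtered activities
    p_rule = nengo.Probe(conn.learning_rule, "activities")  # what the rule actually uses

with nengo.Simulator(model) as sim:
    sim.run(0.5)

# The two signals should agree, possibly up to a one-step shift from operator ordering:
assert np.allclose(sim.data[p_neurons][1:], sim.data[p_rule][:-1])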

Where should a reviewer start?
Start in learning_rules.py, in particular the PES class. Then move on to builder/learning_rules.py.

How long should this take to review?

  • Average (neither quick nor lengthy)

Types of changes:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that causes existing functionality to change)

Checklist:

  • I have read the CONTRIBUTING.rst document.
  • I have updated the documentation accordingly.
  • I have included a changelog entry.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

Still to do:

  • Add this URL to ObsoleteParam.
  • Add tests.
  • Add documenting example.
  • Apply changes to PES.
  • Apply changes to BCM.
  • Apply changes to Oja.
  • Apply changes to Voja.

@arvoelke added this to the 2.2.0 release milestone on Jun 15, 2016
@mention-bot:

By analyzing the blame information on this pull request, we identified @tbekolay to be a potential reviewer

@tbekolay (Member):

Awesome, this is definitely something I've wanted to do for a while!

nengo/params.py Outdated

def equal(self, instance_a, instance_b):
    return True  # otherwise __get__ throws an error

Collaborator:

Just wondering out loud whether there's a better way to do __get__ that avoids having to do this. Also, I would bust these out into a separate commit since they're not directly related to the main change.

@arvoelke (author):

Not sure... but moved these changes to a separate commit.

@hunse (Collaborator) commented Jun 15, 2016

Yeah, this is a great thing to do.

@tbekolay removed this from the 2.2.0 release milestone on Jun 27, 2016
@arvoelke force-pushed the pes-synapse branch 3 times, most recently from 2b1c223 to 5a19d0f on July 11, 2016 05:05
@arvoelke (author):

Updated the BCM/Oja/Voja rules and the changelog, made the docstrings consistent, made use of the SupportDefaultsMixin for all default params, and made the parameter order consistent. Still to do: tests, and an example using the alpha synapse.

@tbekolay Why was this removed from the 2.2.0 release milestone?

@arvoelke (author) commented Jul 11, 2016

Also if we are happy with the way this uses the param system, then the same changes should be made to synapses.py and processes.py. In particular, the base Process should inherit SupportDefaultsMixin to fully harness the param system and its defaults. Does this sound good?


@property
def _argreprs(self):
    args = []
    if self.learning_rate != 1e-4:
        args.append("learning_rate=%g" % self.learning_rate)
    if self.pre_tau != 0.005:
        args.append("pre_tau=%f" % self.pre_tau)
    if self.pre_synapse != Lowpass(tau=0.005):
@arvoelke (author):

How can I access a param's default?

Member:

PES.pre_synapse.default
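
For instance, the repr check could compare against the class-level default instead of a hard-coded value; a sketch of the idea (not necessarily the exact code that landed in this PR):

@property
def _argreprs(self):
    args = []
    if self.learning_rate != PES.learning_rate.default:
        args.append("learning_rate=%g" % self.learning_rate)
    if self.pre_synapse != PES.pre_synapse.default:
        args.append("pre_synapse=%r" % self.pre_synapse)
    return args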

@tbekolay (Member):

@tbekolay Why was this removed from the 2.2.0 release milestone?

See the discussion in #1080, which led to #1114.

@arvoelke (author):

Okay this is now ready for review.

To check that my improvements to __repr__ did not mess anything up, I verified that this snippet from examples/usage/strings.ipynb...

print(nengo.PES())
print(nengo.PES(learning_rate=1e-6, pre_synapse=0.01))
print(nengo.BCM())
print(nengo.BCM(learning_rate=1e-8, pre_synapse=0.01, post_synapse=0.005, theta_synapse=10.0))
print(nengo.Oja())
print(nengo.Oja(learning_rate=1e-5, pre_synapse=0.01, post_synapse=0.005, beta=0.5))

... displays the right thing:

PES()
PES(learning_rate=1e-06, pre_synapse=Lowpass(0.01))
BCM()
BCM(learning_rate=1e-08, pre_synapse=Lowpass(0.01), post_synapse=Lowpass(0.005), theta_synapse=Lowpass(10.0))
Oja()
Oja(learning_rate=1e-05, pre_synapse=Lowpass(0.01), post_synapse=Lowpass(0.005), beta=0.5)

Perhaps that whole notebook should be made into a series of unit tests?

Also an important note:

  • The theta signal for BCM used to be filtered twice. I don't think we want this. It's now filtered once. If you want it filtered twice, you can use a double-exponential synapse (see the sketch below). Is this too subtle to include in the changelog?
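
If the old double filtering is ever wanted back, and assuming theta used to be passed twice through Lowpass(theta_tau) with the same tau (the old default theta_tau is assumed to be 1.0 here), then two cascaded Lowpass(tau) filters have transfer function 1/(tau*s + 1)^2, which is exactly the alpha synapse:

import nengo

# Explicitly request the equivalent of double Lowpass filtering on theta:
bcm = nengo.BCM(theta_synapse=nengo.Alpha(1.0))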

@tbekolay (Member):

I haven't done a detailed review of this yet, but one discussion point from a quick look at the diff is that the API changes in this PR are backwards-incompatible, meaning that we'd need to do a 3.0.0 release when this is merged. Since we allow numbers as a shortcut for Lowpass(tau=number), what do you think about making pre_tau etc deprecated parameters that just set pre_synapse etc for now, but print a warning and will be removed in 3.0.0?

@arvoelke (author):

Maybe we can save this for 3.0.0 and avoid the duplication of parameters altogether? For me, it's more pain than it's worth to maintain these two code paths.

@tbekolay (Member):

The deprecated parameters can be like this:

# (assumes `import warnings` at module level)
@property
def pre_tau(self):
    warnings.warn("Deprecated since 2.2.0; use pre_synapse instead")
    return self.pre_synapse

@pre_tau.setter
def pre_tau(self, val):
    warnings.warn("Deprecated since 2.2.0; use pre_synapse instead")
    self.pre_synapse = val

But either way, up to you!

@arvoelke (author):

Updated into a form ready for 3.0.0. Also looping in @neworderofjamie since this will affect SpiNNaker if it gets in.

"print(nengo.Oja())\n",
"print(nengo.Oja(pre_tau=0.01, post_tau=0.005, beta=0.5, learning_rate=1e-5))"
"print(nengo.Oja(learning_rate=1e-5, pre_synapse=0.01, post_synapse=0.005, beta=0.5))"
@arvoelke (author):

TODO: Add one for Voja? Also consider making these into unit tests?

@neworderofjamie:

Thanks @arvoelke. I don't think we're actually supporting any kind of filtering in our PES or Voja implementations at the moment, so perhaps we should implement that in a more generic way using @mundya's code for the synapses.

@tbekolay removed this from the 3.0.0 release milestone on Aug 8, 2016
@tcstewar (Contributor) commented Jan 11, 2017

So, @arvoelke just let me know this branch exists, and it does exactly what @bjkomer and I were wanting with #1256 . We'll try it out! Thank you!

Also, if it is possible for this to be done in a backwards-compatible manner with a few deprecation warnings (and no large code duplication), then it would be great to do so, even if we do put off merging it until 3.0.0. I'll take a look and see whether @tbekolay's suggestion of how to do it could end up being reasonable...

@arvoelke (author) commented Jan 11, 2017

If you go to any of the previous commits (e.g., e41f7e3) then it will be backwards-compatible. @tbekolay's suggestion would make it a bit less muddy, but still the same in principle.

@drasmuss (Member) commented Jan 2, 2018

Rebased this to master and fixed things up. I also made it so that this change is backwards compatible (as suggested by @tbekolay), so no need to wait for a major release. Should be ready for review.

    return args
    return (('learning_rate', BCM.learning_rate.default),
            ('pre_synapse', BCM.pre_synapse.default),
            ('post_synapse', self.pre_synapse),
@Seanny123 (Contributor) commented Jan 3, 2018:

I think this is a typo and should be ('post_synapse', BCM.post_synapse.default) at least for the sake of consistency?

Edit: I noticed this happens in all the instances of the learning rules. Is there a reason for this?

Member:

The post_synapse defaults to the value of pre_synapse, so that's the value we want to check against (to see if the user set it to something other than the default).
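
To illustrate with a toy stand-in (assumed names and defaults, not the real constructor):

from nengo.synapses import Lowpass

class ExampleRule:
    def __init__(self, pre_synapse=Lowpass(tau=0.005), post_synapse=None):
        self.pre_synapse = pre_synapse
        # post_synapse falls back to whatever pre_synapse is, so "left at the
        # default" means post_synapse == pre_synapse rather than a fixed value.
        self.post_synapse = pre_synapse if post_synapse is None else post_synapse

rule = ExampleRule()
assert rule.post_synapse == rule.pre_synapse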

@@ -20,7 +23,7 @@ def coerce(self, instance, size_in):
instance, size_in) # IntParam validation


class LearningRuleType(FrozenObject):
Collaborator:

So the idea behind FrozenObjects is that they're not supposed to support Defaults. I remember the idea behind this was so that FrozenObjects could be used in multiple places (e.g. the same learning rule type on multiple connections) or copied without issue. But I can't remember the exact details as to why having Defaults and being Frozen could not go together.

Part of the reason might be that FrozenObjects themselves are often used as defaults for other things. For example, the default for Ensemble.neuron_type is LIF(). If LIF had its own defaults for tau_rc and tau_ref, then changing model.config[nengo.LIF].tau_rc could change the default tau_rc for an ensemble, but also change that default in other places too. Things could get complicated or have unintended consequences.

For learning rules, that's less of an issue, since they're currently not used as defaults anywhere that I know of. My main point is just that I had designed FrozenObject to be separate from the Default system, so if we're bringing them together sometimes, I just want to make sure that that makes sense and doesn't cause any unforeseen problems.

@drasmuss (Member) commented Jan 8, 2018:

That's the reason all the parameters are marked as readonly, so that they can be used with FrozenObject (this is explicitly checked for in FrozenObject, so I think it is part of the intended/expected behaviour). I think having everything be readonly resolves most of those concerns (since you can't change the values of a FrozenObject after using it somewhere else). But we could add some unit tests, if there are any use cases you have in mind that you think might produce weird behaviour.
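
For reference, a rough sketch of the declaration pattern being discussed (parameter types and default values are assumptions, not copied from this PR):

from nengo.learning_rules import LearningRuleType
from nengo.params import NumberParam
from nengo.synapses import Lowpass, SynapseParam

class MyRule(LearningRuleType):
    # readonly=True lets these parameters live on a FrozenObject: the value
    # can come from the Default/config system at construction time, but it
    # cannot be mutated afterwards, so sharing one instance across several
    # connections stays safe.
    learning_rate = NumberParam("learning_rate", low=0, readonly=True, default=1e-4)
    pre_synapse = SynapseParam("pre_synapse", default=Lowpass(tau=0.005), readonly=True)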

@tbekolay (Member) left a review comment:

This LGTM! I made a few minor changes in fixup commits. Also note that the change to ObsoleteParam doesn't seem to be necessary anymore, so I removed it (there's a test for it in the history here, but that will be squashed in the merge).

If no objections, I'll squash and merge after lunch.

@arvoelke (author):

Could you please include a link to http://compneuro.uwaterloo.ca/publications/voelker2017c.html alongside the other tech report that is currently linked in the PES notebook? This gives useful information for understanding how different synapses on the error connection will affect the dynamics, and a heuristic for setting the learning rate. Thanks!

@tbekolay (Member):

Could you please include a link to http://compneuro.uwaterloo.ca/publications/voelker2017c.html alongside the other tech report that is currently linked in the PES notebook?

Absolutely, will do 👍

Deprecates learning rule tau parameters.

Co-authored-by: Daniel Rasmussen <daniel.rasmussen@appliedbrainresearch.com>
@tbekolay merged commit a505c5e into master on Apr 13, 2018
@tbekolay deleted the pes-synapse branch on April 13, 2018 at 20:42