
Allow learning_rate=0 #1356

Merged
merged 1 commit into from Sep 13, 2017
Conversation

@drasmuss (Member)
Motivation and context:
Often when working with a model that learns, it is helpful to quickly disable learning by setting the learning rate to 0. Currently Nengo doesn't allow this; that isn't the end of the world, but it makes things a little awkward. I think there is pretty minimal downside to allowing it; it seems unlikely that users will accidentally set the learning rate to zero when they actually want learning.
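The use case can be illustrated with a minimal, self-contained sketch of a delta-rule weight update (this is a hypothetical stand-in, not Nengo's implementation; the function `delta_update` and its arguments are invented for illustration). With `learning_rate=0` the update is exactly a no-op, which is the "quickly disable learning" behavior this PR enables:

```python
def delta_update(weights, pre, error, learning_rate):
    """One step of a simple delta-rule update.

    weights: list of rows, one per output dimension
    pre: presynaptic activities, one per input dimension
    error: error signal, one per output dimension

    With learning_rate=0 every term added is 0.0, so the
    weights are returned unchanged -- learning is disabled.
    """
    return [
        [w + learning_rate * e * p for w, p in zip(row, pre)]
        for row, e in zip(weights, error)
    ]


weights = [[0.5, -0.2], [0.1, 0.3]]
pre = [1.0, 0.5]
error = [0.2, -0.1]

# learning_rate=0: weights are untouched (a baseline run)
assert delta_update(weights, pre, error, learning_rate=0) == weights

# learning_rate>0: weights move in the direction of the error
assert delta_update(weights, pre, error, learning_rate=0.1) != weights
```

This mirrors the experiment pattern mentioned below in review: sweep a list of learning rates, with 0 included as the no-learning baseline.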

How long should this take to review?

  • Quick (less than 40 lines changed or changes are straightforward)

Types of changes:

  • New feature (non-breaking change which adds functionality)

Checklist:

  • I have read the CONTRIBUTING.rst document.
  • [n/a] I have updated the documentation accordingly.
  • [n/a] I have included a changelog entry.
  • [n/a] I have added tests to cover my changes.
  • [n/a] All new and existing tests passed.

@studywolf (Collaborator)

This has come up for me a bunch of times; it would be very handy.

@jgosmann (Collaborator) left a comment

LGTM. Simulations are probably faster when the learning rule is deleted entirely, but that is definitely more awkward for quickly disabling learning.

@tbekolay (Member) left a comment

LGTM, I can't think of a non-stubborn reason not to allow this. It is especially useful when running an experiment with a bunch of different learning rates, including 0 to get a baseline error measurement.

I'll add a changelog entry in the merge, as this does affect end users.

4 participants