Learn synaptic time-constants via backpropagation #60
Yes, this would definitely be a cool thing to support, and it is very doable from a technical standpoint; it's basically just a matter of changing one flag from True to False. The main issue is a user-interface one: how to let the user easily control which parts of the model they want to be trainable. This is part of a larger issue, as there are lots of other parameters that we could in theory allow to be trained but currently don't (e.g., ...). Right now this is all controlled through the trainable system. This works fairly well for the set of parameters that are trainable right now, but it doesn't scale up that well. We could actually extend it fairly easily to support additional parameters.
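For reference, a minimal sketch of how that trainable system is used today (assuming the current nengo_dl configure_settings API), marking an ensemble's parameters as non-trainable while leaving connection weights trainable:

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # Expose the `trainable` config attribute on network objects
    # (assumes the nengo_dl.configure_settings(trainable=...) API).
    nengo_dl.configure_settings(trainable=None)

    ens = nengo.Ensemble(10, 1)
    conn = nengo.Connection(ens, ens)

    # Everything on `ens` (encoders, gains, biases) is excluded from training,
    # while the connection weights remain trainable.
    net.config[ens].trainable = False
```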
That's an interesting challenge from a design standpoint. One possibility could be to introduce an operator to mark specific values as either learnable or static. For the sake of having something to look at, let's say:

```python
from nengo_dl import L, S, Default

# Marking a transform as fixed (e.g., passthrough nodes)
nengo.Connection(..., transform=S(1))
# or, since 1 is the default:
nengo.Connection(..., transform=S(Default))

# Learning tau_rc while keeping tau_ref fixed (may not be possible right now?)
x = nengo.Ensemble(..., neuron_type=nengo.LIF(tau_rc=L(0.02), tau_ref=S(0.002)))

# Learning the neuron model, and the gains, but not the biases
x = nengo.Ensemble(..., neuron_type=L(Default), gain=L(Default), bias=S(Default))

# Marking a synapse as learnable
nengo.Connection(..., synapse=L(0.01))
```

This could further scale to learning only a subset of coefficients in a discretized transfer function, although the syntax starts to become unwieldy and the abstraction becomes somewhat leaky. But if this can be done in a way that keeps the builder extensible, then people could potentially roll their own solutions for these special use-cases.
Quick update: This feature request is made somewhat obsolete by the new keras_spiking.Lowpass layer, which learns the time-constant(s) (and initial state) of the lowpass filter for each dimension. There is also a trainable alpha filter in the same repo. The caveat is that this layer currently cannot be converted via the NengoDL converter if there is more than one time-constant in the layer. Related issue: nengo/nengo#1636.
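For illustration, a minimal Keras sketch of how such a layer could be dropped into a model; the exact constructor arguments (e.g., tau_initializer vs. tau, and the dt value) depend on the keras_spiking version, so treat them as assumptions:

```python
import tensorflow as tf
import keras_spiking

# Inputs are shaped (batch, timesteps, features); Lowpass filters along the
# time axis and learns one time-constant per feature dimension.
inp = tf.keras.Input(shape=(None, 4))
x = tf.keras.layers.Dense(8)(inp)
# NOTE: argument names assumed; 0.01 is only the initial value of the learned tau(s)
x = keras_spiking.Lowpass(tau_initializer=0.01, dt=0.001)(x)
out = tf.keras.layers.Dense(1)(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
# model.fit(...) then updates the filter time-constants via backpropagation,
# alongside the Dense weights.
```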
(Feature request) Since the synaptic operations are differentiable with respect to the coefficients in their difference equations, those coefficients can also be optimized via backpropagation.
After learning, the synaptic coefficients can be mapped back onto the time-constants of the synapse via the poles of the new discrete transfer function, as follows:
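For the common first-order lowpass synapse, a minimal sketch of that mapping, assuming the filter is discretized such that the pole of its discrete transfer function is exp(-dt / tau):

```python
import numpy as np

def pole_to_tau(pole, dt=0.001):
    """Map a learned discrete pole back onto a lowpass time-constant.

    Inverts pole = exp(-dt / tau), giving tau = -dt / log(pole).
    """
    return -dt / np.log(pole)

# Example: a pole of exp(-0.001 / 0.2) ~= 0.995 maps back to tau = 0.2 s
print(pole_to_tau(np.exp(-0.001 / 0.2)))  # -> 0.2
```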
This should be useful any time the data set has some temporal dynamics. These dynamics can be learned not only through the recurrent connections, but also through the dynamics of the synapses (which act like miniature recurrent connections).
As a contrived yet simple example, suppose our input data is all ones and the output data is a step response with some exponential time-constant. If our network is feed-forward with a specific time-constant on its synapse, then backpropagation could in theory minimize the MSE by optimizing that time-constant. However, nengo_dl is currently only able to reduce the error by scaling the static gain.

Note: if you set tau_actual == tau_ideal, then the MSE becomes zero, and so the optimal solution with this architecture is to modify the time-constant on the synapse.
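A small numpy sketch of that contrived example (the specific values and names such as tau_ideal/tau_actual are illustrative assumptions): it computes the lowpass step response for two time-constants and shows that, even with an optimal static gain, the MSE stays nonzero unless the time-constants match.

```python
import numpy as np

dt = 0.001
t = np.arange(0, 1, dt)

def lowpass_step_response(tau):
    # Discrete first-order lowpass driven by a constant input of 1 (a step
    # response), using the discrete pole a = exp(-dt / tau).
    a = np.exp(-dt / tau)
    y = np.zeros_like(t)
    for i in range(1, len(t)):
        y[i] = a * y[i - 1] + (1 - a) * 1.0
    return y

tau_ideal = 0.1     # time-constant that generated the target data
tau_actual = 0.005  # time-constant fixed on the feed-forward synapse
target = lowpass_step_response(tau_ideal)
response = lowpass_step_response(tau_actual)

# The best the optimizer can do without touching tau: scale the mismatched response.
gain = np.dot(response, target) / np.dot(response, response)  # least-squares gain
print("MSE with optimal gain :", np.mean((gain * response - target) ** 2))
print("MSE with matching tau :", np.mean((lowpass_step_response(tau_ideal) - target) ** 2))
```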