
Backpropagation engenders Hebbian learning #3

Open
clockwiser opened this issue Oct 6, 2019 · 1 comment

@clockwiser

commented Oct 6, 2019

This is a very interesting idea.

Think about it: from the point of view of a neuron that spikes, upon performing backpropagation, the neuron will backpropagate mostly to the inputs whose signals arrived most recently, hence to the signals that actually contributed to the firing. Why? Because older signals decay exponentially, so their gradients vanish towards zero. With SNNs, then, from the point of view of a neuron, the gradients are mostly transferred to the inputs that fired just before the neuron itself fired towards its output.
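A minimal PyTorch sketch of that decay argument (my own toy example, assuming a plain leaky integrator with a decay factor of 0.5, not this repo's exact model): the gradient of the potential at firing time with respect to an input from t steps ago shrinks like decay^t.

```python
import torch

# Hypothetical leaky integrator: the membrane potential decays exponentially,
# so older inputs contribute exponentially less -- and so do their gradients.
decay = 0.5
inputs = torch.ones(5, requires_grad=True)  # one input signal per time step

potential = torch.tensor(0.0)
for t in range(5):
    potential = decay * potential + inputs[t]

potential.backward()
print(inputs.grad)  # tensor([0.0625, 0.1250, 0.2500, 0.5000, 1.0000])
# The most recent input (t=4) receives the full gradient; older ones vanish.
```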

What is the gradient here?
Generally, the gradient comes from a loss that we try to minimize. But what is the loss here?

@guillaume-chevalier

Owner

commented Oct 7, 2019

Cool!

Generally speaking, I think the loss might be some kind of reward function related to pain or pleasure. Perhaps with some kind of GAN-like discriminator judging pleasure/pain, but used more like the reward in a Reinforcement Learning algorithm.

I also have another idea: it could perhaps make use of brain rhythms. E.g., upon completing each "rhythmic" cycle, neurons could compute some sort of difference between the last pass and the new pass of information, such that the network learns to autoencode information better and reduce the number of cycles needed for processing, something akin to Contrastive Divergence (CD) applied to autoencoders. This way, the learning is unsupervised and doesn't need a reward function.
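A rough, hypothetical sketch of that two-pass idea in PyTorch (the module shapes and the exact loss are my own assumptions, not code from this repo): run the same tiny autoencoder for two "cycles" and train on the disagreement between the passes, so no reward signal is needed.

```python
import torch
import torch.nn as nn

# Hypothetical two-cycle autoencoder; none of this is the repo's actual code.
encoder = nn.Linear(16, 8)
decoder = nn.Linear(8, 16)
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)

x = torch.randn(4, 16)  # a batch of incoming signals

first_pass = decoder(encoder(x))            # cycle 1: reconstruct the input
second_pass = decoder(encoder(first_pass))  # cycle 2: reprocess cycle 1's output

# Unsupervised "difference between passes" loss: successive cycles should
# agree, so the network settles in fewer cycles. No reward function involved.
loss = torch.mean((second_pass - first_pass.detach()) ** 2)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```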
