Backpropagation engenders Hebbian learning #3
This is a very interesting idea.
Think about it: from the point of view of a spiking neuron, when backpropagation is performed, the gradient flows mainly to the inputs whose signals were received most recently, i.e. the signals that actually contributed to the firing. Why? Because older signals decay exponentially, so their gradients vanish towards zero. With SNNs, the gradient is therefore mostly assigned to the inputs that fired just before the neuron itself fired.
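
To make the decay argument concrete, here is a minimal sketch (my own toy model, not anything from the original post) of a leaky-integrate neuron, showing how the backpropagated gradient to an input shrinks exponentially with how long before the spike that input arrived:

```python
import numpy as np

# Assumed toy model: a leaky-integrate neuron whose membrane potential is
#   v[t] = decay * v[t-1] + w * x[t]
# The gradient of v[T] with respect to the input received at time t is
#   dv[T]/dx[t] = w * decay**(T - t),
# so credit assigned by backprop shrinks exponentially with the age of
# the input: only inputs arriving shortly before the spike at time T
# receive a significant gradient.

decay = 0.8          # membrane decay per time step (assumed value)
T = 10               # time step at which the neuron spikes
w = 1.0              # single input weight, for illustration

for t in range(T + 1):
    grad = w * decay ** (T - t)   # dv[T]/dx[t]
    print(f"input at t={t:2d}  ->  gradient contribution {grad:.4f}")
```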
What is the gradient here?
Generally speaking, I think the loss might be some kind of reward function related to pain or pleasure. Perhaps even with some kind of GAN-style discriminator of pleasure/pain, although operating more like a Reinforcement Learning algorithm.
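
One concrete way such a pain/pleasure scalar could enter the update, assuming a simple three-factor (reward-modulated Hebbian) rule rather than a full backpropagated loss, might look like this sketch, where all names and values are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of a reward-modulated (three-factor) Hebbian update:
# a Hebbian pre/post coincidence term gated by a scalar reward that
# plays the role of "pleasure" (+) or "pain" (-).

rng = np.random.default_rng(0)

n_pre, n_post = 4, 2
w = rng.normal(scale=0.1, size=(n_post, n_pre))        # synaptic weights
eta = 0.01                                              # learning rate

pre_trace = rng.random(n_pre)                           # decayed record of recent presynaptic spikes
post_spike = rng.integers(0, 2, n_post).astype(float)   # which postsynaptic neurons fired
reward = +1.0                                           # scalar pleasure/pain signal (assumed)

# Eligibility: Hebbian coincidence of recent pre activity and post firing,
# then gated by the reward so only rewarded coincidences are reinforced.
eligibility = np.outer(post_spike, pre_trace)
w += eta * reward * eligibility
```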
Another idea is that it could make use of brain rhythms. For example, upon completing each "rhythmic" cycle, neurons could compute some sort of difference between the last pass and the new pass of information, so that the network learns to autoencode information better and reduces the number of cycles needed to process it, somewhat akin to Contrastive Divergence (CD) as used for energy-based models such as RBMs. That way, the learning is unsupervised and doesn't need a reward function.
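
As a very rough sketch of the rhythm idea (again just an assumed toy setup, not Contrastive Divergence itself): run the same input through two consecutive passes of a recurrent step, measure how much the representation still changes between passes, and nudge the weights to shrink that difference:

```python
import numpy as np

# Illustrative "two passes per rhythm" toy: if the second pass still
# differs a lot from the first, the network has not settled; reducing
# that mismatch is used here as an unsupervised learning signal.

rng = np.random.default_rng(0)

n = 8
W = rng.normal(scale=0.3, size=(n, n))   # recurrent weights (assumed)
x = rng.random(n)                        # input held fixed across the cycle

def step(h):
    """One 'rhythmic' pass: recurrent update driven by the fixed input."""
    return np.tanh(W @ h + x)

h1 = step(np.zeros(n))      # first pass of the cycle
h2 = step(h1)               # second pass of the cycle

# Cycle-to-cycle difference: large when the network has not yet settled.
mismatch = h2 - h1
loss = 0.5 * np.sum(mismatch ** 2)

# Gradient of the mismatch w.r.t. W through the second pass only
# (h1 treated as constant); purely illustrative, a real setup would
# differentiate through both passes.
eta = 0.01
W -= eta * np.outer(mismatch * (1 - h2 ** 2), h1)
print(f"cycle mismatch loss: {loss:.4f}")
```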