
gokererdogan/LearningOptimalSpikeBasedRepresentations


Learning Optimal Spike-based Representations

These scripts implement the spiking neural network model proposed in Bourdoukan R, Barrett DGT, Machens CK, Deneve S (2012). Learning optimal spike-based representations. Advances in Neural Information Processing Systems (NIPS) 25. Here is the abstract of the paper:

How do neural networks learn to represent information? Here, we address this question by assuming that neural networks seek to generate an optimal population representation for a fixed linear decoder. We define a loss function for the quality of the population read-out and derive the dynamical equations for both neurons and synapses from the requirement to minimize this loss. The dynamical equations yield a network of integrate-and-fire neurons undergoing Hebbian plasticity. We show that, through learning, initially regular and highly correlated spike trains evolve towards Poisson-distributed and independent spike trains with much lower firing rates. The learning rule drives the network into an asynchronous, balanced regime where all inputs to the network are represented optimally for the given decoder. We show that the network dynamics and synaptic plasticity jointly balance the excitation and inhibition received by each unit as tightly as possible and, in doing so, minimize the prediction error between the inputs and the decoded outputs. In turn, spikes are only signalled whenever this prediction error exceeds a certain value, thereby implementing a predictive coding scheme. Our work suggests that several of the features reported in cortical networks, such as the high trial-to-trial variability, the balance between excitation and inhibition, and spike-timing dependent plasticity, are simply signatures of an efficient, spike-based code.
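The core idea above — a neuron spikes only when the weighted prediction error between the input and the decoded estimate exceeds a threshold — can be sketched for the single-neuron case (the setting of fig1.m, which is MATLAB). The following is an illustrative Python sketch, not the repo's code; the parameter names (`w`, `leak`, `dt`), the unit decoder weight, and the Euler integration scheme are assumptions made for the example.

```python
import numpy as np

def simulate_single_neuron(x, w=1.0, dt=1e-3, leak=10.0):
    """Sketch of a single predictive-coding neuron tracking a signal x.

    The membrane potential is the weighted prediction error w*(x - xhat).
    A spike is fired only when it exceeds the threshold w^2/2, and each
    spike adds the decoder weight w to the running estimate xhat, which
    otherwise decays with the leak. Parameter names are illustrative.
    """
    xhat = 0.0
    spikes = np.zeros(len(x), dtype=bool)
    xhat_trace = np.zeros(len(x))
    threshold = w**2 / 2
    for t in range(len(x)):
        V = w * (x[t] - xhat)        # membrane potential = weighted prediction error
        if V > threshold:            # spike only when the error is large enough
            spikes[t] = True
            xhat += w                # each spike bumps the decoded estimate by w
        xhat += dt * leak * (-xhat)  # leaky decay of the estimate between spikes
        xhat_trace[t] = xhat
    return spikes, xhat_trace
```

For a constant input the estimate saw-tooths around the signal: it decays until the error crosses the threshold, a spike resets it above the signal, and the cycle repeats, which is the predictive coding scheme described in the abstract.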

This repo contains the following files:

  • fig1.m: Implements learning in the single-neuron case, as plotted in Fig. 1 of the paper.
  • fig1_multi.m: Implements learning in a multi-neuron homogeneous network, as plotted in Fig. 1 of the paper.
  • fig2.m: Implements learning in a heterogeneous network, as plotted in Fig. 2 of the paper.
  • analyze_network.m: Calculates Fano factors, the interspike interval distribution, and the coefficient of variation for the network.
  • compare_with_rate_code.m: Compares the network predictions to a Poisson rate-code model. Note that the optimal weights for the rate-code model are not estimated, so this may not be a fair comparison of the two models.
  • tuning_corr_cov.m: Estimates tuning curves in a heterogeneous network and calculates the response covariance and correlation matrices.
  • simulate_network.m: Simulates a heterogeneous network without learning; used by the other scripts to run the network.
  • fig: Folder containing the figures generated by these scripts.
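The spike-train statistics computed by analyze_network.m (a MATLAB script) can be illustrated with a short sketch. This Python version is an assumption-laden illustration, not the repo's code: the function name, the counting-window length, and the input format (a sorted array of spike times in seconds) are all choices made for the example. For a Poisson process both the Fano factor and the ISI coefficient of variation are close to 1, which is the signature the paper's learning rule drives the network towards.

```python
import numpy as np

def spike_train_statistics(spike_times, window=0.1, t_end=None):
    """Fano factor and ISI coefficient of variation for one spike train.

    Fano factor: variance/mean of spike counts in fixed windows.
    CV: std/mean of the interspike intervals (ISIs).
    """
    spike_times = np.asarray(spike_times, dtype=float)
    if t_end is None:
        t_end = spike_times[-1]
    # Spike counts in non-overlapping windows of the given length.
    edges = np.arange(0.0, t_end + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    fano = counts.var() / counts.mean() if counts.mean() > 0 else np.nan
    # Interspike intervals.
    isi = np.diff(spike_times)
    cv = isi.std() / isi.mean() if len(isi) > 0 and isi.mean() > 0 else np.nan
    return fano, cv
```

A perfectly regular spike train gives Fano factor and CV near 0, while an irregular (Poisson-like) train gives values near 1, so these two numbers summarize how the trains evolve from regular to Poisson-distributed during learning.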

About

Implementation of a spiking neural network
