
Synapy

Synapy is a Python implementation of a synaptic matrix. This project is part of an ongoing search for new neurobiologically inspired learning algorithms and computational techniques, and an exploration of alternatives to the current neural net/deep learning/backprop paradigm, which, despite being spectacularly successful in many respects, fails to capture the full spectrum of capabilities of neurobiological intelligence.

This work is inspired by, and based on, Dr. Arnold Trehub's work. For more information, see his book, The Cognitive Brain, and visit his web page: http://people.umass.edu/trehub/

Particularly, this project is based on his concept of the synaptic matrix. For more on the synaptic matrix, see: http://people.umass.edu/trehub/thecognitivebrain/chapter3.pdf

The synaptic matrix implemented in this project supports supervised learning scenarios. During the training phase, examples consisting of a single vector are given to the synaptic matrix, along with corresponding labels. After training, the synaptic matrix can be used to classify new inputs. The result of evaluating an input is an array of numbers, the largest of which corresponds to the predicted class. These resulting numbers can be interpreted as the relative spiking rates of the classification neurons.

Usage

A synaptic matrix requires a vector input of a specified length. Ideally, the vector consists of only ones and zeroes, though arbitrary floating point values are also acceptable. To use a synaptic matrix, it must first be initialized:

synaptic_matrix = SupervisedSynapticMatrix(9, b=1, c=2, k=10)

In the snippet above, a synaptic matrix is initialized that accepts input vectors of length 9. The synaptic matrix's hyperparameters are denoted by the variables b, c, and k. These are constants used while updating the weights of synapses. For more information about these constants, see Chapter 3 of Arnold Trehub's book The Cognitive Brain.

Next, it must be trained in a supervised fashion:

example1 = [0, 1, 0,
            0, 1, 0,
            0, 1, 0]

example2 = [1, 1, 1,
            1, 0, 1,
            1, 1, 1]

synaptic_matrix.train(example1, label="line")
synaptic_matrix.train(example2, label="box")

Labels can be of any type; they are not limited to strings.

The trained synaptic matrix can then be used to evaluate new inputs:

to_evaluate = [1, 1, 1,
               1, 0, 0,
               0, 1, 1]

synaptic_matrix.evaluate(to_evaluate)  # returns "box"

The relative spike frequencies generated for any given input can also be obtained:

to_evaluate = [1, 1, 1,
               1, 0, 0,
               0, 1, 1]

synaptic_matrix.relative_spike_frequencies(to_evaluate)  # returns {"line": 16, "box": 24}

These relative spike frequencies demonstrate the degree to which one class is favored over another.

This synaptic matrix implementation was evaluated against the MNIST data set, achieving 93% accuracy. This is far from the state-of-the-art values of >99% accuracy, but it represents (as far as I can tell) a novel neurobiologically inspired approach to machine learning. Additionally, there is no pre-processing of the images in the data set, nor is the data set augmented with distorted versions of the original images. State-of-the-art techniques often involve pre-processing the images and enlarging the data set with deformed copies of the original images.

The synaptic matrix distinguishes itself from neural nets (as they are currently implemented) principally by addressing two common difficulties encountered in learning problems: imbalanced data and insufficient data. Real-world data is, more often than not, imbalanced: a data set will often contain many examples of certain classes and few of others. The synaptic matrix should, in theory, be able to classify instances of a class it has seen few examples of just as well as it classifies instances of a class it has seen many more examples of. It should also, in theory, be able to learn from far fewer examples overall.

How does it Work?

A synaptic matrix is simply an m x n matrix, W. During training and evaluation, it expects an input column vector, x, with m rows; that is, the number of rows in the synaptic matrix equals the length of the input vector. Ideally, x consists of only zeroes and ones.

A synaptic matrix is first initialized by setting all the values to one:

W = J_{m,1}

Note that there is only a single column in the synaptic matrix at this point.

Learning

To have the synaptic matrix learn a new example, we take the ith example, x_i, and perform the following steps:

  1. Calculate the eligibility vector, E:

    E = x_i ∘ W_{·,i}  (where ∘ denotes the Hadamard, or element-wise, product, and W_{·,i} is the ith column of W)

  2. Calculate N, the number of eligible synapses:

    N = sum {E}

  3. Update the ith column of the synaptic matrix:

    W_{·,i} = b + E (c + k N⁻¹)

    The variables b, c, and k are hyperparameter constants and whole numbers, where b < c << k.

  4. Expand the synaptic matrix, by adding a new column, in preparation for any new examples:

    W = [W  J_{m,1}]

Each column in the synaptic matrix is, by convention, called a class cell. The column represents the cell's dendritic synaptic weights.

Note that once an example is learned, the index of the class cell must be associated with the label representing the class of the example. In practice, this means keeping track of an associative array of the labels of the classes that have been learned to the indices of the class cells that represent them.
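The learning steps above can be sketched in NumPy (a minimal illustration of the update rule, not the project's actual implementation; the function name and signature are my own):

```python
import numpy as np

def learn(W, x, i, b=1, c=2, k=10):
    """Apply the learning update to class cell i of the synaptic
    matrix W for the binary input vector x (a sketch of steps 1-4)."""
    E = x * W[:, i]                  # 1. eligibility vector (Hadamard product)
    N = E.sum()                      # 2. number of eligible synapses
    W[:, i] = b + E * (c + k / N)    # 3. weight update: b + E(c + k/N)
    W = np.hstack([W, np.ones((W.shape[0], 1))])  # 4. fresh all-ones column
    return W
```

With b=1, c=2, k=10 and the 9-element "line" example from the Usage section, the three eligible synapses each receive weight 1 + (2 + 10/3), while the ineligible ones stay at b = 1.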

Evaluating

During evaluation against an example, we simply want to find the class cell with the highest activation, or activity. The highest activity in the synaptic matrix is given by:

a_max = max {x^T W}

Here, a_max, the maximum activity in the synaptic matrix, is the maximum of the vector-matrix product of the transpose of x with W. The class cell with the highest activity is simply:

c_{a_max} = argmax {x^T W}

Where c_{a_max} represents the index of the class cell with the highest activity in the synaptic matrix. To determine the predicted label, we simply look up the label associated with c_{a_max} in the associative label-to-index array.
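In code, evaluation reduces to a single vector-matrix product followed by an argmax (again a sketch; the index-to-label dictionary stands in for the associative array described above):

```python
import numpy as np

def evaluate(W, x, index_to_label):
    """Predict a label: compute the activities x^T W and return the
    label of the class cell with the highest activity (sketch)."""
    activities = x @ W                      # one activity per class cell
    return index_to_label[int(np.argmax(activities))]
```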

Training

Training consists of the following very simple algorithm:

for each training example x_i:
    evaluate x_i against W, comparing the predicted label to the actual label
    if the prediction is wrong, learn x_i; otherwise, continue to the next example
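Putting the pieces together, the mistake-driven loop above might look like this in NumPy (a sketch under the assumption that each misclassified example is learned into the blank trailing column; the names are mine, not the project's API):

```python
import numpy as np

def fit(examples, labels, m, b=1, c=2, k=10):
    """Mistake-driven training loop (sketch): an example is learned
    only when the current matrix misclassifies it."""
    W = np.ones((m, 1))                  # start with one blank class cell
    index_to_label = {}
    for x, label in zip(examples, labels):
        x = np.asarray(x, dtype=float)
        predicted = index_to_label.get(int(np.argmax(x @ W)))
        if predicted == label:
            continue                     # already correct: skip learning
        i = W.shape[1] - 1               # learn into the blank column
        E = x * W[:, i]                  # eligibility vector
        N = E.sum()                      # number of eligible synapses
        W[:, i] = b + E * (c + k / N)    # weight update
        index_to_label[i] = label        # associate class cell with label
        W = np.hstack([W, np.ones((m, 1))])  # fresh column for next time
    return W, index_to_label
```

Training on the "line" and "box" examples from the Usage section yields one class cell per label plus one blank column awaiting the next novel example.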
