Explanation
===========

.. py:currentmodule:: nn

Infographic results are here: http://85.217.171.57:8097. Pick the
*"2020.11.26 AreaSequential assemblies"* experiment from the drop-down list.

A recurrent area cell
*********************

.. image:: images/area.png
   :width: 300

The building block of computation with assemblies [1]_ is a recurrent neural
network cell, called an *area*, whose forward pass is described in
:ref:`usage`. The output of such an area is a binary sparse vector, formed by
a winner-take-all competition. For example, if an input vector multiplied by
a weight matrix yields the vector :code:`z = [-3.2, 4.6, 0, 0.7, 1.9]`, then
:code:`kwta(z, k=2) = [0, 1, 0, 0, 1]`.
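For illustration, here is a minimal NumPy sketch of the k-winners-take-all
operation described above (this is an assumption of how it could look, not
the package's actual implementation):

```python
import numpy as np

def kwta(z, k):
    """k-winners-take-all: binary vector with 1s at the k largest entries of z."""
    z = np.asarray(z, dtype=float)
    winners = np.argsort(z)[-k:]       # indices of the k largest values
    y = np.zeros_like(z, dtype=int)
    y[winners] = 1
    return y

z = np.array([-3.2, 4.6, 0.0, 0.7, 1.9])
print(kwta(z, k=2))  # -> [0 1 0 0 1]
```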
K-winners-take-all
------------------

One of the properties of kWTA is that the inverse of kWTA is also kWTA, even
in the case of a random projection (when the multiplication matrix is random,
not learned). On the plot below, several images from the MNIST dataset are
shown on the left; their random projection-and-cut binary vector
:math:`\bm{y} = \text{kWTA}(\bm{Wx}, k_y)`, reshaped as a matrix, is shown in
the middle; and the restored
:math:`\tilde{\bm{x}} = \text{kWTA}(\bm{W^T y}, k_x)` is shown on the right.
The condition :math:`\text{dim}(\bm{y}) \gg \text{dim}(\bm{x})` must hold in
order to restore the input signal.

.. image:: images/kwta_inverse.png
   :width: 300

This example shows that a random projection-and-cut operation (multiplication
by a random matrix followed by kWTA) preserves enough information to
reconstruct the input signal.
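The round trip can be sketched in a few lines of NumPy, using a sparse random
binary vector in place of an MNIST digit; the dimensions and the simple
``kwta`` thresholding here are illustrative assumptions:

```python
import numpy as np

def kwta(z, k):
    y = np.zeros(len(z), dtype=int)
    y[np.argsort(z)[-k:]] = 1
    return y

rng = np.random.default_rng(0)
dim_x, dim_y = 64, 2048            # dim(y) >> dim(x) is required for recovery
k_x, k_y = 8, 100

x = np.zeros(dim_x, dtype=int)
x[rng.choice(dim_x, size=k_x, replace=False)] = 1   # sparse binary "image"

W = rng.standard_normal((dim_y, dim_x))
y = kwta(W @ x, k_y)               # random projection & cut
x_restored = kwta(W.T @ y, k_x)    # the inverse is again a kWTA

overlap = (x & x_restored).sum() / k_x
print(f"recovered {overlap:.0%} of the active input bits")
```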
How does the association work?
******************************

How do we associate information from two or more different modalities? For
example, how do we associate a picture of an elephant with the sound an
elephant makes?

Willshaw's model
----------------

Let's define the task in mathematical terms: let `x` and `y` denote the image
and the sound representation vectors of a signal, respectively. The simplest
way to associate `x` and `y` is to resort to a Hebbian-like learning rule.
Assuming both `x` and `y` are binary sparse vectors, we can construct the
weight matrix as an outer product of `x` and `y`. This technique is described
in [2]_ and implemented in :class:`AreaRNNWillshaw`.

The idea behind Willshaw's paper is based on the outer product property

.. math::

    (\bm{x} \otimes \bm{y}) \bm{y} = \bm{x} (\bm{y^T} \bm{y}) \propto \bm{x},

which naturally suggests the following update rule:

.. math::

    \begin{cases}
    \bm{W} = \bm{W} + \bm{x} \otimes \bm{y} \\
    \bm{W} = \Theta(\bm{W})
    \end{cases}

where :math:`\Theta(x) = 1 ~~ \text{if} ~~ x > 0`; otherwise, it's zero.
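A minimal NumPy sketch of this binary store-and-recall, under illustrative
assumptions (the vector sizes and the top-k recall step are not
:class:`AreaRNNWillshaw` itself):

```python
import numpy as np

rng = np.random.default_rng(1)
dim_x, dim_y, k = 200, 300, 10

def sparse_binary(dim, k):
    v = np.zeros(dim, dtype=int)
    v[rng.choice(dim, size=k, replace=False)] = 1
    return v

x = sparse_binary(dim_x, k)   # e.g. image assembly
y = sparse_binary(dim_y, k)   # e.g. sound assembly

# Willshaw update: accumulate the outer product, then clip weights to {0, 1}
W = np.zeros((dim_x, dim_y), dtype=int)
W = (W + np.outer(x, y) > 0).astype(int)

# Recall: presenting y retrieves x (keep the k most activated units)
z = W @ y
x_recalled = np.zeros(dim_x, dtype=int)
x_recalled[np.argsort(z)[-k:]] = 1
print((x_recalled == x).all())  # with a single stored pair, recall is exact
```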
Papadimitriou's model
---------------------

Willshaw's update mechanism has a limitation: the matrix :math:`\bm{W}` must
be initialized with zeros, which poses biological plausibility problems. To
alleviate this, we can use a third area `C` to indirectly associate the
parental areas `A` and `B`, as shown below.

.. image:: images/area_sequence.png

Area `A` encodes images, and area `B` encodes sound. The outputs of `A` and
`B` are projected onto area `C`, which forms a combined image-sound
representation. After several such projections (forward passes), the
assemblies `A-C` and `B-C` become more and more overlapping, significantly
more than by chance. This process is called *association* and is described in
[1]_. Following this example, once areas `A` and `B` become associated, the
sound an elephant makes will reconstruct a memory of elephant pictures stored
in `A` (and vice versa), assuming, of course, the presence of backward
connections from area `C` to the incoming areas, which is not covered in this
tutorial.
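As a rough illustration of why the overlap grows, here is a toy NumPy sketch
of co-stimulating `A` and `B` with Hebbian plasticity on the projections into
`C`; the multiplicative update rule and all constants are assumptions for the
sketch, not the dynamics implemented in this package:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, beta = 1000, 50, 0.1      # area size, winners per area, plasticity rate

def kwta(z, k):
    y = np.zeros(len(z), dtype=int)
    y[np.argsort(z)[-k:]] = 1
    return y

# fixed assemblies in the parental areas A and B
a = np.zeros(n, dtype=int)
a[rng.choice(n, k, replace=False)] = 1
b = np.zeros(n, dtype=int)
b[rng.choice(n, k, replace=False)] = 1

W_ac = rng.random((n, n))       # random plastic projections into C
W_bc = rng.random((n, n))

def overlap():
    """Overlap between the assemblies A and B evoke in C separately."""
    return (kwta(W_ac @ a, k) & kwta(W_bc @ b, k)).sum()

before = overlap()
for _ in range(20):             # co-stimulate A and B
    c = kwta(W_ac @ a + W_bc @ b, k)
    W_ac += beta * np.outer(c, a) * W_ac   # strengthen co-active synapses
    W_bc += beta * np.outer(c, b) * W_bc
after = overlap()
print(before, "->", after)      # grows well above chance (~k^2/n = 2.5)
```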
Input areas `A` and `B` can, of course, represent signals of the same
modality that come from different cortical areas or layers.

References
----------

.. [1] Papadimitriou, C. H., Vempala, S. S., Mitropolsky, D., Collins, M., &
   Maass, W. (2020). Brain computation by assemblies of neurons. Proceedings
   of the National Academy of Sciences.
.. [2] Willshaw, D. J., Buneman, O. P., & Longuet-Higgins, H. C. (1969).
   Non-holographic associative memory. Nature, 222(5197), 960-962.
Computation with Assemblies
===========================

.. automodule:: nn

PyTorch implementation of the `project` and `associate` operations [1]_.

.. toctree::
   :maxdepth: 1

   explanation
   usage

References
----------

.. [1] Papadimitriou, C. H., Vempala, S. S., Mitropolsky, D., Collins, M., &
   Maass, W. (2020). Brain computation by assemblies of neurons. Proceedings
   of the National Academy of Sciences.
.. _usage:

Usage
=====

.. automodule:: nn

Example
-------

Associate script:

.. code-block:: python

    from nn.areas import *
    from nn.samplers import sample_k_active
    from nn.simulate import Simulator

    N_NEURONS, K_ACTIVE = 1000, 50

    # input stimuli sizes and area output sizes
    n_stim_a, n_stim_b = N_NEURONS, N_NEURONS // 2
    na, nb, nc = N_NEURONS * 2, int(N_NEURONS * 1.5), N_NEURONS

    # two parental areas A and B, associated through area C
    area_A = AreaRNNHebb(N_NEURONS, out_features=na)
    area_B = AreaRNNHebb(N_NEURONS // 2, out_features=nb)
    area_C = AreaRNNHebb(na, nb, out_features=nc)
    area_AB = AreaStack(area_A, area_B)
    brain = AreaSequential(area_AB, area_C)

    # binary sparse stimuli with K_ACTIVE active neurons each
    stim_a = sample_k_active(n=n_stim_a, k=K_ACTIVE)
    stim_b = sample_k_active(n=n_stim_b, k=K_ACTIVE)
    stim_ab = (stim_a, stim_b)

    simulator = Simulator(model=brain, epoch_size=10)
    simulator.simulate(x_samples=[stim_ab])

More examples are in `nn/simulate.py
<https://github.com/dizcza/assemblies/blob/master/nn/simulate.py>`_.