
A Hopfield net is a recurrent neural network having synaptic connection pattern such that there is an underlying Lyapunov function for the activity dynamics. Started in any initial state, the state of the system evolves to a final state that is a (local) minimum of the Lyapunov function.



There are two popular forms of the model:

  • Binary neurons with discrete time, updated one at a time
<math>V_j(t+1) =
\begin{cases} 1, & \mbox{ if } \ \Sigma_k T_{jk}V_k(t) + I_j>0 \\ 0, & \mbox{ otherwise } \end{cases} </math>
  • Graded neurons with continuous time
<math>dx_j/dt = -x_j/\tau + \Sigma_k T_{jk}g(x_k) + I_j\ .</math>
Here,
  • <math>V_j</math> denotes activity of the <math>j</math>-th neuron.
  • <math>x_j</math> is the mean internal potential of the neuron.
  • <math>I_j</math> is direct input (e.g., sensory input or bias current) to the neuron.
  • <math>T_{jk}</math> is the strength of synaptic input from neuron <math>k</math> to neuron <math>j\ .</math>
  • <math>g</math> is a monotone function that converts internal potential into firing rate output of the neuron, i.e., <math>V_j=g(x_j)\ .</math>

Computers as dynamical systems

All real computers are dynamical systems that carry out computation through their change of state with time. A simple digital computer can be thought of as having a large number of binary storage registers. The state of the computer at a particular time is a long binary word. A computation is begun by setting the computer in an initial state determined by standard initialization + program + data. At each tick of the computer clock the state changes into another state, following a rule that is built in by the design of the machine. After a while the computer stops changing its state, and the answer to the computation is read out from a subset of the storage registers. A fragment of a two dimensional representation of the motion in state space is shown in <figref>Hopfield_Net_fig1.png</figref>.

Two-dimensional representation of motion in state space. Transitions to states outside this fragment are not indicated

The digital states are represented by small blue dots, the state transitions by the arrow lines, and final states (which have no transitions leading out of them) by large red dots.

Neural network computation or analog computation can similarly be described as state space flows but without the discretization of state variables and time. For general neural networks, the only way to understand what future state will evolve after a long time is by following the dynamics in detail.

Hopfield networks have a Lyapunov function <math>E</math> (or ‘energy function’) behind their dynamics which leads to an understanding of possible final states. The Lyapunov function decreases in a monotone fashion under the dynamics, and is bounded below. Computation in this system can now be thought of as being done by starting from an initial state from which a final state evolves by moving downhill on <math>E\ .</math> Because of the existence of an elementary Lyapunov function for the dynamics, the only possible asymptotic result is a state on an attractor. Hopfield networks have been shown to be capable of universal computation in the Turing sense.

Binary neurons

The original Hopfield net [1982] used model neurons with two values of activity, that can be taken as 0 and 1. The strength of the synaptic connection from neuron <math>k</math> to neuron <math>j</math> is described by <math>T_{jk}\ .</math> The state vector <math>V</math> of the network at a particular time <math>t</math> has components <math>V_j</math> describing the activity of neuron <math>j</math> at time <math>t\ .</math> The dynamics of the system are defined as follows:

  • Activity of neuron <math>j</math> is <math>V_j</math>
  • Strength of synaptic connection from neuron <math>k</math> to neuron <math>j</math> is <math>T_{jk}</math>
  • Direct input (e.g. sensory input or bias current) to neuron <math>j</math> is <math>I_j</math>
  • Overall input to neuron <math>j</math> is <math>x_j = \Sigma_kT_{jk}V_k + I_j</math>
Dynamical update of state:
  • Choose a neuron <math>p</math> at random.
  • If <math>x_p > 0</math> set <math>V_p = 1</math> else <math>V_p = 0\ .</math>
  • Iterate this process.
If there are no self-connections (i.e. <math>T_{kk} = 0</math> for all <math>k</math>) and the connections are symmetric (<math>T_{jk}=T_{kj}</math> for all <math>j, k</math>) then this system has the Lyapunov function
<math>E = - \frac{1}{2}\Sigma_{jk} T_{jk}V_jV_k - \Sigma_j I_jV_j\ .</math>
This <math>E</math> is also a Lyapunov function if the neurons are repetitively updated one at a time in any order.
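
As a concrete illustration (not part of the original formulation), the following Python/NumPy sketch implements the asynchronous binary update and evaluates <math>E</math> before and after; the network size, weights, and inputs are arbitrary choices.

```python
import numpy as np

def energy(T, V, I):
    """Lyapunov function E = -1/2 sum_jk T_jk V_j V_k - sum_j I_j V_j."""
    return -0.5 * V @ T @ V - I @ V

def update_async(T, V, I, rng, steps=1000):
    """Asynchronous binary dynamics: pick a neuron at random and threshold its input."""
    V = V.copy()
    for _ in range(steps):
        p = rng.integers(len(V))         # choose a neuron p at random
        x_p = T[p] @ V + I[p]            # overall input to neuron p
        V[p] = 1.0 if x_p > 0 else 0.0   # threshold update
    return V

rng = np.random.default_rng(0)
n = 8
T = rng.normal(size=(n, n))
T = (T + T.T) / 2                        # symmetric connections, T_jk = T_kj
np.fill_diagonal(T, 0.0)                 # no self-connections, T_kk = 0
I = np.zeros(n)
V0 = rng.integers(0, 2, size=n).astype(float)

V1 = update_async(T, V0, I, rng)
print(energy(T, V0, I), "->", energy(T, V1, I))   # E never increases along the dynamics
```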

Positive values of <math>T</math> represent excitatory connections, and negative values inhibitory connections. Minor transformations of variables relate this system to a magnetic Ising system, with <math>T_{jk}</math> equivalent to the exchange <math>J_{jk}</math> between Ising spins <math>j</math> and <math>k\ .</math> A related set of automata networks, which use synchronous update of all neurons at once, has been extensively studied.

The dynamical update described above can be thought of as a Monte Carlo procedure or a Glauber dynamics for the change of state of a thermodynamic system at zero temperature, where the energy of the system is described by <math>E\ .</math> The Boltzmann Machine uses a Hopfield net with stochastic update to simulate a non-zero temperature.
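
For comparison, a single stochastic (finite-temperature) update of a 0/1 unit can be sketched as below; the sigmoidal acceptance probability reduces to the deterministic threshold rule as the temperature goes to zero. The function name and parameters are illustrative, not taken from the source.

```python
import numpy as np

def glauber_step(T, V, I, temperature, rng):
    """One stochastic update of a randomly chosen 0/1 unit at the given temperature."""
    p = rng.integers(len(V))
    x_p = T[p] @ V + I[p]
    p_on = 1.0 / (1.0 + np.exp(-x_p / temperature))   # sigmoid acceptance probability
    V[p] = 1.0 if rng.random() < p_on else 0.0        # temperature -> 0 recovers the threshold rule
    return V
```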

Graded neuron response and continuous variables

Let the stochastic binary neurons be replaced by neurons whose instantaneous activity increases with input, so that <math>V_j = g(x_j)</math> where <math>g</math> is any monotone increasing function, bounded below and bounded above. In this representation, details of action potential timing are suppressed, and action potentials are thought of as being generated in a stochastic fashion with an instantaneous rate <math>g(x)\ .</math> The customary bounds on <math>g(x)</math> are thus often taken as zero and the maximum firing rate. Let synaptic currents lag behind the firing rate with a simple exponential kernel <math>\exp( -t/\tau)\ ,</math> corresponding in real neurons to a synaptic current that rises rapidly then decays exponentially. Within this description, the evolution of the state of the network is given by the ordinary differential equations

<math>dx_j/dt = -x_j/\tau + \Sigma_k T_{jk}V_k + I_j\ .</math>
When <math>T</math> is symmetric, this dynamics [1984] has the Lyapunov function
<math> E = - \frac{1}{2} \Sigma_{jk} T_{jk}V_jV_k - \Sigma_j I_jV_j + \frac{1}{\tau} \Sigma_j \int^{V_j} g^{-1}(Z)dZ</math>
where <math>g^{-1}</math> is the inverse of the gain function <math>g\ .</math> There is a significant limiting case of this function when <math>T</math> has no diagonal elements and the input-output relation becomes a step, going from 0 to a maximum firing rate (for convenience, scaled to 1). The third term in this Lyapunov function is then either zero or infinite, so this <math>E</math> is identical to the function for the binary stochastic neurons except that the third term now restricts the domain to <math>0 \leq V_j \leq 1</math> (on which it is zero), instead of the earlier <math>V_j = 0</math> or <math>1\ .</math> With no diagonal elements in <math>T\ ,</math> the minima of <math>E</math> are all located at corners of the hypercube <math>0 \leq V_j \leq 1\ .</math> In this limit, the stable states of the continuous-variable system are exactly the same as the stable states of the binary system.
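
A minimal sketch of the continuous-time dynamics, assuming a logistic gain function <math>g</math> and simple Euler integration, is given below; the recorded values of <math>E</math> should be non-increasing (up to discretization error). The network and parameters are arbitrary illustrative choices.

```python
import numpy as np

def simulate_graded(T, I, x0, tau=1.0, dt=0.01, steps=5000):
    """Euler-integrate dx_j/dt = -x_j/tau + sum_k T_jk g(x_k) + I_j with a logistic gain g."""
    g = lambda x: 1.0 / (1.0 + np.exp(-x))        # monotone gain, bounded between 0 and 1
    x = x0.copy()
    E_trace = []
    for _ in range(steps):
        V = g(x)
        x = x + dt * (-x / tau + T @ V + I)
        # the integral of g^{-1}(z) = log(z/(1-z)) is, up to a constant, V log V + (1-V) log(1-V)
        integral = np.sum(V * np.log(V) + (1.0 - V) * np.log(1.0 - V))
        E_trace.append(-0.5 * V @ T @ V - I @ V + integral / tau)
    return x, E_trace

rng = np.random.default_rng(1)
n = 4
T = rng.normal(size=(n, n))
T = (T + T.T) / 2
np.fill_diagonal(T, 0.0)
x_final, E_trace = simulate_graded(T, np.zeros(n), rng.normal(size=n))
print(E_trace[0], "->", E_trace[-1])              # E decreases monotonically toward a minimum
```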

Illustration: 2 neuron flip-flop

A 2-neuron case with reciprocal inhibition illustrates the behavior. The input-output relation used for the neurons is shown in the first panel of <figref>Hopfield_Net_fig2.png</figref>. Excitation was provided by setting <math>I_1 = I_2 > 0\ .</math>

Phase portrait of 2-neuron Hopfield Network.

The second panel shows the trajectories of the system in the <math>(V_1, V_2)</math> phase plane from a variety of starting states. Each trajectory starts at the end of a black line, and the activity moves along that line to ultimately terminate in one of the two point attractors located at the two red symbols "*". The third panel presents contour lines and a color map of <math>E\ .</math> The low <math>E</math> contours are drawn. The color map shows the higher values of <math>E\ ,</math> progressively larger from black to blue to yellow to red. The red dots of the central panel are located at the bottoms of the two minima seen in the third panel.
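
The qualitative flip-flop behavior (though not the exact figure) can be reproduced by integrating a two-neuron system with reciprocal inhibition; the weight strength and input values below are illustrative choices, not the parameters used for the figure.

```python
import numpy as np

# Two graded neurons with reciprocal inhibition and equal excitation I_1 = I_2 > 0.
T = np.array([[0.0, -6.0],
              [-6.0, 0.0]])
I = np.array([2.0, 2.0])
g = lambda x: 1.0 / (1.0 + np.exp(-x))

def settle(x, tau=1.0, dt=0.01, steps=4000):
    for _ in range(steps):
        x = x + dt * (-x / tau + T @ g(x) + I)
    return g(x)

print(settle(np.array([0.5, -0.5])))   # ends with neuron 1 active, neuron 2 suppressed
print(settle(np.array([-0.5, 0.5])))   # ends in the other point attractor
```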

Associative memory

The phenomenon of associative memory matches the idea of dynamics controlled by a Lyapunov function. Consider a set of states <math>M^p</math> that are desired as memories. The location of each memory in state space describes the attributes of the memory. The idea of associative memory is that when a memory clue is presented, the actual memory that is most like the clue will be recapitulated. Suppose synaptic connections are constructed so that each memory vector <math>M^p</math> is a local minimum of <math>E\ .</math> Starting with partial information about some memory <math>s</math> means starting relatively nearer to <math>M^s</math> than to other memories. This starting state is then likely to be within the ‘valley’ of the terrain <math>E</math> that has <math>M^s</math> as its lowest point. If so, the dynamics must result in the final state <math>M^s\ ,</math> the correct memory reconstruction. Such a reconstruction is illustrated

Memory recall from a small clue.

for a memory having 50 properties (e.g. name) and within each property 20 values (e.g., John, Mary). This memory contains 250 ‘known friends’. A particular ‘known friend’ can be described by a set of 1’s locating the values for each property, and the information about one particular friend is shown in the grid at the left. Memory retrieval begins from a clue, an initial state describing partial information, as typified in the second grid where values are indicated for 7 categories. The next two panels show intermediate states during the retrieval process, and the final stable state is shown in the last panel. Careful inspection of the figure will verify that the final state is the memory from which the clue was taken, except that the clue itself was slightly erroneous (an incorrect value for property 2) and the memory dynamics properly corrects that error.

When the number of memories to be contained in <math>T</math> is not too large, and the memories not correlated, a new memory <math>M^{\rm new}</math> can be incorporated in the system through a synapse change rule

<math>T_{ij}</math> (including the new memory) <math> = T_{ij} </math> (without) <math> + M_i^{\rm new}M_j^{\rm new}</math>
This is a Hebbian rule in that the synapse change is proportional to the product of the activity of the pre- and post-synaptic neurons. This rule is appropriate to the case of neurons with binary activity represented as <math>V=\pm1\ ,</math> and memories chosen at random, and requires modifications for more general circumstances.
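
A minimal sketch of this storage and recall scheme, assuming random <math>\pm1</math> patterns and asynchronous threshold updates, is shown below; the pattern count and noise level are arbitrary and kept well below the storage capacity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_memories = 100, 10
memories = rng.choice([-1.0, 1.0], size=(n_memories, n))   # random, uncorrelated patterns

# Hebbian storage: T_jk = sum over memories of M_j M_k, with no self-connections.
T = memories.T @ memories
np.fill_diagonal(T, 0.0)

def recall(clue, sweeps=20):
    """Asynchronous +/-1 threshold updates, repeated until (effectively) settled."""
    V = clue.copy()
    for _ in range(sweeps):
        for p in rng.permutation(n):
            V[p] = 1.0 if T[p] @ V > 0 else -1.0
    return V

clue = memories[0].copy()
flip = rng.choice(n, size=n // 4, replace=False)            # corrupt 25% of the entries
clue[flip] *= -1.0

recalled = recall(clue)
print("overlap with stored memory:", recalled @ memories[0] / n)   # close to 1 on successful recall
```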

An excitatory-inhibitory generalized Hopfield network can implement an associative memory that is rather more biological in its details. It no longer requires memories to be uncorrelated; synaptic connections obey Dale’s Law; neuron activity is limited by inhibitory feedback rather than by the maximum firing rate of a neuron; and memories are learned through changes in excitatory synapses only. In the 250 memory example above, a learning rule of the Hebbian type yielded completely accurate recall of 232 of the memories starting from clues containing 20% of the information in a memory. The number of memories that can be stored without disaster scales with the number of neurons. Neuronal noise will decrease the accuracy of recall, but the effect of the noise decreases with the size of the system, since the height of the 'barriers' in <math>E</math> separating different memories scales with the size. For large systems, the operation thus becomes collective, and the overall behavior is fail-soft with respect to damage.

Generalized Hopfield networks: other networks equivalent to symmetric networks

Lyapunov functions can be constructed for a variety of other networks that are related to the above networks by mathematical transformation or simple extensions. For example, if <math>T_{jk}</math> is a symmetric matrix, and <math>\lambda_j</math> and <math>\mu_k</math> are vectors with all positive components, a network connected through a matrix <math>S_{jk} = \lambda_j T_{jk} \mu_k</math> also has a Lyapunov function.

A broader class of related networks can be generated by using additional ‘fast’ neurons whose inputs and outputs are related in a way that produces an equivalent direct pathway that is symmetric. For example, if an additional neuron receives input <math>\Sigma_k A_kV_k</math> and has an instantaneous output <math>f(\Sigma_k A_kV_k)\ ,</math> where <math>f</math> is the derivative of a monotone function <math>F\ ,</math> and this in turn has connections that result in a current in cell <math>j</math> that is <math>A_j f(\Sigma_k A_kV_k)\ ,</math> the Lyapunov function simply requires the additional term <math>F(\Sigma_k A_kV_k)\ .</math> Simulations show that it is not necessary that such additional neurons be infinitely fast, but only that they are fast enough that the overall system does not oscillate.

When this additional neural circuitry is inhibitory, it can implement constraints on the overall behavior of the system. <math>\Sigma_k A_kV_k = B</math> describes a plane in activity (<math>V</math>) space. When <math>f</math> is a function with a rapid onset, this inhibition can constrain the activity of the network to lie on one side of that plane.

Related mathematics has been applied to simple Hopfield networks with synchronous update, to special time-dependent problems, and to some models of synchronization of action potentials.

Computation through optimization

Optimizations are common computational problems, and come in many forms. Finding the best straight line through a set of (x, y) data points, linear programming, and finding the shortest Traveling Salesman route to visit a set of cities, are typical examples. Other problems that are not normally thought of as optimizations can often be restated in the language of optimization. For example, a Sudoku puzzle can be described in terms of ‘place the largest number of integers 1-9 into the blank spaces in the puzzle that you can, without violating any of the rules’. The correct solution is the unique pattern which has no blanks left at all, and thus has maximized the number of entries the puzzle-solver has put in.

Many optimization problems can be readily represented on Hopfield nets by transforming the problem into variables such that the desired optimization corresponds to the minimization of the respective Lyapunov function [1985]. In this representation, the dynamics of change in network state with time takes the system to a local energy minimum. If this local minimum is also the global minimum, the solution of the desired optimization task has been carried out by the convergence of the network state. Indeed, the energy function can be thought of as a programming language for transforming optimization problems into a solution method applying network dynamics. The resulting network could be either built in analog hardware or implemented in software on a digital machine.

Linear programming, the worker assignment problem, and decomposing signals into a basis set can all be solved exactly by Hopfield networks because the Lyapunov function for these problems can be constructed with a single (and thus global) minimum. When more computationally difficult problems are programmed using this approach, the Lyapunov function often has multiple local minima, and the dynamics of the network may converge to a local minimum rather than to the global minimum. Finding a good half-tone image from a gray-scale photograph and the n-queens chess problem can be programmed in this way. How effective such a network can be in finding a good solution is strongly dependent on the problem class.
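
As a toy illustration of ‘programming’ the Lyapunov function, the sketch below encodes the constraint that a <math>k \times k</math> binary matrix contain exactly one 1 in every row and column (the constraint part of problems such as task assignment or n-rooks) as a quadratic penalty, reads off the implied <math>T</math> and <math>I\ ,</math> and lets the binary dynamics settle. The penalty strength and problem size are arbitrary illustrative values.

```python
import numpy as np

# Constraint: a k x k binary matrix with exactly one 1 per row and per column,
# encoded as the penalty E = A/2 * sum_rows (row_sum - 1)^2 + A/2 * sum_cols (col_sum - 1)^2.
# For binary units this corresponds to T = -A between distinct cells sharing a
# row or a column, and bias I = A for every cell.
k, A = 5, 1.0
n = k * k                                    # one neuron per cell (i, j), index p = i*k + j

T = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if a != b and (a // k == b // k or a % k == b % k):
            T[a, b] = -A
I = np.full(n, A)

rng = np.random.default_rng(0)
V = rng.integers(0, 2, size=n).astype(float)
for _ in range(50):                          # asynchronous threshold updates
    for p in rng.permutation(n):
        V[p] = 1.0 if T[p] @ V + I[p] > 0 else 0.0

grid = V.reshape(k, k)
print(grid)
print("row sums:", grid.sum(axis=1))         # should all be 1 at a stable state
print("col sums:", grid.sum(axis=0))
```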

References

Goles, E. and Martínez, S. (1990) Neural and Automata Networks, Kluwer Academic Publishers, Boston MA.

Hertz, J., Krogh A., and Palmer, R.G. (1991) Introduction to the theory of neural computation, Addison Wesley, Redwood City CA.

Hopfield, J. J. (1982) Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. (USA) 79, 2554-2558.

Hopfield, J. J. (1984) Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Nat. Acad. Sci. (USA) 81, 3088-3092.

Hopfield, J. J. and Tank, D. W. (1985) “Neural” computation of decisions in optimization problems. Biological Cybernetics 52, 141-152.

Hopfield, J. J. (2006) Searching for memories, Sudoku, implicit check-bits, and the iterative use of not-always-correct rapid neural computation. http://arxiv.org/abs/q-bio.NC/0609006

Seung, H. S. (1998) Continuous attractors and oculomotor control, Neural Networks 11, 1253-1258.

Sima, J. and Orponen, P. (2003) Continuous-time symmetric Hopfield nets are computationally universal, Neural Computation 15, 693-733.

Internal references

  • John W. Milnor (2006) Attractor. Scholarpedia, 1(11):1815.
  • Chris Eliasmith (2007) Attractor network. Scholarpedia, 2(10):1380.
  • Geoffrey E. Hinton (2007) Boltzmann machine. Scholarpedia, 2(5):1668.
  • James Meiss (2007) Dynamical systems. Scholarpedia, 2(2):1629.
  • Mark Aronoff (2007) Language. Scholarpedia, 2(5):3175.
  • Peter Jonas and Gyorgy Buzsaki (2007) Neural inhibition. Scholarpedia, 2(9):3286.
  • Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
  • Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
  • Arkady Pikovsky and Michael Rosenblum (2007) Synchronization. Scholarpedia, 2(12):1459.

See Also

Associative Memory, Attractor Network, Boltzmann Machine, Brain-State-in-a-Box, Dynamical Systems, Oscillatory Associative Memory, Recurrent Neural Networks, Simulated Annealing

Categories: Computational Intelligence · Computational Neuroscience · Recurrent Neural Networks · Neural Networks · Eponymous
