TYashwanthReddy edited this page Nov 24, 2017 · 3 revisions

A self-organizing map (SOM) is an unsupervised learning algorithm used for reducing the dimensionality of data and for clustering it. It maintains a topological relationship among the clusters (i.e. similar regions of the input space are mapped close to each other).

Architecture of an SOM

Neurons (or nodes) are organized in an array (or lattice), which is typically 1-D or 2-D. All the neurons are fully connected to the input vector, and there are no lateral connections among the neurons. Each node has a unique topological position and a weight vector of the same dimension as the input vector.

In the context of image compression using vector quantization, n is the dimension of each flattened image block, and the number of neurons in the lattice equals the number of vectors in the codebook.
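As an illustrative sketch of this setup (the image size and 4×4 block size are assumptions, not values from the original), an image can be split into blocks and each block flattened into an n-dimensional vector:

```python
import numpy as np

# Hypothetical 16x16 grayscale image split into 4x4 blocks,
# so each block flattens to a vector of dimension n = 16.
image = np.arange(256, dtype=float).reshape(16, 16)
block = 4
blocks = (image
          .reshape(16 // block, block, 16 // block, block)
          .swapaxes(1, 2)              # group rows/cols of each block together
          .reshape(-1, block * block)) # one flattened vector per block
print(blocks.shape)  # (16, 16): 16 blocks, each of dimension n = 16
```

Each row of `blocks` is one training vector for the SOM; the trained weight vectors become the codebook.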

Algorithm

  1. Weight initialization
Each node’s weight vector is initialized based on the type of data we use, typically with small random values or with samples drawn from the input data.

  2. Calculate the Best Matching Unit (BMU)
    Find the weight vector that is closest to the input pattern x; the corresponding neuron is called the winner neuron. The Euclidean distance is commonly used to find the closest vector:
    D(j) = ‖x − w_j‖,   BMU = argmin_j D(j)
    where D(j) is the Euclidean distance between x and the weight vector w_j of neuron j.
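The BMU search in step 2 can be sketched with NumPy as follows (the array shapes and values are illustrative assumptions):

```python
import numpy as np

# Minimal BMU search: `weights` is an (m, n) array holding one
# n-dimensional weight vector per neuron, x is the input vector.
def best_matching_unit(weights, x):
    distances = np.linalg.norm(weights - x, axis=1)  # Euclidean distance D(j)
    return int(np.argmin(distances))                 # index of winner neuron

weights = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
x = np.array([0.9, 1.1])
print(best_matching_unit(weights, x))  # → 1 (closest weight vector)
```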

  3. Determine the neighborhood of the winning neuron
    A feature of this algorithm is that the neighborhood area shrinks with time (or iterations); the decay is exponential:
    σ(t) = σ₀ exp(−t / λ)
    where t = 1, 2, 3, ..., σ₀ is the initial width of the lattice, and λ is the time constant.
    Initially, the neighborhood contains most of the nodes of the lattice. Eventually, the neighborhood shrinks to the BMU itself. The amount of influence a node's distance from the BMU has on its learning is given by
    θ(t) = exp(−dist² / (2σ(t)²))
    where dist is the Euclidean distance between a node and the BMU in the lattice.
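A minimal sketch of the two decay formulas above; the initial width σ₀ and time constant λ are illustrative assumptions:

```python
import math

# Exponentially shrinking neighborhood radius: sigma(t) = sigma0 * exp(-t/lam).
def sigma(t, sigma0=5.0, lam=100.0):
    return sigma0 * math.exp(-t / lam)

# Gaussian influence of a node at lattice distance `dist` from the BMU.
def influence(dist, t, sigma0=5.0, lam=100.0):
    s = sigma(t, sigma0, lam)
    return math.exp(-dist**2 / (2 * s**2))

print(sigma(0), influence(0.0, 0))  # radius starts at sigma0; the BMU's own influence is 1
```

The BMU (dist = 0) always has influence 1, and as σ(t) shrinks, fewer and fewer neighbors receive a meaningful update.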

  4. Weight adaptation
    The weights of nodes in the neighborhood region of the BMU are updated as follows:
    w(t+1) = w(t) + L(t) θ(t) (x − w(t))
    where t is the iteration number and L(t) is the learning rate.
    The learning rate decreases with time as follows:
    L(t) = L₀ exp(−t / λ)
    where L₀ is the initial learning rate.
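Putting steps 1–4 together, a training loop on a 1-D lattice might look like the following sketch; all constants (lattice size, σ₀, L₀, λ, number of epochs) and the random data are illustrative assumptions, not values from the original:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 10, 3                      # 10 neurons on a 1-D lattice, 3-D inputs
weights = rng.random((m, n))      # step 1: random weight initialization
data = rng.random((50, n))        # toy training vectors in [0, 1)
sigma0, L0, lam, epochs = 3.0, 0.5, 50.0, 100

for t in range(epochs):
    sigma_t = sigma0 * np.exp(-t / lam)   # shrinking neighborhood radius
    lr_t = L0 * np.exp(-t / lam)          # decaying learning rate L(t)
    for x in data:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # step 2: BMU
        lattice_dist = np.abs(np.arange(m) - bmu)             # step 3: distance on lattice
        theta = np.exp(-lattice_dist**2 / (2 * sigma_t**2))   # influence
        weights += lr_t * theta[:, None] * (x - weights)      # step 4: update

# Average quantization error: mean distance from each input to its BMU.
err = np.mean([np.min(np.linalg.norm(weights - x, axis=1)) for x in data])
print(round(err, 3))
```

After training, neighboring neurons hold similar weight vectors, which is the topology preservation described above.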

This algorithm can be expressed in terms of the encoder-decoder model used in vector quantization. At encoding, we find the Best Matching Unit (BMU), i.e. the winner neuron. Each neuron in the lattice is assigned a codeword, and the codeword of the winning neuron is transmitted to the decoder. At decoding, we find the reconstruction vector for the received codeword: it is simply the weight vector of the winning neuron.
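This encoder-decoder view can be sketched as follows, assuming a toy trained codebook (the values are illustrative); only the codeword, an integer index, would be transmitted:

```python
import numpy as np

# Toy trained codebook: one weight vector per neuron (assumed values).
weights = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])

def encode(x):
    # Codeword = index of the winner neuron (the BMU).
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def decode(codeword):
    # Reconstruction vector = weight vector of the winning neuron.
    return weights[codeword]

x = np.array([0.9, 1.1])
cw = encode(x)
print(cw, decode(cw))  # transmit only cw; the decoder looks up the weight vector
```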
