
Compressive Sensing via Locally-Competitive Neural Network


LCA-CS

Compressed Sensing reconstruction using locally-competitive neural networks.

Background

Modern cameras and devices are wasteful with data, which can be expensive to collect and transmit: they acquire far more samples than they need and then compress them. Compressive sensing (CS) instead combines sampling and compression in a single step.

An image x with n pixels can be reshaped into an n×1 column vector by stacking all of its pixels on top of one another.

Figure 1: The image can be transformed into a column vector.


To take compressed samples of x, multiply it by a random m × n matrix A, where m << n, to obtain the compressed measurement vector b.

Figure 2: To sample, multiply x by a random m x n Gaussian matrix.
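In code, the sampling step is a single matrix multiply. Here is a minimal NumPy sketch; the sizes and the synthetic sparse signal are illustrative, not taken from the repository:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1024  # pixels in the flattened image x
m = 256   # number of compressed measurements, m << n

# Synthetic sparse signal standing in for a vectorized image.
x = np.zeros(n)
x[rng.choice(n, size=20, replace=False)] = rng.standard_normal(20)

# Random Gaussian sampling matrix and compressed measurement vector.
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x  # b has only m entries instead of n
```

The column normalization by sqrt(m) keeps the measurement magnitudes comparable to the signal's; it does not change which signals can be recovered.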


To recover x from b, you solve an optimization problem: minimize the mean-squared error between Ax and b plus a penalty proportional to the l1 norm of x (the sum of the absolute values of its entries). The l1 penalty is included because the correct solution is sparse (i.e. has a lot of zeros). Current methods require hundreds of lines of code to solve this problem. If only there were a simpler, faster way ...
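The recovery objective fits in one line of code. This hedged sketch (the function name is illustrative) just evaluates it for a candidate x:

```python
import numpy as np

def cs_objective(A, x, b, lam):
    """Squared reconstruction error plus the l1 penalty that favors sparse x."""
    return np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
```

For example, with A the 2×2 identity, x = (1, −2), b = 0, and lam = 1, the squared error is 5 and the penalty is 3, so the objective is 8.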

Locally-Competitive Algorithms (LCAs) are a type of dynamical neural network that can recover compressed signals using lateral inhibition, much like the human visual system. Upon receiving input, each node charges up in proportion to how closely the input resembles its preferred stimulus. If it charges up enough, it will "fire" and produce an output, as well as inhibit nearby, similar nodes (red arrows in Fig. 3) in proportion to its activation.

Figure 3: The network's weights change over time to minimize the reconstruction error.


We give the network A and b and, over time, it settles on a sparse approximation of the original image x.

Algorithm

```
lambda = 4.0                      # firing threshold
h = 0.005                         # step size
u = zeros(n, 1)                   # membrane potentials
x = zeros(n, 1)                   # thresholded outputs
while MSE between A*x and b is above some tolerance:
    u = u + h * (A' * (b - A*x) - u + x)
    x = (u - sign(u) * lambda) .* (abs(u) > lambda)
```

The variable u holds the membrane potentials of the input layer, h is a step-size constant, and lambda is the firing threshold. The algorithm is extremely simple, and because it is fully vectorized it maps naturally onto GPUs, making it much faster than alternative methods. The first update drives each node with the residual A'(b − Ax), simultaneously minimizing the mean-squared error between Ax and b and inhibiting similar nodes. The second line is each node's activation function, a soft threshold that makes nodes with activations below lambda output nothing. In all, the network outputs a sparse solution that approximates, and sometimes exactly equals, the original image.
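As a concrete, unofficial NumPy translation of the loop above (the parameter values are illustrative, and the stopping rule checks the measurement MSE):

```python
import numpy as np

def lca_reconstruct(A, b, lam=0.1, h=0.01, max_iters=3000, tol=1e-6):
    """Recover a sparse x from b = A @ x via the LCA iteration above."""
    n = A.shape[1]
    u = np.zeros(n)  # membrane potentials (input layer)
    x = np.zeros(n)  # thresholded outputs
    for _ in range(max_iters):
        # Charge each node toward the residual-driven input; the +x term
        # cancels each node's inhibition of itself.
        u = u + h * (A.T @ (b - A @ x) - u + x)
        # Soft-threshold activation: below-threshold nodes output nothing.
        x = (u - np.sign(u) * lam) * (np.abs(u) > lam)
        if np.mean((A @ x - b) ** 2) < tol:
            break
    return x
```

Because lam biases every surviving coefficient toward zero, a smaller threshold gives a more faithful but less sparse reconstruction.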

Code

  • Sparse Vector - TensorFlow code that uses the LCA-CS method to reconstruct a compressively sensed sparse vector.

  • Natural Image - This code shows how to compressively sense and recover a natural image that is not canonically sparse. The sampling process is the same as for inherently sparse signals, but the recovery process includes a dictionary for the basis in which the signal is sparse; here, a DCT dictionary is used.
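One way such a DCT dictionary can be built is sketched below using the orthonormal DCT-II basis in plain NumPy; the exact dictionary construction in the repository's code may differ:

```python
import numpy as np

n = 256
i = np.arange(n)

# Columns of D are 1-D DCT-II basis vectors, so a signal that is sparse in
# the DCT basis can be written x = D @ s with s mostly zero.
D = np.cos(np.pi * (i[:, None] + 0.5) * i[None, :] / n)
D[:, 0] *= np.sqrt(1.0 / n)   # orthonormal scaling for the DC column
D[:, 1:] *= np.sqrt(2.0 / n)  # orthonormal scaling for the rest

# Sampling is unchanged (b = A @ x); recovery runs the same LCA iteration on
# the effective matrix A @ D to find sparse coefficients s, then x = D @ s.
```

Because D is orthogonal, switching bases costs only a matrix multiply on either side of the solver.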

Results

Reconstructions of an image with different sampling rates.



Reconstruction of a natural image.



The locally-competitive algorithm reconstructs an approximation of the signal in about one-ninth the time required by the current state-of-the-art method.

Further Reading
