# Inverse Problems Project: Deconvolution of a Sparse Signal

See `report.pdf` for a more detailed description of the project.

In this project, our main goal is to solve an inverse problem of the form y = Hx + n, where x is a sparse signal, n is Gaussian noise, and H is a convolution matrix. To do so, we make use of sampling from probability distributions and of descent algorithms for convex optimization, which are introduced in Parts I and II before being used to solve the inverse problem in Part III.
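As a concrete illustration of the forward model y = Hx + n, the sketch below builds a sparse signal, a convolution matrix, and a noisy observation. The signal length, blur kernel, and noise level are illustrative assumptions, not the values used in the report.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Sparse signal x: a few random spikes, zeros elsewhere.
x = np.zeros(n)
x[rng.choice(n, size=5, replace=False)] = rng.normal(0.0, 2.0, size=5)

# Convolution matrix H: column i is the (assumed) blur kernel centred at index i.
kernel = np.array([0.25, 0.5, 1.0, 0.5, 0.25])
H = np.array([np.convolve(np.eye(n)[i], kernel, mode="same") for i in range(n)]).T

# Observation y = Hx + n with additive Gaussian noise.
y = H @ x + rng.normal(0.0, 0.1, size=n)
```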

*Figure: the x and y signals.*

## Part I

Simulation of a Gaussian law truncated to positive values, using the inverse cumulative distribution function (inverse-CDF) method and accept-reject methods.
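Both sampling schemes can be sketched as follows, using only the standard library. The function names and parameters are illustrative, not the ones from the report.

```python
import random
from statistics import NormalDist

def truncated_gauss_inv_cdf(mu, sigma, size, rng=random):
    """Inverse-CDF sampling of N(mu, sigma^2) truncated to [0, +inf)."""
    d = NormalDist(mu, sigma)
    p0 = d.cdf(0.0)  # probability mass below the truncation point
    # Map uniform draws into [p0, 1) and invert the CDF.
    return [d.inv_cdf(p0 + (1.0 - p0) * rng.random()) for _ in range(size)]

def truncated_gauss_accept_reject(mu, sigma, size, rng=random):
    """Accept-reject sampling: draw from N(mu, sigma^2), keep only positive draws."""
    out = []
    while len(out) < size:
        z = rng.gauss(mu, sigma)
        if z >= 0.0:
            out.append(z)
    return out
```

The inverse-CDF method uses exactly one uniform draw per sample, while accept-reject discards draws that fall below the truncation point, which becomes wasteful when the mass on the positive axis is small.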

## Part II

Descent methods: gradient descent and Newton's method.
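A minimal sketch of the two descent methods in one dimension, applied to a toy quadratic. The step size and iteration counts are illustrative assumptions.

```python
def gradient_descent(grad, x0, step=0.1, n_iter=200):
    """Fixed-step gradient descent on a differentiable convex function."""
    x = x0
    for _ in range(n_iter):
        x = x - step * grad(x)
    return x

def newton_1d(grad, hess, x0, n_iter=10):
    """Newton's method in one dimension: step by grad(x) / hess(x)."""
    x = x0
    for _ in range(n_iter):
        x = x - grad(x) / hess(x)
    return x

# Example: minimise f(x) = (x - 3)^2, whose minimiser is x = 3.
x_gd = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
x_nt = newton_1d(lambda x: 2.0 * (x - 3.0), lambda x: 2.0, x0=0.0)
```

On this quadratic, Newton's method lands on the minimiser in a single step, while gradient descent contracts the error geometrically at a rate set by the step size.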

## Part III

Deconvolution of the sparse signal through convex optimization and Gibbs sampling (Bayesian setting).
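The report's exact formulation is not reproduced here; as one common convex approach to sparse deconvolution, the sketch below solves the l1-penalised least-squares problem min 0.5·||Hx − y||² + λ·||x||₁ by proximal gradient descent (ISTA). The penalty weight and iteration count are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(H, y, lam, n_iter=500):
    """Minimise 0.5*||Hx - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The soft-thresholding step is what drives small components of x exactly to zero, producing the sparse estimate.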

*Figure: the deconvolved signal.*

## Concluding remarks

In this project, we solved the inverse problem y = Hx + n with sparse x. To do so, we used slightly modified versions of the descent algorithms from Part II, and we developed a Gibbs sampler that draws the components of x using the sampling algorithms from Part I. The descent and sampling algorithms presented here could be applied to a variety of other inverse problems. The results were consistent with the theoretical calculations, and we discussed how to choose the method best suited to a given problem.