In our day-to-day lives, we take a lot of pictures on our phones. But so often the quality falls short: one shaky hand and the image blurs as if it were taken on a 2-megapixel camera. Blurry images are very common, and we don't really have an effective way of de-blurring them.
So, [Vidhu Joshi][2] and I experimented for weeks to build a neural network that could even remotely address this issue. We used the famous [UNet][1] architecture (a brilliant piece of work, by the way) as our base network for de-blurring images. The network extracts the important features of the image while reducing its spatial dimensions, then up-samples those compact features, i.e. recreates the image at its original size, but de-blurred.
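To make the shape flow concrete, here is a minimal NumPy sketch of that downsample-then-upsample idea. This is purely illustrative, not the actual network: it mimics the spatial reduction and up-sampling with 2x2 average pooling and nearest-neighbour repetition, with no learned features.

```python
import numpy as np

def downsample(x):
    """Halve height and width with 2x2 average pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Double height and width by repeating pixels (nearest neighbour)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
compact = downsample(img)             # 2x2 compact representation
restored = upsample(compact)          # back to the original 4x4 size

print(compact.shape, restored.shape)  # (2, 2) (4, 4)
```

In the real UNet these steps are learned convolution and up-convolution layers, with skip connections carrying detail from the encoder to the decoder.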
If anyone finds this of interest for further research, please reach out to us anytime.
As we go along trying new things in new Python notebooks, we keep adding them to the `model_exps` directory and name them `try1`, `try2`, and so on.
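As a purely illustrative aside (this helper is not part of the repo), the `tryN` naming scheme can be automated with a few lines of Python:

```python
import re

def next_experiment_name(existing):
    """Return the next tryN name given existing experiment names."""
    nums = [int(m.group(1)) for n in existing
            if (m := re.fullmatch(r"try(\d+)", n))]
    return f"try{max(nums, default=0) + 1}"

print(next_experiment_name(["try1", "try2"]))  # try3
```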
At the top of each notebook, the specific parameters used in that particular experiment (e.g. epochs, depth of the network, loss functions, etc.) are noted in markdown, along with the results obtained (e.g. loss value). The `data` directory has not been uploaded for personal reasons, but its structure is as follows:
The `best_models` directory contains all the saved models (in the Keras SavedModel format) that produced results we found interesting in any way.