# Compressed Sensing with Deep Image Prior
This repository provides code to reproduce results from the paper *Compressed Sensing with Deep Image Prior and Learned Regularization*.
Here are a few example results:
*(Figure: example reconstructions of MNIST at 75 measurements and of an x-ray at 2000 measurements.)*
## Setup

1. Clone the repository:

    ```
    $ git clone https://github.com/davevanveen/compsensing_dip.git
    $ cd compsensing_dip
    ```

    Please run all commands from the root directory of the repository, i.e. from `compsensing_dip/`.

2. Install the requirements:

    ```
    $ pip install -r requirements.txt
    ```
## Plotting reconstructions with existing data

1. Open the jupyter notebook of plots:

    ```
    $ jupyter notebook plot.ipynb
    ```

2. Set variables in the second cell according to interest, e.g. `ALG_LIST`; a hypothetical example is sketched after this list. Existing supported data is described in the comments.

3. Execute the cells to view the output.
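For orientation, the second cell might be configured roughly as follows. This is a hypothetical sketch: `ALG_LIST` is named above, but the other variable names are assumptions, and the values are simply taken from the example command later in this README.

```python
# Hypothetical settings for the second cell of plot.ipynb.
# Only ALG_LIST is named in this README; the other variable names
# are assumptions for illustration.
DATASET = "xray"                            # dataset whose reconstructions to plot
NUM_MEASUREMENTS_LIST = [2000, 4000, 8000]  # numbers of measurements to compare
ALG_LIST = ["csdip", "dct"]                 # reconstruction algorithms to plot
```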
## Generating new reconstructions on the MNIST, x-ray, or retinopathy datasets

1. Execute the baseline command

    ```
    $ python comp_sensing.py
    ```

    which will run experiments with the default parameters specified in `configs.json`.

2. To generate reconstruction data according to user-specified parameters, add command line arguments according to those available in `parser.py`, e.g.

    ```
    $ python comp_sensing.py --DATASET xray --NUM_MEASUREMENTS 2000 4000 8000 --ALG csdip dct
    ```
## Running CS-DIP on a new dataset

1. Create a new directory `/data/dataset_name/sub/` which contains your images.
2. In `utils.py`, create a new DCGAN architecture. This will be similar to the pre-defined architectures, e.g. `DCGAN_XRAY`, but must have an output dimension equal to the size of your new images. The output dimension can be changed by adjusting `kernel_size`, `stride`, and `padding`, as discussed in the torch.nn documentation; see the sketch after this list.
3. Update `configs.json` to set parameters for your dataset.
4. Update `utils.init_dcgan` to import/initiate the corresponding DCGAN.
5. Generate and plot reconstructions according to the instructions above.
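As a reference point, below is a minimal sketch of the kind of DCGAN generator step 2 describes. It is an illustrative assumption modeled on standard DCGAN generators, not a copy of the repository's `DCGAN_XRAY`; the class name, channel widths, and layer count are placeholders to adapt. For `nn.ConvTranspose2d`, the output size follows `out = (in - 1) * stride - 2 * padding + kernel_size`, which is how the spatial dimensions below are controlled.

```python
import torch.nn as nn

class DCGAN_Custom(nn.Module):
    """Illustrative DCGAN generator, NOT the repository's DCGAN_XRAY.

    Each ConvTranspose2d with kernel_size=4, stride=2, padding=1 doubles
    the spatial dimension, so four such layers map a 4x4 seed to 64x64.
    Adjust kernel_size/stride/padding until the output matches your images.
    """

    def __init__(self, nz=128, ngf=64, nc=1):
        super().__init__()
        self.net = nn.Sequential(
            # latent z: (nz x 1 x 1) -> (ngf*8) x 4 x 4
            nn.ConvTranspose2d(nz, ngf * 8, kernel_size=4, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # 32x32 -> 64x64, with nc output channels (1 for grayscale)
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)
```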
Note: We recommend experimenting with the DCGAN architecture and dataset parameters to obtain the best possible reconstructions.
## Generating learned regularization parameters for a new dataset
The purpose of this section is to generate a new (\mu, \Sigma) based on layer-wise weights of the DCGAN. This functionality will be added soon.
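In the meantime, the sketch below illustrates one way such statistics could be computed, under the assumption that (\mu, \Sigma) are the sample mean and covariance of a layer's flattened weights collected from multiple trained DCGANs. The function name and data layout are hypothetical; this is a rough illustration of the idea, not the forthcoming implementation.

```python
import numpy as np

def learned_reg_params(weight_samples):
    """Estimate (mu, Sigma) for one DCGAN layer (illustrative only).

    weight_samples: list of 1-D numpy arrays, each the flattened weights
    of the same layer taken from a different trained network.
    Returns the sample mean vector and sample covariance matrix.
    """
    W = np.stack(weight_samples)      # shape: (num_networks, num_weights)
    mu = W.mean(axis=0)               # layer-wise mean vector
    Sigma = np.cov(W, rowvar=False)   # layer-wise covariance matrix
    return mu, Sigma
```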