This repository supplements the paper "Fast Single-Snapshot Harmonic Recovery with 2D Sparse Arrays using BCCB Matrices". It provides computationally efficient implementations of the ISTA, FISTA, and ADMM algorithms for harmonic recovery using 2D sparse arrays. The fast implementations rely on 2D FFTs to efficiently compute matrix-vector products involving matrices with a BCCB structure. The reported runtime of the fast implementations is up to 100x lower than that of their regular counterparts.
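To make the core idea concrete, here is a minimal sketch of how a BCCB matrix-vector product reduces to a 2D FFT, and how such products can drive an ISTA iteration once the Gram matrix of the dictionary is BCCB. This is our own illustration under assumed conventions (the names `bccb_matvec`, `fast_ista`, `gram_kernel`, and `dhy_2d` are hypothetical), not the repository's implementation.

```python
# Illustrative sketch only -- assumed conventions, not the repository's code.
import torch

def bccb_matvec(kernel_2d, x_2d):
    """y = C @ x for a BCCB matrix C whose first column, reshaped onto the
    (L1, L2) grid, is kernel_2d; equivalent to a 2D circular convolution."""
    return torch.fft.ifft2(torch.fft.fft2(kernel_2d) * torch.fft.fft2(x_2d))

def fast_ista(gram_kernel, dhy_2d, lam, niter):
    """ISTA for min 0.5*||y - Ds s||^2 + lam*||s||_1, assuming the Gram
    matrix Ds^H Ds is BCCB with 2D kernel gram_kernel, and dhy_2d holds the
    backprojection Ds^H y reshaped onto the (L1, L2) grid."""
    lip = torch.fft.fft2(gram_kernel).real.max()      # Lipschitz constant of the quadratic term
    s = torch.zeros_like(dhy_2d)
    for _ in range(niter):
        grad = bccb_matvec(gram_kernel, s) - dhy_2d   # gradient: Ds^H Ds s - Ds^H y
        z = s - grad / lip
        s = z * torch.clamp(1 - (lam / lip) / (z.abs() + 1e-12), min=0)  # soft-threshold
    return s
```

Each iteration then costs on the order of L1·L2·log(L1·L2) operations instead of the (L1·L2)² of an explicit Gram product, which is the source of the speedups described above.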
Below is a guided walkthrough to replicate the experimental results.
We first set up the repository and environment.
git clone https://github.com/youvalklioui/sparse-2d-array-bccb.git
cd sparse-2d-array-bccb
conda env create -f environment.yml
conda activate sparse-2d-array-bccb_env
Next, we generate a 2D sparse array by specifying the maximum apertures `x_max_aperture` and `y_max_aperture` (in integer units of λ/2) in the x and y directions, respectively, as well as the number of elements `num_elements` of the 2D sparse array.
python main.py create-2d-array \
--num_elements 20 \
--x_max_aperture 15 \
--y_max_aperture 30
The example above will randomly generate a 2D sparse array of 20 elements with an aperture of 15λ/2 in the x direction and 30λ/2 in the y direction. The array coordinates and geometrical characteristics will be saved in an `array_20elem_15dx_30dy.pt` file under `./dictionaries/array_20elem_15dx_30dy`.
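For intuition, a random 2D sparse array of this kind could be drawn as in the hypothetical sketch below, which simply picks `num_elements` distinct positions on the integer (λ/2-spaced) grid defined by the two apertures; the repository's generator may apply additional constraints.

```python
# Hypothetical illustration -- not necessarily how main.py builds the array.
import torch

def random_sparse_array(num_elements, x_max_aperture, y_max_aperture):
    # All candidate integer positions on the (x_max_aperture+1) x (y_max_aperture+1) grid
    grid = torch.cartesian_prod(
        torch.arange(x_max_aperture + 1), torch.arange(y_max_aperture + 1)
    )
    idx = torch.randperm(grid.shape[0])[:num_elements]   # sample without replacement
    return grid[idx]                                      # (num_elements, 2) coords in units of lambda/2

coords = random_sparse_array(20, 15, 30)
```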
With the array geometry set, we proceed to generate a set of dictionaries corresponding to sparse recovery problems of different sizes. We must specify two lists of subdictionary lengths, `subdictionary_lengths_f1` and `subdictionary_lengths_f2`, in addition to the path `array_path` of the array to be used in the dictionary generation. For each combination of values (L1, L2) from `subdictionary_lengths_f1` and `subdictionary_lengths_f2`, a dictionary Ds of shape M × (L1·L2) will be generated, where M equals `num_elements`.
python main.py create-dictionaries \
--array_path './dictionaries/array_20elem_15dx_30dy/array_20elem_15dx_30dy.pt' \
--subdictionary_lengths_f1 32 64 \
--subdictionary_lengths_f2 16 32
Using the array specified in `array_path`, the command above will generate four dictionaries, one for each combination from the two lists of lengths `subdictionary_lengths_f1` and `subdictionary_lengths_f2`. The four dictionaries are collected into a nested dictionary structure that is indexable with two keys and saved in a `dictionaries_32_to_64L1__16_to_32L2.pt` file under `dictionaries/array_20elem_15dx_30dy`.
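As a rough picture of what one of these dictionaries looks like, the sketch below builds an M × (L1·L2) matrix of 2D harmonics sampled at the array's element positions over uniform frequency grids. The exact grid and phase conventions used by the repository may differ, so treat this as an assumption-laden illustration (the helper `make_dictionary` is hypothetical).

```python
# Hypothetical illustration of a single dictionary Ds of shape (M, L1*L2).
import torch

def make_dictionary(coords, L1, L2):
    # coords: (M, 2) integer element positions in units of lambda/2
    x, y = coords[:, 0].float(), coords[:, 1].float()
    f1 = torch.arange(L1) / L1                            # normalized frequency grid, axis 1
    f2 = torch.arange(L2) / L2                            # normalized frequency grid, axis 2
    phase = (x[:, None, None] * f1[None, :, None]
             + y[:, None, None] * f2[None, None, :])      # (M, L1, L2)
    return torch.exp(2j * torch.pi * phase).reshape(len(x), L1 * L2)
```

With DFT-style grids like these and integer element positions, the Gram matrix Ds^H Ds is BCCB, which is the structure the fast solvers exploit.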
We can then benchmark the runtime of the fast implementations by specifying the path `dictionaries_path` to the previously generated dictionaries `dictionaries/array_20elem_15dx_30dy/dictionaries_32_to_64L1__16_to_32L2.pt`, and a list `niter` of the iteration counts at which the runtime is recorded for both the regular and fast implementations of ISTA, FISTA, and ADMM. We can additionally set the signal-to-noise ratio (SNR) level and the number of trials over which the average runtime is computed by specifying `snr` and `ntrials`, respectively.
python main.py run-experiment \
--dictionaries_path 'dictionaries/array_20elem_15dx_30dy/dictionaries_32_to_64L1__16_to_32L2.pt' \
--niter 50 100 \
--snr 15.0 \
--ntrials 10
The command above will, using each dictionary in `dictionaries_32_to_64L1__16_to_32L2.pt`, record the average over `ntrials` trials of the runtime needed to perform each iteration count listed in `niter` for both the regular and fast implementations of ISTA, FISTA, and ADMM, as well as the corresponding relative error. The results are saved in an `experiment_15.0dbsnr_10trials_50_100_iters.pt` file under `outputs/experiment_15.0dbsnr_10trials_50_100_iters`.
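The saved file can be loaded back with `torch.load` for inspection or plotting. The snippet below only prints the top-level keys, assuming the file holds a Python dictionary; the exact schema of the results is not spelled out here.

```python
import torch

results = torch.load(
    'outputs/experiment_15.0dbsnr_10trials_50_100_iters/'
    'experiment_15.0dbsnr_10trials_50_100_iters.pt'
)
print(results.keys())  # inspect the recorded runtimes and relative errors
```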