Replication of a series of networks that train and run inference entirely in the spectral/frequency domain. Implemented in PyTorch.
In this part, I mainly refer to the following works:
Bochen Guan, Jinnian Zhang, William A. Sethares, Richard Kijowski, Fang Liu.
Spectral Domain Convolutional Neural Network.
ICASSP 2021.
Oren Rippel, Jasper Snoek, Ryan P. Adams.
Spectral Representations for Convolutional Neural Networks.
NIPS 2015.
The current implementation supports the following operations in the frequency domain:
- FFT layer & IFFT layer
- Spectral Convolution (element-wise product)
- Spectral Pooling
- Spectral Activation (Tanh)
- Spectral Normalization
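To illustrate the core of these operations, here is a minimal, self-contained sketch (function names are hypothetical, not the repo's actual API) of spectral convolution via the convolution theorem and of spectral pooling by low-frequency cropping, as in Rippel et al.:

```python
import torch

def spectral_conv2d(x, w):
    """Circular 2-D convolution via the convolution theorem:
    an element-wise product in the frequency domain.
    x: (H, W) input; w: (H, W) kernel, zero-padded to the input size."""
    X = torch.fft.fft2(x)
    W = torch.fft.fft2(w)
    return torch.fft.ifft2(X * W).real

def spectral_pool2d(X, out_h, out_w):
    """Spectral pooling: center the spectrum, crop it to (out_h, out_w)
    so only the lowest frequencies are kept, then undo the shift."""
    Xs = torch.fft.fftshift(X, dim=(-2, -1))
    h, w = Xs.shape[-2:]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    cropped = Xs[..., top:top + out_h, left:left + out_w]
    return torch.fft.ifftshift(cropped, dim=(-2, -1))
```

Note that the frequency-domain product corresponds to *circular* (not zero-padded) convolution, which is why the kernel must be padded to the input size before the FFT.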
The current implementation performs only one FFT and one IFFT in the whole network. However, it is still neither compute- nor memory-efficient.
First, the spectral kernel is stored in the spatial domain and transformed into the frequency domain by an FFT on every forward pass. This adds the overhead of an FFT over each conv kernel.
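A minimal sketch of this scheme (the module and parameter names are hypothetical, not the repo's actual classes) makes the per-forward overhead visible:

```python
import torch
import torch.nn as nn

class SpatialStoredSpectralConv(nn.Module):
    """Sketch: the learnable kernel lives in the spatial domain and is
    FFT'd on every forward pass, so each forward pays the cost of an
    FFT over the kernel in addition to the spectral product itself."""
    def __init__(self, channels, size):
        super().__init__()
        # spatial-domain kernel, already sized (zero-padded) to the input
        self.kernel = nn.Parameter(torch.randn(channels, size, size) * 0.01)

    def forward(self, x_freq):
        # x_freq: (channels, H, W) complex spectrum of the input
        k_freq = torch.fft.fft2(self.kernel)  # repeated per-forward FFT
        return x_freq * k_freq                # element-wise spectral product
```

Storing the kernel directly as a (complex) frequency-domain parameter would remove the repeated FFT, at the price of constraining or re-deriving its spatial support.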
Second, the parameters are not memory-efficient. Because the spectrum of a real-valued signal is conjugate (Hermitian) symmetric, we do not need to save the full spectrum; roughly half of the coefficients suffice.
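The redundancy is easy to demonstrate with PyTorch's real-input FFT: for a real H×W signal, `torch.fft.rfft2` stores only `W//2 + 1` of the `W` frequency columns, and the full signal is still exactly recoverable:

```python
import torch

x = torch.randn(8, 8)                       # real-valued input
full = torch.fft.fft2(x)                    # 8 x 8 complex coefficients
half = torch.fft.rfft2(x)                   # 8 x 5: only W//2 + 1 columns kept
# the discarded columns are conjugate-symmetric copies,
# so the input reconstructs exactly from the half-spectrum
x_back = torch.fft.irfft2(half, s=x.shape)
```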
To address these issues, I mainly refer to:
Jong Hwan Ko, Burhan Mudassar, Taesik Na, and Saibal Mukhopadhyay.
Design of an Energy-Efficient Accelerator for Training of Convolutional Neural Networks using Frequency-Domain Computation.
DAC 2017.
Todo
- Sinc Interpolation
- Hermitian Symmetry
Currently, on MNIST the spectral network converges to accuracy close to its spatial counterpart, but at a much higher memory and latency cost.
Hyperparameters: Adam optimizer for both networks, LR = 0.001, batch size 512.