Classification of simulated radio signals using Wide Residual Networks for use in the search for extra-terrestrial intelligence (SETI)
This is the repository for Classification of simulated radio signals using Wide Residual Networks for use in the search for extra-terrestrial intelligence.
The models were originally the winning entry in the ML4SETI Code Challenge competition, organized by the Search for ExtraTerrestrial Intelligence (SETI) Institute.
The objective of the ml4seti challenge was to train a classifier to differentiate between the following signal types:
- brightpixel
- narrowband
- narrowbanddrd
- noise
- squarepulsednarrowband
- squiggle
- squigglesquarepulsednarrowband
Check out the ml4seti GitHub repository for more details.
The criterion for selecting the winning entry was the (multinomial) log loss of the model's predictions.
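Multinomial log loss is the mean negative log-probability a model assigns to the true class, so confident wrong predictions are penalized heavily. A minimal NumPy sketch (the function name here is ours, not from the repo):

```python
import numpy as np

def multinomial_log_loss(y_true, probs, eps=1e-15):
    """Mean negative log-probability assigned to the true class.

    y_true: integer class labels, shape (N,)
    probs:  predicted class probabilities, shape (N, C)
    """
    probs = np.clip(probs, eps, 1 - eps)  # avoid log(0)
    return -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))

# Uniform guessing over the 7 classes scores log(7) ~= 1.9459;
# a perfect classifier scores ~0.
uniform = np.full((4, 7), 1 / 7)
labels = np.array([0, 1, 2, 3])
print(multinomial_log_loss(labels, uniform))  # ~1.9459
```

This is why the schedule below optimizes validation loss rather than accuracy alone: log loss rewards well-calibrated probabilities, not just correct argmax predictions.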
The training data contained 140,000 signals across the 7 classes. Our data preparation pipeline involved:
- Creating 5 stratified (equal class distribution) folds from the training data.
- Dividing the complex-valued time series of each signal into 384 chunks of 512 timesteps each.
- Applying a Hann window to each chunk.
- Performing a Fast Fourier Transform (FFT) on each chunk.
- Generating two features from the complex amplitudes:
  - Log of the squared magnitude of the amplitude
  - Phase
- Normalizing each resulting [384, 512, 2] tensor by the frequency-bin-wise mean and standard deviation computed across the entire training dataset.
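The chunking, windowing, and FFT steps above can be sketched as follows. This is an illustrative NumPy version, not the repo's actual code; in particular, the `fftshift` call and the small epsilon inside the log are our assumptions:

```python
import numpy as np

def signal_to_tensor(complex_series, n_chunks=384, chunk_len=512):
    """Turn one complex time series into a [384, 512, 2] feature tensor."""
    chunks = complex_series[: n_chunks * chunk_len].reshape(n_chunks, chunk_len)
    windowed = chunks * np.hanning(chunk_len)            # Hann window per chunk
    spectrum = np.fft.fftshift(np.fft.fft(windowed, axis=1), axes=1)
    log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)    # log of squared magnitude
    phase = np.angle(spectrum)
    return np.stack([log_power, phase], axis=-1)         # shape (384, 512, 2)

raw = np.random.randn(384 * 512) + 1j * np.random.randn(384 * 512)
print(signal_to_tensor(raw).shape)  # (384, 512, 2)
```

The per-frequency-bin normalization statistics would then be computed over all such tensors in the training set and applied to each tensor before training.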
All tensors were stored in HDF5 files, and batches were read directly from disk during model training.
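Reading batches directly from disk keeps memory usage flat even when the full fold does not fit in RAM. A rough `h5py` sketch of the idea; the dataset names and file layout here are illustrative, not the repo's actual keys:

```python
import h5py
import numpy as np

# Write a small illustrative file (the real files hold entire folds).
with h5py.File('example_fold.h5', 'w') as f:
    f.create_dataset('tensors', data=np.zeros((10, 384, 512, 2), dtype=np.float32))
    f.create_dataset('labels', data=np.arange(10) % 7)

def iter_batches(path, batch_size):
    """Yield (batch, labels) slices without loading the full dataset into RAM."""
    with h5py.File(path, 'r') as f:
        n = f['tensors'].shape[0]
        for start in range(0, n, batch_size):
            stop = start + batch_size
            yield f['tensors'][start:stop], f['labels'][start:stop]

for x, y in iter_batches('example_fold.h5', 4):
    print(x.shape, y.shape)
```

Slicing an `h5py` dataset reads only the requested rows from disk, which is what makes this pattern work for large training sets.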
For the winning entry, we used an averaged ensemble of 5 Wide Residual Networks, each trained on a different set of 4 of the 5 folds, with a depth of 34 (convolutional layers) and a widening factor of 2.
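In the standard Wide ResNet layout built from BasicBlocks, the depth satisfies depth = 6n + 4, where n is the number of blocks per group, and the three groups' channel widths are the base ResNet widths scaled by the widening factor k. A quick sanity check for the depth-34, k=2 configuration (assuming the standard 3-group layout; the repo's exact channel counts may differ):

```python
def wrn_config(depth, k):
    """Blocks per group and channel widths for a standard 3-group Wide ResNet."""
    assert (depth - 4) % 6 == 0, "depth must be 6n + 4 for BasicBlocks"
    n = (depth - 4) // 6
    widths = [16, 16 * k, 32 * k, 64 * k]  # stem + three groups
    return n, widths

print(wrn_config(34, 2))  # (5, [16, 32, 64, 128])
```

So a wresnet34x2 has 5 BasicBlocks (10 convolutions) per group, plus the stem and shortcut convolutions making up the rest of the depth.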
The architecture of each BasicBlock is shown in the figure above.
In the interest of time, we used a batch size of 96 and an aggressive learning rate schedule. Starting at 0.1, we halved the learning rate when the model did not improve for 3 consecutive epochs, and terminated training when it failed to improve for 8 consecutive epochs.
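The plateau-based schedule described above can be sketched as a small state machine. This is a simplification of what the training loop does, in the spirit of a ReduceLROnPlateau pattern, not the repo's actual code:

```python
class PlateauSchedule:
    """Halve the LR after 3 epochs without improvement; stop after 8."""

    def __init__(self, lr=0.1, patience=3, stop_after=8):
        self.lr, self.patience, self.stop_after = lr, patience, stop_after
        self.best = float('inf')
        self.bad_epochs = 0

    def step(self, val_loss):
        """Update after an epoch; return False when training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs % self.patience == 0:
                self.lr /= 2
        return self.bad_epochs < self.stop_after

sched = PlateauSchedule()
for loss in [1.0, 0.8, 0.9, 0.9, 0.9]:  # validation loss stalls after epoch 2
    sched.step(loss)
print(sched.lr)  # 0.05 after 3 epochs without improvement
```

With `patience=3` and `stop_after=8`, the learning rate can be halved at most twice on a single plateau before training terminates.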
The validation accuracies on each of the folds (having trained on the other 4 folds) are as follows:
| Validation Fold | Validation Accuracy |
|---|---|
| Fold 1 | 95.88% |
| Fold 2 | 95.74% |
| Fold 3 | 95.79% |
| Fold 4 | 95.65% |
| Fold 5 | 95.77% |
For the final submission, each of these 5 fold-models was run on the test dataset, and their scores were averaged.
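The averaging is a plain arithmetic mean of the five fold-models' class probabilities per test sample. The repo's `average_scores.py` operates on the score CSVs; the core operation looks like this in NumPy (the scores below are random placeholders, not real model outputs):

```python
import numpy as np

# Hypothetical scores: 5 fold-models x 3 test samples x 7 classes.
fold_scores = np.random.dirichlet(np.ones(7), size=(5, 3))

ensemble = fold_scores.mean(axis=0)          # average over the 5 models
assert np.allclose(ensemble.sum(axis=1), 1)  # averaged probabilities still sum to 1
print(ensemble.shape)  # (3, 7)
```

Averaging probabilities directly (rather than, say, majority voting) is what makes the ensemble well-suited to a log-loss criterion, since the averaged outputs remain valid probability distributions.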
Dependencies:
- pytorch 0.1.12
- torchvision 0.1.8
- pandas
- h5py
- scikit-learn
To evaluate the model(s) and/or to reproduce our results on the Final Test Set:

1. Download the signal files and the corresponding CSV with the `UUID` column into a single folder.

2. In `./folds/create_h5_tensors.py`, run the `create_test_tensors_hdf5_logmod2_ph()` function, pointing it to your raw signal data. This will create an HDF5 file with the test tensors.

3. Run `test.py` with the architecture name, checkpoint, test HDF5 file, and the HDF5 file containing the mean and standard deviation used for normalization. Do this for all 5 folds. For example, for fold 1:

   ```
   python test.py 'wresnet34x2' './wresnet34x2 models/wresnet34x2 FOLD1/FOLD1_BEST_wresnet34x2_batchsize96_checkpoint.pth.tar' 'path/to/your/test/hdf5' './folds/mean_stddev_primary_full_v3__384t__512f__logmod2-ph.hdf5'
   ```

   The CSVs with the scores for each fold-model will be saved in the same folder as `test.py`.

   Note: If you don't have a CUDA-enabled GPU, the code will need to be modified to run on the CPU. Reach out to us on the ml4seti Slack channel if you need help with this.

4. Move these CSVs to a separate folder, and run `average_scores.py`, pointing it to this folder and specifying the path for the output CSV:

   ```
   python average_scores.py 'path/to/folder/with/individual/model/scores' 'path/to/output/csv.csv'
   ```