Paper: https://arxiv.org/pdf/2208.12266.pdf
Works with the Gwilliams2022 and Brennan2018 datasets.
- Full reproducibility support. Will be useful for HP tuning.
- Match accuracy to numbers reported in the paper.
- Work around the huge memory consumption issue in Gwilliams2022 multiprocessing
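Full reproducibility support usually boils down to seeding every RNG in play before training. A minimal sketch of the idea, assuming a helper like the hypothetical `seed_everything` below (not this repo's actual API); a real training script would also seed torch and CUDA:

```python
import os
import random

import numpy as np


def seed_everything(seed: int) -> None:
    """Seed every RNG we know about so repeated runs are identical.

    A real training script would also call torch.manual_seed(seed) and
    torch.cuda.manual_seed_all(seed); those lines are omitted to keep
    this sketch dependency-free.
    """
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)


seed_everything(0)
first = np.random.rand(3)
seed_everything(0)
second = np.random.rand(3)
assert np.allclose(first, second)  # same seed, same draws
```

With this in place, hyperparameter sweeps compare configurations rather than RNG noise.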
Run `python train.py dataset=Brennan2018 rebuild_datasets=True`.

When `rebuild_datasets=False`, the existing pre-processed M/EEG data and pre-computed embeddings are used. This is useful if you want to run the model on exactly the same data and embeddings several times. Otherwise, the audio embeddings are pre-computed and the M/EEG data are pre-processed from scratch before training begins.
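The `rebuild_datasets` flag presumably implements a simple cache-or-rebuild pattern. A hypothetical sketch of that logic (the `load_or_build` helper and the pickle cache are assumptions for illustration, not this repo's code):

```python
import pickle
from pathlib import Path


def load_or_build(cache_path: Path, build_fn, rebuild: bool = False):
    """Return cached preprocessed data unless a rebuild is forced.

    If rebuild is False and the cache file exists, the expensive build_fn
    (M/EEG preprocessing, embedding computation) is skipped entirely.
    """
    if not rebuild and cache_path.exists():
        with cache_path.open("rb") as f:
            return pickle.load(f)
    data = build_fn()
    cache_path.parent.mkdir(parents=True, exist_ok=True)
    with cache_path.open("wb") as f:
        pickle.dump(data, f)
    return data
```

The first run pays the preprocessing cost and writes the cache; every later run with `rebuild=False` just deserializes it, which is why repeated experiments see exactly the same data.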
Run `python train.py dataset=Gwilliams2022 rebuild_datasets=True`.

When `rebuild_datasets=False`, the existing pre-processed M/EEG data and pre-computed embeddings are used. This is useful if you want to run the model on exactly the same data and embeddings several times. Pre-processing Gwilliams2022 and computing embeddings takes ~30 minutes on 20 cores, so set `rebuild_datasets=False` for subsequent runs (or don't specify it, because `rebuild_datasets=False` is the default). Otherwise, the audio embeddings are pre-computed and the M/EEG data are pre-processed from scratch before training begins.
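One way to keep multi-core preprocessing from blowing up memory is to stream per-subject results with `imap` instead of collecting everything with `map`. A sketch of the pattern (the worker and helper names are hypothetical; a `ThreadPool` stands in for `multiprocessing.Pool` so the example runs anywhere, but the streaming pattern is the same):

```python
from multiprocessing.pool import ThreadPool


def preprocess(subject_id: int) -> list:
    # Stand-in for real per-subject M/EEG preprocessing.
    return [subject_id] * 3


def preprocess_all(subject_ids, n_workers: int = 4):
    """Process subjects in parallel while keeping memory bounded.

    imap yields results one at a time in order, so each preprocessed
    subject can be consumed (e.g. written to disk) and freed, instead of
    the whole dataset accumulating in RAM as Pool.map would cause.
    """
    with ThreadPool(n_workers) as pool:
        for result in pool.imap(preprocess, subject_ids):
            yield result
```

The caller writes each yielded subject to disk immediately, so peak memory stays roughly one subject's worth regardless of worker count.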
To do that, set `entity` and `project` in the `wandb` section of `config.yaml`.
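For reference, the relevant fragment of `config.yaml` might look like this (the key layout is an assumption; only `entity` and `project` under `wandb` are prescribed above):

```yaml
wandb:
  entity: your-wandb-entity   # your W&B username or team
  project: speech-decoding    # hypothetical project name to log runs under
```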
Gwilliams et al., 2022
- Dataset: https://osf.io/ag3kj/
Brennan et al., 2019
- Paper: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0207741
- Dataset: https://deepblue.lib.umich.edu/data/concern/data_sets/bg257f92t
You will need `S01.mat` to `S49.mat` placed under `data/Brennan2018/raw/` and `audio.zip` unzipped to `data/Brennan2018/audio/` to run the code.
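A quick way to sanity-check the layout before training (the helper name is hypothetical; the expected paths are exactly the ones listed above):

```python
from pathlib import Path


def missing_brennan_files(root: str = "data/Brennan2018") -> list:
    """Return the expected Brennan2018 paths that are not in place yet."""
    root_path = Path(root)
    # S01.mat .. S49.mat under raw/, plus the unzipped audio/ directory.
    expected = [root_path / "raw" / f"S{i:02d}.mat" for i in range(1, 50)]
    expected.append(root_path / "audio")
    return [str(p) for p in expected if not p.exists()]
```

An empty return value means the data directory matches what the code expects.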