```shell
conda create -n SleepBench python=3.10
conda activate SleepBench
cd pretrain_comparison
pip install -r requirements.txt
```

Please download the data from: <LINK GIVEN IN OPENREVIEW>
Please run:

```shell
cd /pretrain_comparison
python init.py --data_path </your/data/path>
```

Pretraining should be done on two NVIDIA A100 80 GB GPUs for optimal compatibility. If using fewer or smaller GPUs, adjust the batch size accordingly.
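If you run on fewer or smaller GPUs than the reference two-A100 setup, the batch size can be scaled down proportionally. A minimal sketch of one way to do this (the helper name and base values are assumptions, not part of the repo):

```python
def scale_batch_size(base_batch_size: int, base_num_gpus: int, num_gpus: int) -> int:
    """Scale a batch size linearly with the number of available GPUs.

    Hypothetical helper: base_batch_size is whatever the config uses
    for the reference two-GPU setup.
    """
    scaled = base_batch_size * num_gpus // base_num_gpus
    return max(1, scaled)  # never drop below a batch size of 1

# e.g. a config tuned for 2 GPUs, run on a single GPU:
print(scale_batch_size(64, 2, 1))  # prints 32
```

If you also reduce the effective batch size, remember that the learning rate may need retuning.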
Navigate to `pretrain_comparison/comparison/pipeline` and run one of the following, depending on your model:

```shell
torchrun --nproc_per_node=1 --master_port=29501 main_cl.py --config /oak/stanford/groups/mignot/projects/SleepBenchTest/pretrain_comparison/comparison/config/config_CL.yaml
```
- `main_cl.py`: toggle between `CL LOO` and `CL Pairwise` in `pretrain_comparison/comparison/config/config_CL.yaml` under the `model` field.
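The `model` toggle might look like this in `config_CL.yaml` (the exact file structure is an assumption; only the two `model` values come from the instructions above):

```yaml
# pretrain_comparison/comparison/config/config_CL.yaml
model: CL LOO   # or: CL Pairwise
```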
- `main_MAE.py`: MAE (Time, all patches), config: `pretrain_comparison/comparison/config/config_multimodalMAE.yaml`
- `main_masked.py`: MAE (Time, masked patches), config: `pretrain_comparison/comparison/config/config_multimodalMAE_masked.yaml`
- `main_fft.py`: MAE (Freq, all patches), config: `pretrain_comparison/comparison/config/config_fft_MAE.yaml`
- `main_fft_masked.py`: MAE (Freq, masked patches), config: `pretrain_comparison/comparison/config/config_fft_MAE_masked.yaml`
- `main_noise.py`: DAE (Time), config: `pretrain_comparison/comparison/config/config_noise.yaml`
- `main_fft_noise.py`: DAE (Freq), config: `pretrain_comparison/comparison/config/config_fft_noise.yaml`
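All of these entry points follow the same `torchrun` launch pattern shown for `main_cl.py`. A small sketch that builds the launch command for any of the script/config pairs listed (the helper itself is hypothetical; the script names and config paths are taken from this README):

```python
# Script-to-config mapping taken from the lists above.
SCRIPT_CONFIGS = {
    "main_MAE.py": "pretrain_comparison/comparison/config/config_multimodalMAE.yaml",
    "main_masked.py": "pretrain_comparison/comparison/config/config_multimodalMAE_masked.yaml",
    "main_fft.py": "pretrain_comparison/comparison/config/config_fft_MAE.yaml",
    "main_fft_masked.py": "pretrain_comparison/comparison/config/config_fft_MAE_masked.yaml",
    "main_noise.py": "pretrain_comparison/comparison/config/config_noise.yaml",
    "main_fft_noise.py": "pretrain_comparison/comparison/config/config_fft_noise.yaml",
}

def build_cmd(script: str, nproc: int = 1, port: int = 29501) -> str:
    """Build the torchrun launch command for one pretraining script."""
    config = SCRIPT_CONFIGS[script]
    return (f"torchrun --nproc_per_node={nproc} --master_port={port} "
            f"{script} --config {config}")

print(build_cmd("main_noise.py"))
```

If you pretrain on a shared cluster, changing `--master_port` per job avoids port collisions between concurrent runs.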
To generate embeddings, navigate to `pretrain_comparison/comparison/pipeline`, select models via the `models_list` in the initial lines of the script, then run:

```shell
python gen_embeddings_cl.py
```

- `gen_embeddings_cl.py`: for CL subtypes
- `gen_embeddings.py`: for MAE and DAE subtypes
Fine-tuning is optimized for a single NVIDIA A100 80 GB GPU. Adjust batch size if using a different setup.
Set the folder name of the `pretrain_type` in `/pretrain_comparison/fine_tune/config_fine_tune.yaml`. You can find the folders of the models you have generated embeddings for in the `pretrain_comparison/output/final_embeddings` folder. Note that there should be only a single run per pretraining type (i.e. only one folder `pretrain_comparison/output/final_embeddings/<pretrain_type>/`).
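Before editing `config_fine_tune.yaml`, you can sanity-check the single-run assumption with a quick sketch (hypothetical helper, not part of the repo):

```python
from pathlib import Path

def single_run_folder(pretrain_type: str,
                      root: str = "pretrain_comparison/output/final_embeddings") -> Path:
    """Return the unique embeddings folder for a pretraining type.

    Raises if zero or multiple folders match, since fine-tuning expects
    exactly one run per pretraining type.
    """
    matches = [p for p in Path(root).glob(f"{pretrain_type}*") if p.is_dir()]
    if len(matches) != 1:
        raise RuntimeError(
            f"expected exactly one folder for {pretrain_type!r}, found {len(matches)}")
    return matches[0]
```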
Then, open and run the appropriate notebook:
- `fine_tune/ahi_diagnosis/fine_tune_ahi.ipynb`
- `fine_tune/sleep_stage_and_age/fine_tune_sleep_stage_and_age.ipynb`
- `fine_tune/death_and_diagnosis/fine_tune_diagnosis.ipynb`
To evaluate performance, `cd` into `/pretrain_comparison/evaluation` and run the notebooks for the results you want to generate. The corresponding output files will be written to `/pretrain_comparison/output/results`.