[CVPR 2024 Highlight] Official PyTorch implementation of "MindBridge: A Cross-Subject Brain Decoding Framework"


MindBridge: A Cross-Subject Brain Decoding Framework

[teaser figure]

Shizun Wang, Songhua Liu, Zhenxiong Tan, Xinchao Wang
National University of Singapore

CVPR 2024 Highlight
Project | arXiv

News

[2024.04.12] MindBridge's paper, project and code are released.
[2024.04.05] MindBridge is selected as CVPR 2024 Highlight paper!
[2024.02.27] MindBridge is accepted by CVPR 2024!

Overview

[method overview figure]

We present MindBridge, a novel approach that achieves cross-subject brain decoding with a single model. The proposed framework establishes a generic paradigm that addresses three challenges: 1) the inherent variability in input dimensions across subjects due to differences in brain size; 2) the unique intrinsic neural patterns that influence how different individuals perceive and process sensory information; 3) the limited data available for new subjects in real-world scenarios, which hampers the performance of decoding models. Notably, through cycle reconstruction, MindBridge enables the synthesis of novel brain signals, which can also serve as pseudo data augmentation. Within this framework, a pretrained MindBridge can be adapted to a new subject using less data.
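The first challenge above, variable input dimensions across subjects, is commonly handled by pooling each subject's fMRI vector to a fixed length before it enters a shared backbone. The following is our own minimal pure-Python sketch of adaptive average pooling (mirroring the semantics of `torch.nn.AdaptiveAvgPool1d`), not the repository's exact implementation:

```python
import math

def adaptive_avg_pool(signal, out_len):
    """Average-pool a 1-D signal of arbitrary length down to out_len bins.

    Bin i covers indices [floor(i*n/out_len), ceil((i+1)*n/out_len)),
    so inputs of different lengths map to the same fixed size.
    """
    n = len(signal)
    pooled = []
    for i in range(out_len):
        start = (i * n) // out_len
        end = math.ceil((i + 1) * n / out_len)
        window = signal[start:end]
        pooled.append(sum(window) / len(window))
    return pooled

# Two "subjects" with different voxel counts map to the same fixed size:
subj_a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]  # 6 voxels
subj_b = [1.0, 2.0, 3.0, 4.0]            # 4 voxels
print(adaptive_avg_pool(subj_a, 2))  # → [2.0, 5.0]
print(adaptive_avg_pool(subj_b, 2))  # → [1.5, 3.5]
```

Both subjects end up with a vector of the same length, which is what lets one model consume them all.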

Installation

  1. Agree to the Natural Scenes Dataset's Terms and Conditions and fill out the NSD Data Access form

  2. Download this repository: git clone https://github.com/littlepure2333/MindBridge.git

  3. Create a conda environment and install the packages necessary to run the code.

conda create -n mindbridge python=3.10.8 -y
conda activate mindbridge
pip install -r requirements.txt

Preparation

Data

Download the essential files we used from the NSD dataset, which include nsd_stim_info_merged.csv. Also download the COCO captions (captions_train2017.json and captions_val2017.json) from this link. We use the same preprocessed data as MindEye, which can be downloaded from Hugging Face; extract all files from the compressed tar archives. Then organize the data as follows:

Data Organization
data/natural-scenes-dataset
├── nsddata
│   └── experiments
│       └── nsd
│           └── nsd_stim_info_merged.csv
├── nsddata_stimuli
│   └── stimuli
│       └── nsd
│           └── annotations
│              ├── captions_train2017.json
│              └── captions_val2017.json
└── webdataset_avg_split
    ├── test
    │   ├── subj01
    │   │   ├── sample000000349.coco73k.npy
    │   │   ├── sample000000349.jpg
    │   │   ├── sample000000349.nsdgeneral.npy
    │   │   └── ...
    │   └── ...
    ├── train
    │   ├── subj01
    │   │   ├── sample000000300.coco73k.npy
    │   │   ├── sample000000300.jpg
    │   │   ├── sample000000300.nsdgeneral.npy
    │   │   └── ...
    │   └── ...
    └── val
        ├── subj01
        │   ├── sample000000000.coco73k.npy
        │   ├── sample000000000.jpg
        │   ├── sample000000000.nsdgeneral.npy
        │   └── ...
        └── ...
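Each sample in webdataset_avg_split is a triplet of files sharing one stem, e.g. sample000000349 with .coco73k.npy, .jpg, and .nsdgeneral.npy extensions. As a quick sanity check of the layout above, a small helper of our own (not part of the repository) can enumerate the expected paths per sample and report any that are missing:

```python
from pathlib import Path

# The three per-sample files shown in the directory tree above.
EXTENSIONS = (".coco73k.npy", ".jpg", ".nsdgeneral.npy")

def expected_files(split_dir, subject, stem):
    """Return the three per-sample paths for one stimulus, following
    the layout above (e.g. .../train/subj01/sample000000300.jpg)."""
    base = Path(split_dir) / subject
    return [base / f"{stem}{ext}" for ext in EXTENSIONS]

def missing_files(split_dir, subject, stem):
    """List which of the three expected files are absent on disk."""
    return [p for p in expected_files(split_dir, subject, stem)
            if not p.exists()]

paths = expected_files(
    "data/natural-scenes-dataset/webdataset_avg_split/train",
    "subj01", "sample000000300")
```

Running `missing_files` over every stem in a split is a cheap way to catch an incomplete tar extraction before training starts.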

Checkpoints

You can download our pretrained MindBridge checkpoints for subjects 01, 02, 05, and 07 from Hugging Face, and place the folders containing the checkpoints under the ./train_logs/ directory.

Training

The training commands are described in the ./scripts folder, and the available command options are listed in ./src/options.py. For example, you can resume training by adding the --resume option to a command. Training progress can be monitored through wandb.
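For illustration only (the real flags and defaults live in ./src/options.py), a boolean flag such as --resume is typically declared with argparse like this hypothetical sketch; the option names here are assumptions, not the repository's actual ones:

```python
import argparse

# Hypothetical sketch of option parsing; consult ./src/options.py for
# the repository's actual option names and defaults.
parser = argparse.ArgumentParser(
    description="MindBridge training options (sketch)")
parser.add_argument("--resume", action="store_true",
                    help="resume training from the latest checkpoint")
parser.add_argument("--model_name", type=str, default="mindbridge",
                    help="run name used for logging and checkpoints")

args = parser.parse_args(["--resume"])
print(args.resume)  # True
```

With `action="store_true"`, the flag defaults to False and flips to True when present, which is why adding --resume to a command is all that is needed.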

Training on a single subject

This script trains the per-subject-per-model version of MindBridge (referred to as "Vanilla" in the paper) on one subject (e.g. subj01). You can specify the subject in the script.

bash scripts/train_single.sh

Training on multiple subjects

This script trains MindBridge on multiple subjects (e.g. subj01, 02, 05, 07). You can specify the subjects in the script.

bash scripts/train_bridge.sh

Adapting to a new subject

Once MindBridge has been trained on known "source subjects" (e.g. subj01, 02, 05), you can adapt it to a new "target subject" (e.g. subj07) using a limited amount of data (e.g. 4000 data points). You can specify the source subjects, the target subject, and the data volume (length) in the script.

bash scripts/adapt_bridge.sh

Reconstructing and evaluating

This script reconstructs one subject's images (e.g. subj01) on the test set using a MindBridge model (e.g. one trained on subj01, 02, 05, 07), then computes all the metrics. The evaluated metrics are saved to a CSV file. You can specify the MindBridge model and the subject in the script.

bash scripts/inference.sh
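The metrics CSV mentioned above accumulates one row per evaluation run. As a sketch of that shape, here is our own small helper (the metric names and values below are hypothetical placeholders, and the repository's inference script has its own writer):

```python
import csv
from pathlib import Path

def append_metrics(csv_path, row):
    """Append one row of metrics to a CSV file, writing the header
    only when the file is first created. Illustration helper, not
    part of the MindBridge codebase."""
    path = Path(csv_path)
    write_header = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if write_header:
            writer.writeheader()
        writer.writerow(row)

# Hypothetical metric names/values, just to show the CSV shape:
append_metrics("metrics_demo.csv",
               {"subject": "subj01", "PixCorr": 0.0, "SSIM": 0.0})
with open("metrics_demo.csv", newline="") as f:
    rows = list(csv.DictReader(f))
```

Appending rather than overwriting lets one CSV collect results across subjects and model variants for later comparison.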

TODO List

  • Release pretrained checkpoints.
  • Train MindBridge on all 8 subjects in the NSD dataset.

Citation

@inproceedings{wang2024mindbridge,
  title={Mindbridge: A cross-subject brain decoding framework},
  author={Wang, Shizun and Liu, Songhua and Tan, Zhenxiong and Wang, Xinchao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11333--11342},
  year={2024}
}

Acknowledgement

We extend our gratitude to MindEye and nsd_access for generously sharing their codebases, upon which ours is built. We are indebted to the NSD dataset for providing high-quality, publicly available data. Our appreciation also extends to Accelerate and DeepSpeed for simplifying efficient multi-GPU training, which enabled us to train on 24 GB-vRAM NVIDIA A5000 GPUs. Special thanks to Xingyi Yang and Yifan Zhang for their invaluable discussions.

