- python == 3.8.5
- torch == 1.8.1
- numpy == 1.20.1
- scipy == 1.6.1
- mne == 0.22.0
- scikit-learn == 0.23.2
- pyriemann == 0.2.6
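To set up a matching environment, something along these lines should work (a sketch; moabb is additionally required for the dataset download described below):

pip install torch==1.8.1 numpy==1.20.1 scipy==1.6.1 mne==0.22.0 scikit-learn==0.23.2 pyriemann==0.2.6 moabb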
- Please manually download the datasets BNCI2014001, BNCI2014002, and BNCI2014004 via MOABB.
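A minimal sketch of the download with MOABB (the dataset class names follow the MOABB naming that matches the pinned package versions above, and the files go to MOABB's default data directory):

from moabb.datasets import BNCI2014001, BNCI2014002, BNCI2014004

for dataset_cls in (BNCI2014001, BNCI2014002, BNCI2014004):
    dataset = dataset_cls()
    dataset.download()  # fetch the raw files into MOABB's local cache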
- bci: common approaches in BCIs
- bsfda: black-box source model for source-free domain adaptation:
  - Source: source-only baseline
  - Source HypOthesis Transfer (SHOT-IM, SHOT)
  - ASFA, ASFA-aug: our proposed approaches; ASFA-aug adds data augmentation when performing knowledge distillation
- libs: shared functions used in this project:
  - augment: data augmentation functions
  - cdan, dan, dann, grl, jan, kernel: files for existing unsupervised domain adaptation approaches, code from https://github.com/thuml/Transfer-Learning-Library
  - dataLoad: load EEG data and compute tangent-space features (see the sketch after this list)
  - DataIterator: data iterator used when training deep networks
  - network, eegnet, deepconvent, DomainDiscriminator: model definitions
  - loss: loss functions
  - utils: commonly used functions
- sfda: approaches for source-free domain adaptation:
  - Source: source-only baseline
  - BAIT
  - Source HypOthesis Transfer (SHOT-IM, SHOT)
  - ASFA: our proposed approach
- uda: approaches for unsupervised domain adaptation:
  - Conditional domain adversarial network (CDAN/CDAN-E)
  - Domain adaptation network (DAN)
  - Domain-adversarial neural network (DANN)
  - Joint adaptation network (JAN)
  - Minimum class confusion (MCC)
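As a reference for the dataLoad step above, here is a minimal sketch of tangent-space feature extraction with pyriemann (the epoch shape and the covariance estimator are illustrative assumptions, not necessarily the exact settings used in this repo):

import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace

# X: band-pass filtered EEG epochs, shape (n_trials, n_channels, n_samples)
X = np.random.randn(100, 22, 750)  # placeholder with a BNCI2014001-like shape

covs = Covariances(estimator='lwf').fit_transform(X)        # one SPD covariance matrix per trial
feats = TangentSpace(metric='riemann').fit_transform(covs)  # vectorize in the Riemannian tangent space
print(feats.shape)  # (100, 253), i.e. n_channels * (n_channels + 1) / 2 features per trial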
Once the datasets are prepared, you can run the corresponding .py file directly.
For example,
cd ASFA
python sfda/ASFA.py --gpu_id '0' --device 'cuda' --fileroot your_data_file_path --output ASFA
If you find this code useful for your research, please cite our paper:
@article{XiaASFA2022,
  title={Privacy-preserving domain adaptation for motor imagery-based brain-computer interfaces},
  author={Kun Xia and Lingfei Deng and Wlodzislaw Duch and Dongrui Wu},
  journal={IEEE Trans. on Biomedical Engineering},
  year={2022},
  volume={69},
  number={11},
  pages={3365--3376}
}