
CMKD-MINDS: Learning an Augmented sEMG Representation via Cross-modality Knowledge Distillation


This is the official PyTorch implementation of "Cross Modality Knowledge Distillation Between A-Mode Ultrasound and Surface Electromyography" (IEEE Transactions on Instrumentation and Measurement, 2022).

Environment

The code was developed with Python 3.7 on Ubuntu 18.04. An NVIDIA GPU is required.

Data preparation

The complete hybrid sEMG/AUS dataset has not been publicly released yet. For code testing, we provide the collected sEMG/AUS data of one subject, which can be downloaded from Baidu Disk (code: h99k).

Your directory tree should look like this:

${ROOT}/data
├── EMG
│   ├── s1_***_EMG.txt
│   ├── s2_***_EMG.txt
│   │   ...
│   └── s8_***_EMG.txt
└── US
    ├── s1_***_US.txt
    ├── s2_***_US.txt
    │   ...
    └── s8_***_US.txt
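
Below is a minimal loading sketch for this layout. It assumes the .txt files are plain-text numeric arrays readable by numpy.loadtxt; the repo's own dataset classes may parse them differently, so treat this as illustrative only.

# Hypothetical helper: enumerate and load one subject's trials for a given modality
from pathlib import Path
import numpy as np

def load_subject(root, subject_id, modality):
    # modality is "EMG" or "US"; matches e.g. data/EMG/s1_*_EMG.txt
    files = sorted(Path(root, modality).glob(f"s{subject_id}_*_{modality}.txt"))
    return [np.loadtxt(f) for f in files]

emg_trials = load_subject("./data", 1, "EMG")  # list of per-trial arrays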

Usage

Installation

  1. Clone this repo:
     git clone https://github.com/increase24/CMKD-MINDS.git
  2. Install dependencies:
     pip install -r requirements.txt

Training

To train a network on a single modality (sEMG or AUS), run the script tools/train.py with the desired model and modality configuration. For instance:

# train network MINDS on sEMG modality
python ./tools/train.py --config "./configs/USEMG_single.yaml" --modelName "MINDS" --modality "EMG"

# train network EUNet on US modality
python ./tools/train.py --config "./configs/USEMG_single.yaml" --modelName "EUNet" --modality "US"

Evaluation

To evaluate a network on a single modality (sEMG or AUS), run the script tools/test.py with the desired model and modality configuration. For instance:

# test network MINDS on sEMG modality
python ./tools/test.py --config "./configs/USEMG_single.yaml" --modelName "MINDS" --modality "EMG"

# test network EUNet on US modality
python ./tools/test.py --config "./configs/USEMG_single.yaml" --modelName "EUNet" --modality "US"

Knowledge distillation

We take distilling MKCNN(US) into MKCNN(EMG) as an example.

# First, train the MKCNN network on the US modality to obtain the teacher network weights
python ./tools/train.py --config "./configs/USEMG_single.yaml" --modelName "MKCNN" --modality "US"

# Then use MKCNN(US) as the teacher to guide the training of the student network MKCNN(EMG)
python ./tools/train_cmkd.py --config configs/USEMG_cmkd.yaml --model_us MKCNN --model_emg MKCNN --alpha 0.8 --T 20
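
The --alpha and --T flags suggest the standard soft-target distillation loss of Hinton et al. (2015): a KL-divergence term between temperature-softened teacher and student logits, blended with ordinary cross-entropy by weight alpha. The sketch below shows how such a loss is typically written; it is an assumption, not necessarily the exact code in tools/train_cmkd.py.

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, alpha=0.8, T=20.0):
    # Soft-target term: KL divergence between temperature-softened distributions;
    # the T*T factor keeps its gradient magnitude comparable to the hard term
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the gesture labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard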

Results

Figure 1. The accuracy boxplots of five classifiers on two modalities: (a) sEMG modality, (b) AUS modality. The Wilcoxon signed-rank test was applied to compare the proposed network MINDS with the other models. A single asterisk "*" denotes 0.01 ≤ p < 0.05 and a double asterisk "**" denotes p < 0.01.
| Model | sEMG (w/o KD) | sEMG (w/ KD) | $H_0$ (p-value) |
| --- | --- | --- | --- |
| Multi-stream CNN | 74.62 ± 6.68 | 75.48 ± 6.90 | 0 (0.0156) |
| EUNet | 79.59 ± 6.08 | 81.16 ± 6.11 | 0 (0.0078) |
| MKCNN | 82.69 ± 4.94 | 84.59 ± 5.36 | 0 (0.0078) |
| XceptionTime | 88.30 ± 4.60 | 89.06 ± 4.82 | 0 (0.0234) |
| MINDS (ours) | 89.05 ± 4.71 | 90.06 ± 4.52 | 0 (0.0078) |

Accuracy comparison of the sEMG modality with knowledge distillation ("sEMG (w/ KD)") and without ("sEMG (w/o KD)"). The Wilcoxon signed-rank test is applied to verify the significance of the improvement obtained by knowledge distillation. The null hypothesis is rejected when $H_0 = 0$ ($p < 0.05$).
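
For reference, a test of this kind can be run with scipy.stats.wilcoxon on paired per-subject accuracies. The numbers below are hypothetical placeholders (for the eight subjects s1-s8), not values from the paper.

from scipy.stats import wilcoxon

# Hypothetical paired per-subject accuracies, without and with KD
acc_no_kd = [0.81, 0.84, 0.79, 0.88, 0.83, 0.85, 0.80, 0.86]
acc_kd    = [0.83, 0.86, 0.80, 0.90, 0.85, 0.86, 0.82, 0.88]

stat, p = wilcoxon(acc_no_kd, acc_kd)
print(f"p = {p:.4f}, reject H0: {p < 0.05}")  # H0 = 0 in the table means rejection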

Citation

If you find this repository useful for your research, please cite with:

@article{zeng2022cross,
  title={Cross Modality Knowledge Distillation Between A-Mode Ultrasound and Surface Electromyography},
  author={Zeng, Jia and Sheng, Yixuan and Yang, Yicheng and Zhou, Ziliang and Liu, Honghai},
  journal={IEEE Transactions on Instrumentation and Measurement},
  volume={71},
  pages={1--9},
  year={2022},
  publisher={IEEE}
}

@inproceedings{zeng2020feature,
  title={Feature fusion of sEMG and ultrasound signals in hand gesture recognition},
  author={Zeng, Jia and Zhou, Yu and Yang, Yicheng and Wang, Jiaole and Liu, Honghai},
  booktitle={2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
  pages={3911--3916},
  year={2020},
  organization={IEEE}
}

Contact

If you have any questions, feel free to contact me at jia.zeng@sjtu.edu.cn or through GitHub issues.
