This repository is a cleaned open-source layout for **MFMSRNet: An Interpretable Multi-frequency and Multi-scale Riemannian Network for Motor Imagery Decoding**. It is the official PyTorch implementation of MFMSRNet for motor imagery EEG decoding using multi-frequency functional connectivity and multi-scale Riemannian learning.
MFMSRNet is an end-to-end geometry-aware framework that:
- constructs Kernelized Phase Locking Value (KPLV) FC matrices
- performs attention-based Riemannian fusion across frequencies
- extracts multi-scale connectivity patterns
- classifies MI EEG using SPD manifold learning
```
EEG → Band-pass → PLV → KPLV SPD matrices
            ↓
  Riemannian Attention Fusion
            ↓
  Multi-scale Riemannian Network
            ↓
        Classifier
```
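As a shape-level illustration of this flow (the module names and the softmax-weighted fusion below are stand-ins, not the repo's actual classes, which live under `src/mfmsrnet/`):

```python
import torch

# Shape-level sketch of the MFMSRNet data flow (illustrative only;
# the real blocks live under src/mfmsrnet/models/).
n_bands, n_channels, n_classes = 4, 22, 4   # BCI-IV 2a setting

# One KPLV SPD matrix per frequency band for a single trial.
spd_per_band = torch.stack([torch.eye(n_channels) for _ in range(n_bands)])

# Attention across bands (a plain softmax-weighted sum here, standing in
# for the actual Riemannian attention fusion).
attn = torch.softmax(torch.randn(n_bands), dim=0)
fused = (attn[:, None, None] * spd_per_band).sum(dim=0)   # (22, 22)

# The downstream network consumes the fused matrix and emits class logits.
classifier = torch.nn.Linear(n_channels * n_channels, n_classes)
logits = classifier(fused.flatten())
print(tuple(logits.shape))  # (4,)
```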
```
MFMSRNet/
├── configs/
│   ├── bciiv2a.yaml
│   └── llmi.yaml
├── scripts/
│   ├── train_bciiv2a.sh
│   ├── train_llmi.sh
│   ├── test_bciiv2a.sh
│   └── test_llmi.sh
├── src/mfmsrnet/
│   ├── data/
│   ├── engine/
│   ├── fc/
│   ├── models/
│   └── utils/
├── build_fc.py
├── train.py
├── test.py
├── requirements.txt
└── pyproject.toml
```
```
pip install -r requirements.txt
```

Or install as a package:

```
pip install -e .
```

Download from: https://www.bbci.de/competition/iv/

Build the FC matrices. Example for BCI-IV 2a:

```
python build_fc.py --config configs/bciiv2a.yaml
```

Example for LL-MI:

```
python build_fc.py --config configs/llmi.yaml
```

Train:

```
python train.py --config configs/bciiv2a.yaml
python train.py --config configs/llmi.yaml
```

Evaluate a trained checkpoint:

```
python test.py --config configs/bciiv2a.yaml --checkpoint ./outputs/bciiv2a/best_model.pt
python test.py --config configs/llmi.yaml --checkpoint ./outputs/llmi/best_model.pt
```

Example:
`configs/bciiv2a.yaml`:

```yaml
dataset: bciiv2a
num_classes: 4
bands:
  - [8, 13]
  - [13, 20]
  - [20, 30]
  - [30, 45]
sigma: 0.7
batch_size: 64
epochs: 200
lr: 1e-3
```
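One caveat, assuming the repo loads these files with PyYAML: under YAML 1.1 resolution, a bare `1e-3` (no decimal point) parses as a *string*, not a float, so writing `1.0e-3` is the unambiguous form:

```python
import yaml

# PyYAML's YAML 1.1 resolver requires a decimal point for scientific
# notation, so a bare `1e-3` comes back as a string.
cfg = yaml.safe_load("lr: 1e-3\nsigma: 0.7\n")
print(type(cfg["lr"]).__name__)     # str
print(type(cfg["sigma"]).__name__)  # float

# An explicit mantissa avoids the issue.
cfg = yaml.safe_load("lr: 1.0e-3\n")
print(type(cfg["lr"]).__name__)     # float
```

If the repo's config loader already casts `lr` to `float`, this is harmless; otherwise the safe fix is to write `1.0e-3` in the YAML.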
The main hyperparameters of MFMSRNet and recommended tuning ranges are:
| Parameter | Description | Recommended Range |
|---|---|---|
| sigma | Gaussian kernel width | 0.5 – 1.5 |
| reg | SPD regularization | 1e-7 – 1e-3 |
| bands | frequency bands | paper default recommended |
Recommended strategy:
- Fix the bands to [8–13], [13–20], [20–30], [30–45] Hz (the paper default)
- Grid search sigma
- Keep reg small but stable
Example:
sigma ∈ {0.5, 0.7, 1.0, 1.2}
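A minimal driver for that grid, building the command lines only (the `--sigma` override flag is hypothetical; check `build_fc.py --help` for the real interface):

```python
import shlex

# Hypothetical sigma grid-search driver: constructs the command lines.
# The `--sigma` override flag is an assumption; adapt to build_fc.py's CLI.
sigmas = [0.5, 0.7, 1.0, 1.2]
commands = [
    f"python build_fc.py --config configs/bciiv2a.yaml --sigma {s}"
    for s in sigmas
]
for cmd in commands:
    print(shlex.split(cmd))  # pass this list to subprocess.run(...) to launch
```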
Training hyperparameters:

| Parameter | Range |
|---|---|
| lr | 1e-4 – 1e-3 |
| batch_size | 32 – 128 |
| weight_decay | 1e-5 – 1e-3 |
| epochs | 150 – 300 |
Recommended:
- lr = 1e-3
- batch_size = 64
- weight_decay = 1e-4
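With those recommended values, a typical optimizer setup looks like the following (Adam is an assumption here; substitute whichever optimizer the paper specifies):

```python
import torch

# Optimizer with the recommended training hyperparameters.
# Adam is an illustrative choice, not necessarily the paper's.
model = torch.nn.Linear(484, 4)  # stand-in for the MFMSRNet model
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-3, weight_decay=1e-4
)
print(optimizer.param_groups[0]["lr"],
      optimizer.param_groups[0]["weight_decay"])  # 0.001 0.0001
```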
Model hyperparameters:

| Parameter | Range |
|---|---|
| W_local | 2 – 4 |
| stride_local | 1 – 2 |
| hidden_dim | 128 – 512 |
Paper setting:
- W_local = 2
- stride_local = 2
- hidden_dim = 256
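One plausible reading of `W_local` and `stride_local` (an assumption about the multi-scale module, not taken from the released code) is sliding a small principal submatrix along the diagonal of the fused SPD matrix to obtain local connectivity patches:

```python
import torch

# Hypothetical illustration of W_local / stride_local: slide a
# W_local x W_local principal submatrix along the diagonal of an
# n x n SPD matrix to obtain local connectivity patches.
def local_windows(spd: torch.Tensor, w: int = 2, stride: int = 2):
    n = spd.shape[0]
    return [spd[i:i + w, i:i + w] for i in range(0, n - w + 1, stride)]

spd = torch.eye(22)                          # 22 channels (BCI-IV 2a)
patches = local_windows(spd, w=2, stride=2)  # paper setting
print(len(patches), tuple(patches[0].shape))  # 11 (2, 2)
```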
- The released FC construction implements the KPLV pipeline used in the paper: band-pass filtering, Hilbert phase extraction, PLV computation, distance mapping, kernelization, regularization, trace normalization, and symmetrization.
- The released model implements the shared method family used across both datasets: per-band SPD encoding, attention-based Riemannian fusion, BiSRe blocks, and multi-scale Riemannian feature extraction.
- Dataset-specific differences are handled by YAML configs instead of duplicated full scripts.
- Replace the placeholder file paths in `configs/*.yaml` with your actual local dataset paths.
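The listed FC steps can be sketched with NumPy/SciPy in the stated order (a simplified sketch: the filter order, the Gaussian kernel on `1 - PLV`, and the `reg` constant are illustrative assumptions, not the released implementation):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Simplified KPLV sketch following the listed step order; the filter
# order, kernel form, and reg value are illustrative choices.
def kplv(eeg, fs=250.0, band=(8.0, 13.0), sigma=0.7, reg=1e-5):
    # 1) band-pass filtering (4th-order Butterworth, zero-phase)
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, eeg, axis=-1)
    # 2) Hilbert phase extraction
    phase = np.angle(hilbert(x, axis=-1))
    # 3) PLV: |time average of exp(i * phase difference)| per channel pair
    z = np.exp(1j * phase)
    plv = np.abs(z @ z.conj().T) / eeg.shape[-1]
    # 4) distance mapping and 5) Gaussian kernelization
    k = np.exp(-((1.0 - plv) ** 2) / (2.0 * sigma ** 2))
    # 6) regularization, 7) trace normalization, 8) symmetrization
    k = k + reg * np.eye(k.shape[0])
    k = k / np.trace(k)
    return 0.5 * (k + k.T)

eeg = np.random.randn(22, 1000)  # 22 channels, 4 s at 250 Hz
m = kplv(eeg)
print(m.shape, np.isclose(np.trace(m), 1.0))  # (22, 22) True
```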
