Updating to MOABB 1.0.0 (#553)
* fix: switch setup.py/requirements.txt to pyproject.toml

* enh: add changelog entry for PR 553

* fix: add docstring_inheritance dependency

* fix: change to underscore naming because of the setup lib

* fix: add tool.setuptools section

* fix: update the test

* fix: setup tooling on Python 3.11

* fix: update the tutorial

* enh: add support for Python 3.11

* style: flake8 fixes

* fix: the tutorial

* fix: the tutorial

---------

Co-authored-by: Sylvain Chevallier <sylain.chevallier@universite-paris-saclay.fr>
Co-authored-by: bruAristimunha <a.bruno@aluno.ufabc.edu.br>
3 people committed Oct 27, 2023
1 parent 91bfb4c commit 7263a4c
Showing 16 changed files with 134 additions and 133 deletions.
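The diff splits into two themes: CI/packaging updates (actions/checkout@v4, actions/cache@v3, Python 3.11, pyproject.toml) and MOABB 1.0.0 API renames (dataset names gain an underscore, session split keys become '0train'/'1test'). A quick environment check before running the updated examples might look like this hedged sketch (the moabb >= 1.0.0 floor is inferred from the PR title, not from a pin shown in this diff):

import braindecode
import moabb

print("braindecode:", braindecode.__version__)
print("moabb:", moabb.__version__)  # renamed keys below assume moabb >= 1.0.0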
6 changes: 3 additions & 3 deletions .github/workflows/docs.yml
@@ -13,13 +13,13 @@ jobs:
     steps:
       ## Install Braindecode
       - name: Checking Out Repository
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
       # Cache MNE Data
       # The cache key here is fixed except for os
       # so if you download a new mne dataset in the code, best to manually increment the key below
       - name: Create/Restore MNE Data Cache
         id: cache-mne_data
-        uses: actions/cache@v2
+        uses: actions/cache@v3
         with:
           path: ~/mne_data
           key: ${{ runner.os }}-v3
@@ -38,7 +38,7 @@ jobs:
       - run: python -c "import braindecode; print(braindecode.__version__)"

       - name: Checking Out Repository
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
       - name: Create Docs
         run: |
           cd docs
6 changes: 3 additions & 3 deletions .github/workflows/tests.yml
@@ -15,17 +15,17 @@ jobs:
     fail-fast: false
     matrix:
       os: [ "ubuntu-latest", "macos-latest", "windows-latest" ]
-      python-version: ["3.8", "3.9", "3.10"]
+      python-version: ["3.8", "3.9", "3.10", "3.11"]
     steps:
       ## Install Braindecode
       - name: Checking Out Repository
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
       # Cache MNE Data
       # The cache key here is fixed except for os
       # so if you download a new mne dataset in the code, best to manually increment the key below
       - name: Create/Restore MNE Data Cache
         id: cache-mne_data
-        uses: actions/cache@v2
+        uses: actions/cache@v3
         with:
           path: ~/mne_data
           key: ${{ runner.os }}-v3
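For local runs against this matrix, a small guard can mirror the CI bounds (a sketch only; the 3.8-3.11 range restates the matrix above and is not an official support statement):

import sys

# Bounds mirror the CI matrix above; treat as an assumption, not policy.
if not ((3, 8) <= sys.version_info[:2] <= (3, 11)):
    raise RuntimeError("This commit is CI-tested on Python 3.8-3.11 only")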
1 change: 1 addition & 0 deletions docs/whats_new.rst
@@ -57,6 +57,7 @@ Enhancements
 - Add basic training example with MNE epochs (:gh:`539` by `Pierre Guetschel`_)
 - Log validation accuracy in :class:`braindecode.EEGClassifier` (:gh:`541` by `Pierre Guetschel`_)
 - Better type hints in :mod:`braindecode.augmentation.base` (:gh:`551` by `Valentin Iovene`_)
+- Support for MOABB 1.0.0 and switch to pyproject.toml (:gh:`553` by `Sylvain Chevallier`_)

 Bugs
 ~~~~
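The two user-facing renames behind this changelog entry, condensed into one hedged sketch (the names are taken from the diffs below; this is not an exhaustive list of MOABB 1.0.0 changes):

from braindecode.datasets import MOABBDataset

# Dataset names gained an underscore before the numeric suffix:
dataset = MOABBDataset(dataset_name="BNCI2014_001",  # was "BNCI2014001"
                       subject_ids=[3])

# Session keys returned by windows_dataset.split('session') changed from
# 'session_T'/'session_E' to '0train'/'1test'; see the example diffs below.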
5 changes: 2 additions & 3 deletions examples/advanced_training/plot_data_augmentation.py
@@ -89,9 +89,8 @@
 #

 splitted = windows_dataset.split('session')
-train_set = splitted['session_T']
-valid_set = splitted['session_E']
-
+train_set = splitted['0train']  # Session train
+valid_set = splitted['1test']  # Session evaluation
 ######################################################################
 # Defining a Transform
 # --------------------
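Since split('session') returns a plain dict keyed by session name, a defensive variant can list the keys before indexing (a sketch assuming the tutorial's windows_dataset and a MOABB >= 1.0.0 install):

splitted = windows_dataset.split('session')
print(sorted(splitted))  # expected under MOABB 1.0.0: ['0train', '1test']
train_set = splitted['0train']
valid_set = splitted['1test']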
4 changes: 2 additions & 2 deletions examples/advanced_training/plot_data_augmentation_search.py
@@ -123,8 +123,8 @@


 splitted = windows_dataset.split('session')
-train_set = splitted['session_T']
-eval_set = splitted['session_E']
+train_set = splitted['0train']  # Session train
+eval_set = splitted['1test']  # Session evaluation

 ######################################################################
 # Defining a list of transforms
2 changes: 1 addition & 1 deletion examples/datasets_io/plot_split_dataset.py
@@ -48,7 +48,7 @@

 splits = dataset.split("run")
 print(splits)
-splits["run_4"].description
+splits["4"].description

 ###############################################################################
 # By row index
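The same renaming applies to run keys, which lost their run_ prefix. A hedged check before indexing, assuming the tutorial's dataset (the exact key strings are inferred from this diff):

splits = dataset.split("run")
print(list(splits))  # run keys are now bare identifiers, e.g. '4' not 'run_4'
print(splits["4"].description)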
38 changes: 19 additions & 19 deletions examples/model_building/plot_bcic_iv_2a_moabb_cropped.py
@@ -88,7 +88,7 @@
 from braindecode.datasets import MOABBDataset

 subject_id = 3
-dataset = MOABBDataset(dataset_name="BNCI2014001", subject_ids=[subject_id])
+dataset = MOABBDataset(dataset_name="BNCI2014_001", subject_ids=[subject_id])

 from numpy import multiply

@@ -107,17 +107,19 @@
 factor = 1e6

 preprocessors = [
-    Preprocessor('pick_types', eeg=True, meg=False, stim=False),  # Keep EEG sensors
+    Preprocessor('pick_types', eeg=True, meg=False, stim=False),
+    # Keep EEG sensors
     Preprocessor(lambda data: multiply(data, factor)),  # Convert from V to uV
-    Preprocessor('filter', l_freq=low_cut_hz, h_freq=high_cut_hz),  # Bandpass filter
-    Preprocessor(exponential_moving_standardize,  # Exponential moving standardization
+    Preprocessor('filter', l_freq=low_cut_hz, h_freq=high_cut_hz),
+    # Bandpass filter
+    Preprocessor(exponential_moving_standardize,
+                 # Exponential moving standardization
                  factor_new=factor_new, init_block_size=init_block_size)
 ]

 # Transform the data
 preprocess(dataset, preprocessors, n_jobs=-1)

-
 ######################################################################
 # Create model and compute windowing parameters
 # ---------------------------------------------
@@ -135,7 +137,6 @@

 input_window_samples = 1000

-
 ######################################################################
 # Now we create the model. To enable it to be used in cropped decoding
 # efficiently, we manually set the length of the final convolution layer
@@ -181,15 +182,13 @@
 if cuda:
     _ = model.cuda()

-
 ######################################################################
 # And now we transform the model with strides to a model that outputs dense
 # prediction, so we can use it to obtain predictions for all
 # crops.
 #
 model.to_dense_prediction_model()

-
 ######################################################################
 # To know the model's output shape without the last layer, we calculate the
 # shape of model output for a dummy input.
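The shape computation mentioned in that comment can be done with a dummy forward pass; a minimal sketch (assuming model, n_chans, and input_window_samples as defined in this tutorial, and that the dense-prediction model returns (batch, n_classes, n_preds)):

import torch

with torch.no_grad():
    dev = next(model.parameters()).device  # respect a possible .cuda() above
    dummy = torch.zeros(1, n_chans, input_window_samples, device=dev)
    n_preds_per_input = model(dummy).shape[2]  # predictions per input window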
@@ -227,7 +226,6 @@
     preload=True
 )

-
 ######################################################################
 # Split the dataset
 # -----------------
@@ -236,9 +234,8 @@
 #

 splitted = windows_dataset.split('session')
-train_set = splitted['session_T']
-valid_set = splitted['session_E']
-
+train_set = splitted['0train']  # Session train
+valid_set = splitted['1test']  # Session evaluation

 ######################################################################
 # Training
@@ -285,7 +282,8 @@
     iterator_train__shuffle=True,
     batch_size=batch_size,
     callbacks=[
-        "accuracy", ("lr_scheduler", LRScheduler('CosineAnnealingLR', T_max=n_epochs - 1)),
+        "accuracy",
+        ("lr_scheduler", LRScheduler('CosineAnnealingLR', T_max=n_epochs - 1)),
     ],
     device=device,
     classes=classes,
@@ -294,7 +292,6 @@
 # in the dataset.
 _ = clf.fit(train_set, y=None, epochs=n_epochs)

-
 ######################################################################
 # Plot Results
 # ----------------
@@ -309,7 +306,8 @@
 from matplotlib.lines import Line2D

 # Extract loss and accuracy values for plotting from history object
-results_columns = ['train_loss', 'valid_loss', 'train_accuracy', 'valid_accuracy']
+results_columns = ['train_loss', 'valid_loss', 'train_accuracy',
+                   'valid_accuracy']
 df = pd.DataFrame(clf.history[:, results_columns], columns=results_columns,
                   index=clf.history[:, 'epoch'])
@@ -319,7 +317,8 @@

 fig, ax1 = plt.subplots(figsize=(8, 3))
 df.loc[:, ['train_loss', 'valid_loss']].plot(
-    ax=ax1, style=['-', ':'], marker='o', color='tab:blue', legend=False, fontsize=14)
+    ax=ax1, style=['-', ':'], marker='o', color='tab:blue', legend=False,
+    fontsize=14)

 ax1.tick_params(axis='y', labelcolor='tab:blue', labelsize=14)
 ax1.set_ylabel("Loss", color='tab:blue', fontsize=14)
@@ -335,12 +334,13 @@

 # where some data has already been plotted to ax
 handles = []
-handles.append(Line2D([0], [0], color='black', linewidth=1, linestyle='-', label='Train'))
-handles.append(Line2D([0], [0], color='black', linewidth=1, linestyle=':', label='Valid'))
+handles.append(
+    Line2D([0], [0], color='black', linewidth=1, linestyle='-', label='Train'))
+handles.append(
+    Line2D([0], [0], color='black', linewidth=1, linestyle=':', label='Valid'))
 plt.legend(handles, [h.get_label() for h in handles], fontsize=14)
 plt.tight_layout()

-
 ######################################################################
 # Plot Confusion Matrix
 # ---------------------
8 changes: 4 additions & 4 deletions examples/model_building/plot_bcic_iv_2a_moabb_trial.py
@@ -39,7 +39,7 @@
 from braindecode.datasets import MOABBDataset

 subject_id = 3
-dataset = MOABBDataset(dataset_name="BNCI2014001", subject_ids=[subject_id])
+dataset = MOABBDataset(dataset_name="BNCI2014_001", subject_ids=[subject_id])


 ######################################################################
@@ -131,12 +131,12 @@
 ######################################################################
 # We can easily split the dataset using additional info stored in the
 # description attribute, in this case ``session`` column. We select
-# ``session_T`` for training and ``session_E`` for validation.
+# ``0train`` for training and ``1test`` for validation.
 #

 splitted = windows_dataset.split('session')
-train_set = splitted['session_T']
-valid_set = splitted['session_E']
+train_set = splitted['0train']  # Session train
+valid_set = splitted['1test']  # Session evaluation


 ######################################################################
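Scripts written against the pre-1.0 session names can bridge both conventions; a hypothetical back-compat shim (the old/new key mapping is taken from this diff, and the fallback branch is an assumption for older MOABB installs):

splitted = windows_dataset.split('session')
# Prefer the MOABB 1.0.0 keys, fall back to the pre-1.0 names if present.
train_set = splitted['0train'] if '0train' in splitted else splitted['session_T']
valid_set = splitted['1test'] if '1test' in splitted else splitted['session_E']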
10 changes: 5 additions & 5 deletions examples/model_building/plot_how_train_test_and_tune.py
@@ -181,7 +181,7 @@
 ######################################################################
 # We can easily split the BCIC IV 2a dataset using additional
 # info stored in the description attribute, in this case the ``session``
-# column. We select ``session_T`` for training and ``session_E`` for testing.
+# column. We select ``0train`` for training and ``1test`` for testing.
 # For other datasets, you might have to choose another column and/or value.
 #
 # .. note::
@@ -192,8 +192,8 @@
 #

 splitted = windows_dataset.split("session")
-train_set = splitted["session_T"]
-test_set = splitted["session_E"]
+train_set = splitted['0train']  # Session train
+test_set = splitted['1test']  # Session evaluation


 ######################################################################
@@ -412,7 +412,7 @@ def plot_simple_train_test(ax, all_dataset, train_set, test_set):
 def plot_train_valid_test(ax, all_dataset, train_subset, val_subset, test_set):
     """Create a sample plot for training, validation, testing."""

-    bd_cmap = ["#3A6190", "#683E00", "#2196F3", "#DDF2FF",]
+    bd_cmap = ["#3A6190", "#683E00", "#2196F3", "#DDF2FF", ]

     n_train, n_val, n_test = len(train_subset), len(val_subset), len(test_set)
     ax.barh("Original\ndataset", len(all_dataset), left=0, height=0.5, color=bd_cmap[0])
@@ -508,7 +508,7 @@ def plot_train_valid_test(ax, all_dataset, train_subset, val_subset, test_set):
 def plot_k_fold(ax, cv, all_dataset, X_train, y_train, test_set):
     """Create a sample plot for training, validation, testing."""

-    bd_cmap = ["#3A6190", "#683E00", "#2196F3", "#DDF2FF",]
+    bd_cmap = ["#3A6190", "#683E00", "#2196F3", "#DDF2FF", ]

     ax.barh("Original\nDataset", len(all_dataset), left=0, height=0.5, color=bd_cmap[0])

@@ -172,12 +172,12 @@
 ######################################################################
 # We can easily split the dataset using additional info stored in the
 # description attribute, in this case ``session`` column. We select
-# ``session_T`` for training and ``session_E`` for evaluation.
+# ``0train`` for training and ``1test`` for evaluation.
 #

 splitted = windows_dataset.split('session')
-train_set = splitted['session_T']
-eval_set = splitted['session_E']
+train_set = splitted['0train']  # Session train
+eval_set = splitted['1test']  # Session evaluation

 ######################################################################
 # Create model
@@ -73,7 +73,7 @@
 from braindecode.datasets import MOABBDataset

 subject_id = 3
-dataset = MOABBDataset(dataset_name="BNCI2014001", subject_ids=[subject_id])
+dataset = MOABBDataset(dataset_name="BNCI2014_001", subject_ids=[subject_id])

 ######################################################################
 # Preprocessing, the offline transformation of the raw dataset
@@ -185,7 +185,7 @@
 ######################################################################
 # We can easily split the dataset using additional info stored in the
 # description attribute, in this case the ``session`` column. We
-# select ``session_T`` for training and ``session_E`` for testing.
+# select ``0train`` for training and ``1test`` for testing.
 # For other datasets, you might have to choose another column.
 #
 # .. note::
@@ -196,8 +196,8 @@
 #

 splitted = windows_dataset.split("session")
-train_set = splitted["session_T"]
-test_set = splitted["session_E"]
+train_set = splitted['0train']  # Session train
+test_set = splitted['1test']  # Session evaluation

 ######################################################################
 # Option 1: Pure PyTorch training loop