Use cleaned epochs for sensor-space decoding, time-frequency analysis, and covariance calculation #796

Merged
merged 13 commits into mne-tools:main from use-clean-epochs on Oct 30, 2023

Conversation

@hoechenberger (Member) commented Oct 25, 2023

We accidentally used the non-cleaned epochs for classification. This issue probably snuck in during the recent set of refactorings.

Before merging …

  • Changelog has been updated (docs/source/changes.md)

@larsoner I've only checked the decoding scripts so far, because this is what I'm currently working with. I'm not sure if other places are affected as well. We should check the source-space scripts too.

Could you kindly take over if you find the time?
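
For illustration, a minimal sketch of the kind of change involved (hypothetical code, not the pipeline's actual implementation; the proc-clean file name follows the pattern of the cleaned-epochs file quoted later in this thread):

import mne

# Hypothetical sketch: the bug amounts to the decoding step reading the
# epochs from before artifact rejection instead of the cleaned ones.
fname_uncleaned = "sub-01_ses-cathodalpost_task-attentionalblink_epo.fif"
fname_clean = "sub-01_ses-cathodalpost_task-attentionalblink_proc-clean_epo.fif"

# Buggy behavior (before this PR):
# epochs = mne.read_epochs(fname_uncleaned)
# Fixed behavior: decoding operates on the cleaned epochs.
epochs = mne.read_epochs(fname_clean)
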
@hoechenberger added the bug (Something isn't working) label on Oct 25, 2023
@hoechenberger mentioned this pull request on Oct 26, 2023
@hoechenberger marked this pull request as ready for review on October 30, 2023 at 12:11
@@ -112,6 +111,9 @@ def run_epochs_decoding(
[epochs[epochs_conds[0]], epochs[epochs_conds[1]]], verbose="error"
)

# Crop to the desired analysis interval.
epochs.crop(cfg.decoding_epochs_tmin, cfg.decoding_epochs_tmax)
Member:

I don't see any problem with moving this, but... how did this fix the failures?!

Member Author:

You caught me red-handed. I believe I'm working around a bug in MNE here: it wouldn't allow me to concatenate epochs whose baseline period had been cropped away. So now I first concatenate, then crop.
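
A self-contained sketch of the workaround on synthetic data (illustrative values; the concatenate-before-crop ordering is the point):

import numpy as np
import mne

# Build synthetic epochs with a baseline period, for illustration only.
info = mne.create_info(["EEG 001"], sfreq=250.0, ch_types="eeg")
rng = np.random.default_rng(0)
data = rng.standard_normal((10, 1, 176))  # 10 epochs, 1 channel, -0.2 to 0.5 s
events = np.column_stack([np.arange(10) * 200, np.zeros(10, int), np.ones(10, int)])
epochs = mne.EpochsArray(data, info, events, tmin=-0.2, baseline=(None, 0))

# Cropping the baseline away *before* concatenating triggered an MNE error
# at the time (see the issue linked below):
#   mne.concatenate_epochs([epochs[:5].crop(0.1, 0.4), epochs[5:].crop(0.1, 0.4)])
# Concatenating first and cropping afterwards works:
combined = mne.concatenate_epochs([epochs[:5], epochs[5:]])
combined.crop(0.1, 0.4)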

Member:

I think that's an okay workaround. A comment along those lines would help.

Member Author:

I opened an issue: mne-tools/mne-python#12153

@larsoner enabled auto-merge (squash) on October 30, 2023 at 13:26
@larsoner (Member):

The ds001810 failure seems real; it looks like an extra condition is somehow sneaking into the classification now. @hoechenberger, do you have time to check locally if that's the case?

@hoechenberger (Member Author):

> @hoechenberger, do you have time to check locally if that's the case?

Unfortunately not; I've been trying to download that dataset for the past hour, but it's hopeless with our firewall here.

@hoechenberger (Member Author):

@larsoner Btw, what I suspect is happening is that we simply end up with too few clean epochs, hence cross-validation (CV) won't work.

Maybe we can set the number of splits to 2 or even 1... that should fix the issue.
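
A rough sketch of this suspected failure mode with synthetic numbers (scikit-learn standing in for the pipeline's actual decoding code):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# With very few clean epochs in one condition, stratified 5-fold CV cannot
# place that condition in every test fold: scikit-learn warns, and ROC AUC is
# undefined on test folds containing a single class (depending on version and
# settings this yields NaN scores or an error).
rng = np.random.default_rng(0)
X = rng.standard_normal((25, 10))
y = np.array([0] * 22 + [1] * 3)  # mirrors the 22-vs-3 epoch counts shown below
scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print(scores)  # NaN for the folds whose test set lacks the rare condition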

@larsoner (Member):

Ahh, makes sense. So use min(5, n_epochs_per_cond_min) or whatever. I'll look and push a commit, including reverting my always: conftest change, which hopefully will no longer be necessary.

@hoechenberger (Member Author):

> So use min(5, n_epochs_per_cond_min) or whatever.

I actually do like the fact that we're currently failing hard.

We should adjust the test case for that specific dataset instead of auto-adjusting the number of splits.

@hoechenberger changed the title from "Use cleaned epochs for sensor-space decoding" to "Use cleaned epochs for sensor-space decoding and covariance calculation" on Oct 30, 2023
@hoechenberger changed the title from "Use cleaned epochs for sensor-space decoding and covariance calculation" to "Use cleaned epochs for sensor-space decoding, time-frequency analysis, and covariance calculation" on Oct 30, 2023
@hoechenberger (Member Author):

@larsoner I just tried; I cannot reproduce this locally :(

@hoechenberger (Member Author):

@larsoner It seems the issue occurs during the cathodalpost session:

│18:59:14│ ⏳️ sub-01 ses-cathodalpost Contrasting conditions: 61450 – 61511
FAILED           [100%]

@hoechenberger (Member Author):

import mne
epo = mne.read_epochs("sub-01_ses-cathodalpost_task-attentionalblink_proc-clean_epo.fif")
epo

gives:

<EpochsFIF |  25 events (all good), -0.199219 – 0.5 s, baseline -0.199219 – 0 s, ~5.1 MB, data loaded, with metadata,
 '61450': 22
 '61511': 3>

So only 3 clean epochs remain for condition '61511', fewer than the number of CV splits.

@larsoner merged commit 8843a38 into mne-tools:main on Oct 30, 2023
45 of 46 checks passed
@hoechenberger deleted the use-clean-epochs branch on October 30, 2023 at 19:46
@agramfort (Member):

@hoechenberger did you check on a few datasets whether it boosts the decoding scores? As decoding learns discriminative features, it's usually pretty good at filtering out artifacts.

@hoechenberger (Member Author) commented Nov 1, 2023

@agramfort No, I did not systematically check this. Are you saying it may have been a conscious decision to use uncleaned epochs here? Because to me, it looked unintentional (i.e., a bug).

Especially since we also used the uncleaned epochs for covariance calculation.

@agramfort (Member) commented Nov 1, 2023 via email

@hoechenberger (Member Author):

Maybe we should add a setting, decoding_use_clean_epochs: bool
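
A sketch of how such an option might look in the pipeline's config conventions (hypothetical; no such setting exists at this point):

# Hypothetical config option, as proposed above; not an existing setting.
decoding_use_clean_epochs: bool = True
"""Whether decoding operates on the cleaned epochs (after SSP/ICA and
peak-to-peak rejection) or on the epochs from before artifact rejection."""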

@hoechenberger (Member Author):

The thing is that we always use cleaned epochs to create the Evokeds, and source reconstruction also operates on these clean Evokeds. So we would be introducing an inconsistency if decoding were to operate on un-cleaned epochs.

One could argue that if one doesn't want to decode cleaned epochs, one could just disable SSP/ICA and the peak-to-peak (PTP) rejection step…

@agramfort (Member) commented Nov 1, 2023 via email

@hoechenberger (Member Author):

@agramfort
Now, this is anecdotal if you will, but I'm currently working with EEG data, and one recording was particularly contaminated with a certain artifact that would spread across many sensors. Neither full-epochs nor time-by-time decoding could find a difference between two experimental conditions, and I was about to exclude this participant from the analysis. I then cleaned the data (autoreject local -> ICA -> autoreject local), and now full-epochs decoding scores well above chance (~0.9 ROC AUC) and several time periods are above chance level (~0.6 ROC AUC) in time-by-time decoding. For several other participants, decoding performance was improved by this approach too.

That is to say, I did find the expected effect even when using the un-cleaned data, but it's now larger and more obvious.
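
A sketch of that cleaning sequence using the autoreject package (parameter values, file name, and the component selection are illustrative, not the ones used for the data described above):

import mne
from autoreject import AutoReject

epochs = mne.read_epochs("sub-XX_epo.fif")  # hypothetical input file

# 1) First local autoreject pass, so gross artifacts don't drive the ICA fit.
epochs_ar = AutoReject(random_state=97).fit_transform(epochs)

# 2) Fit ICA on the pre-cleaned epochs and remove artifact components.
ica = mne.preprocessing.ICA(n_components=20, random_state=97)
ica.fit(epochs_ar)
ica.exclude = []  # fill with EOG/ECG/artifact component indices after inspection
epochs_ica = ica.apply(epochs_ar.copy())

# 3) Second local autoreject pass on the ICA-cleaned epochs.
epochs_clean = AutoReject(random_state=97).fit_transform(epochs_ica)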

@larsoner (Member) commented Nov 1, 2023

My sense is that things like SSP and ICA probably won't matter, but that peak-to-peak rejection could help, depending on the nature of the artifacts. That sounds in line with what you're seeing, @hoechenberger. In this case it makes sense for us to use the "clean" epochs for decoding. At some point we could make it an option which epochs to use (but maybe assume YAGNI for now?)

@agramfort (Member) commented Nov 2, 2023 via email
