[MRG+1] Add SPoC spatial filtering for continuous target decoding #4144

Merged: 32 commits into mne-tools:master, Apr 7, 2017

alexandrebarachant (Collaborator) commented Mar 31, 2017

Source Power Comodulation (SPoC) identifies the composition of orthogonal
spatial filters that maximally correlate with a continuous target.
SPoC can be seen as an extension of CSP to continuous target variables.

We also added a new dataset to illustrate this method in an example that decodes left-hand EMG power from MEG data. The dataset is fetched from a FieldTrip tutorial (http://www.fieldtriptoolbox.org/tutorial/coherence).

The output looks like this:

[figure: SPoC decoding output]

In mne/datasets/fieldtrip_cmc/__init__.py:
@@ -0,0 +1,3 @@
"""fieldtrip CMC Dataset."""

agramfort (Member) Mar 31, 2017

CMC -> Cortico-Muscular Coherence (CMC)

In mne/decoding/csp.py:
Attributes
----------
``filters_`` : ndarray, shape(n_components, n_channels)

agramfort (Member) Mar 31, 2017

missing space after shape

In mne/decoding/csp.py:
----------
``filters_`` : ndarray, shape(n_components, n_channels)
If fit, the SPoC spatial filters, else None.
``patterns_`` : ndarray, shape(n_components, n_channels)

agramfort (Member) Mar 31, 2017

missing space after shape

In mne/decoding/csp.py:
% type(X))
self._check_Xy(X, y)
if len(set(y)) < 2:

agramfort (Member) Mar 31, 2017

use np.unique
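
For reference, a minimal sketch of the suggested check (array contents hypothetical):

import numpy as np

y = np.array([0, 0, 1, 1])  # class labels for each epoch
# np.unique avoids building a Python set from a NumPy array
if len(np.unique(y)) < 2:
    raise ValueError('y must contain at least two distinct classes.')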

In mne/decoding/csp.py:
# The following code is directly copied from pyRiemann
# Normalize target variable
target = np.float64(y.copy())

agramfort (Member) Mar 31, 2017

y.copy().astype(np.float64)

larsoner (Member) Mar 31, 2017

astype makes a copy already, so just y.astype(np.float64)
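
A quick illustration of the point (hypothetical array): astype returns a new array, so the preceding copy() is redundant:

import numpy as np

y = np.array([1, 2, 3])
target = y.astype(np.float64)
# astype allocates a new array by default, so y is not aliased
assert not np.shares_memory(y, target)
target[0] = 99.0
assert y[0] == 1  # the original is untouched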

In mne/decoding/csp.py:
n_epochs, n_channels = X.shape[:2]
# Estimate single trial covariance
covs = np.zeros((n_epochs, n_channels, n_channels))

agramfort (Member) Mar 31, 2017

np.empty

In mne/decoding/csp.py:
# solve eigenvalue decomposition
evals, evecs = linalg.eigh(Cz, C)
evals = evals.real # XXX check

agramfort (Member) Mar 31, 2017

XXX ?

In examples/decoding/plot_decoding_spoc_CMC.py:
# Build epochs as sliding windows over the continuous raw file
onsets = raw.first_samp + np.arange(0., raw.n_times, .250 * raw.info['sfreq'])
events = np.c_[onsets, np.zeros((len(onsets), 2))].astype(int)

agramfort (Member) Mar 31, 2017

events = np.c_[onsets, np.zeros((len(onsets), 2), dtype=int)]

should make it int

agramfort changed the title from [ENH] Add SPoC spatial filtering for continuous target decoding to [MRG+1] Add SPoC spatial filtering for continuous target decoding, Mar 31, 2017

agramfort (Member) commented Mar 31, 2017

besides my nitpick +1 for MRG when CIs are happy

larsoner (Member): Otherwise LGTM

In doc/manual/decoding.rst:
Source Power Comodulation (SPoC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Source Power Comodulation (SPoC) [1] allows to identify the composition of orthogonal spatial filters that maximally correlate with a continuous target.

larsoner (Member) Mar 31, 2017

[1]_

then later put it in a references section. See how it's done e.g. in doc/manual/datasets_index.rst

In examples/decoding/plot_decoding_spoc_CMC.py:
Continuous Target Decoding with SPoC
====================================
Source Power Comodulation (SPoC) [1] allows to identify the composition of

larsoner (Member) Mar 31, 2017

[1]_

In examples/decoding/plot_decoding_spoc_CMC.py:
SPoC can be seen as an extension of the CSP for continuous variables.
Here, SPoC is applied to decode the (continuous) fluctuation of an
electromyogram from MEG beta activity [2].

larsoner (Member) Mar 31, 2017

This would be [2]_ but it's a website, not an academic reference, so let's just use a ReST link here instead.

In mne/datasets/utils.py:
@@ -271,7 +274,8 @@ def _data_path(path=None, force_update=False, update_path=True, download=True,
'tar.gz/%s' % releases['testing'],
multimodal='https://ndownloader.figshare.com/files/5999598',
visual_92_categories='https://mne-tools.s3.amazonaws.com/datasets/%s',
mtrf="https://superb-dca2.dl.sourceforge.net/project/aespa/%s"
mtrf="https://superb-dca2.dl.sourceforge.net/project/aespa/%s",
fieldtrip_cmc="ftp://ftp.fieldtriptoolbox.org/pub/fieldtrip/tutorial/%s"

larsoner (Member) Mar 31, 2017

please also modify circle.yml to autodetect when to download this dataset. You can see where the conditionals are for our other datasets

alexandrebarachant (Collaborator) Mar 31, 2017

Can you check circle.yml? I made the modifications.

In mne/decoding/csp.py:
class SPoC(CSP):
"""Implementation of the SPoC spatial filtering with Covariance as input.

larsoner (Member) Mar 31, 2017

I don't think you need to say "with covariance as input", it doesn't add much. It would be better to do:

"""
Source Power Comodulation (SPoC) spatial filtering.

SPoC [1]_ allows ...

In mne/decoding/csp.py:
class SPoC(CSP):
"""Implementation of the SPoC spatial filtering with Covariance as input.
Source Power Comodulation (SPoC) [1] allows to extract spatial filters and

larsoner (Member) Mar 31, 2017

[1]_

In mne/decoding/csp.py:
References
----------
[1] Dahne, S., Meinecke, F. C., Haufe, S., Hohne, J., Tangermann, M.,

larsoner (Member) Mar 31, 2017

.. [1]

(these are NumPy doc standards FYI)
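
For illustration, a sketch of the NumPy-doc citation style being requested; the full Dähne et al. citation is filled in here for readability (the diff snippet above truncates it):

def spoc_doc_example():
    """Source Power Comodulation (SPoC) spatial filtering.

    SPoC [1]_ extracts spatial filters whose power maximally correlates
    with a continuous target variable.

    References
    ----------
    .. [1] Dahne, S., Meinecke, F. C., Haufe, S., Hohne, J., Tangermann, M.,
       Muller, K. R., & Nikulin, V. V. (2014). SPoC: a novel framework for
       relating the amplitude of neuronal oscillations to behaviorally
       relevant parameters. NeuroImage, 86, 111-122.
    """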

In mne/decoding/csp.py:
super(SPoC, self).__init__(n_components=n_components, reg=reg, log=log,
cov_est="epoch",
transform_into=transform_into)
delattr(self, 'cov_est')

larsoner (Member) Mar 31, 2017

Add a comment why (presumably because it gets estimated per-epoch internally?)

In mne/decoding/csp.py:
evecs = evecs[:, ix].T
# spatial patterns
self.patterns_ = linalg.pinv(evecs).T # n_components x n_channels

larsoner (Member) Mar 31, 2017

isn't np.allclose(linalg.pinv(evecs), evecs.T) (by orthonormality) and thus np.allclose(linalg.pinv(evecs).T, evecs)?

larsoner (Member) Mar 31, 2017

... or if this is to take care of some rank deficiency, you could/should operate on evals directly to determine the cutoff
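
A small numerical check of the orthonormality argument (synthetic matrix, illustration only). For a truly orthonormal matrix the pseudo-inverse is just the transpose; note, however, that the eigenvectors of a generalized eigenproblem such as eigh(Cz, C) are C-orthonormal rather than orthonormal, so the shortcut would not hold there in general:

import numpy as np
from scipy import linalg

rng = np.random.RandomState(42)
# build an orthonormal matrix from a random one via SVD
Q = linalg.svd(rng.randn(5, 5))[0]
# for an orthonormal Q, pinv(Q) == Q.T, hence pinv(Q).T == Q
assert np.allclose(linalg.pinv(Q), Q.T)
assert np.allclose(linalg.pinv(Q).T, Q)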

Xt = spoc.transform(X)
assert_array_equal(Xt.shape, [10, 4, 20])
assert_array_equal(spoc.filters_.shape, [10, 10])
assert_array_equal(spoc.patterns_.shape, [10, 10])

larsoner (Member) Mar 31, 2017

How hard would it be to simulate some signal that should clearly work with the algorithm, then verify it works? That would be one step better than these smoke tests that just check shape.

jona-sassenhagen (Contributor) Apr 1, 2017

You could just draw random EEG data e, take a random topo M, your y is dot(M, e) + noise, and the correlation between the estimated pattern and M should be high. Right?

alexandrebarachant (Collaborator) Apr 3, 2017

We have code for simulation since we were originally working on an example with simulated data.
If I can make it compact enough for the test, I will push it.
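
A minimal sketch of that kind of functional test, following the recipe above and assuming SPoC is importable from mne.decoding as added in this PR (sizes, noise level, and threshold are hypothetical; this is not the code that was eventually pushed):

import numpy as np
from mne.decoding import SPoC

rng = np.random.RandomState(0)
n_epochs, n_channels, n_times = 100, 5, 200

# latent source whose per-epoch amplitude is driven by the target y
y = rng.rand(n_epochs) + 0.1
source = rng.randn(n_epochs, n_times) * y[:, np.newaxis]
noise = 0.05 * rng.randn(n_epochs, n_channels, n_times)

# project the source through a random topography M and add noise
M = rng.randn(n_channels)
X = M[np.newaxis, :, np.newaxis] * source[:, np.newaxis, :] + noise

spoc = SPoC(n_components=1)
spoc.fit(X, y)

# the recovered pattern should correlate with M (up to sign)
corr = np.abs(np.corrcoef(spoc.patterns_[0], M)[0, 1])
assert corr > 0.9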

In examples/decoding/plot_decoding_spoc_CMC.py:
onsets = raw.first_samp + np.arange(0, raw.n_times, step)
events = np.c_[onsets, np.zeros((len(onsets), 2), dtype=int)]
# Epoch lenght is 1.5 second

jona-sassenhagen (Contributor) Mar 31, 2017

length

In examples/decoding/plot_decoding_spoc_CMC.py:
# Build epochs as sliding windows over the continuous raw file
step = int(.250 * raw.info['sfreq'])
onsets = raw.first_samp + np.arange(0, raw.n_times, step)
events = np.c_[onsets, np.zeros((len(onsets), 2), dtype=int)]

jona-sassenhagen (Contributor) Mar 31, 2017

mne.make_fixed_length_events?
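
For comparison, a hedged sketch of what the suggested helper would look like in the example (raw is the Raw object loaded above; keyword defaults may differ across MNE versions):

import mne

# one event every 250 ms across the continuous recording,
# mirroring the manual onsets/np.c_ construction above
events = mne.make_fixed_length_events(raw, id=1, duration=0.25)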

agramfort (Member) commented Mar 31, 2017

@alexandrebarachant you need to rebase

agramfort (Member) commented Mar 31, 2017

@robertoostenveld FYI we now fetch a fieldtrip dataset to illustrate this SPoC method.

kingjr referenced this pull request Mar 31, 2017: ENH: decoding module 2017 #3442 (open, 26 of 38 tasks complete)

codecov-io commented Mar 31, 2017

Codecov Report

Merging #4144 into master will increase coverage by 0.14%.
The diff coverage is 93.7%.

@@            Coverage Diff             @@
##           master    #4144      +/-   ##
==========================================
+ Coverage   86.02%   86.17%   +0.14%     
==========================================
  Files         354      356       +2     
  Lines       63933    64034     +101     
  Branches     9739     9747       +8     
==========================================
+ Hits        54999    55181     +182     
+ Misses       6245     6159      -86     
- Partials     2689     2694       +5

Continue to review the full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8207646...eeb1a43.

robertoostenveld commented Apr 1, 2017

thanks for the notification.

agramfort (Member) commented Apr 1, 2017

@robertoostenveld of course. Let us know if we can have a better citation to give credit for sharing this data.

larsoner (Member) commented Apr 2, 2017

circle modifications look correct to me

Just the testing issue remains I think.

agramfort (Member) commented Apr 3, 2017

@Eric89GXL merge if you're happy

larsoner (Member): LGTM even without the little suggestions

In mne/decoding/tests/test_csp.py:
# get the patterns and normalize them
patterns = csp.patterns_.T
patterns = patterns / np.apply_along_axis(np.linalg.norm, 0, patterns)

larsoner (Member) Apr 5, 2017

why not patterns /= np.linalg.norm(patterns, axis=0, keepdims=True)?

larsoner (Member) Apr 5, 2017

(for full equivalence you might want a .copy() above if you adopt this)
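
A quick check that the two formulations agree (random matrix for illustration), plus the .copy() caveat in comment form:

import numpy as np

rng = np.random.RandomState(0)
patterns = rng.randn(10, 4)

# original: per-column norms via apply_along_axis
a = patterns / np.apply_along_axis(np.linalg.norm, 0, patterns)
# suggested: vectorized norm; keepdims keeps shape (1, 4) for broadcasting
b = patterns / np.linalg.norm(patterns, axis=0, keepdims=True)
assert np.allclose(a, b)

# note: the in-place form `patterns /= ...` would write through the
# csp.patterns_.T view, hence the suggestion to .copy() first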

In mne/decoding/tests/test_csp.py:
# generate an orthogonal mixing matrix
mixing_mat = rs.randn(n_channels, n_channels)
_, mixing_mat = np.linalg.eigh(np.dot(mixing_mat.T, mixing_mat))

larsoner (Member) Apr 5, 2017

Due to PCA/SVD equivalence I think this is equivalent to mixing_mat = linalg.svd(rs.randn(n_channels, n_channels))[0] if you want to save a line.
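
Both constructions produce an orthogonal matrix; a small check (illustrative only). Strictly, eigh of A.T A yields the right singular vectors of A while svd(A)[0] yields the left, but either serves as a random orthogonal mixing matrix:

import numpy as np
from scipy import linalg

rng = np.random.RandomState(0)
n_channels = 8
A = rng.randn(n_channels, n_channels)

# eigh-based construction, as in the test
_, mix_eigh = np.linalg.eigh(np.dot(A.T, A))
# one-line SVD-based alternative from the review
mix_svd = linalg.svd(A)[0]

# both have orthonormal columns
for M in (mix_eigh, mix_svd):
    assert np.allclose(np.dot(M.T, M), np.eye(n_channels))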

alexandrebarachant (Collaborator) commented Apr 5, 2017

I added a test (for SPoC and CSP) with simulated data to check whether we recover the patterns from an instantaneous linear mixing model.
If travis is happy, we are good to go.

alexandrebarachant (Collaborator) commented Apr 5, 2017

Okay, the example takes too much memory. I will see how I can decrease it.
Also, I noticed a strange behaviour of CSP with the simulation: the predictive pattern is sorted in last position.
I have to check if it's a regression or if it's somehow what is supposed to happen with my simulation.

larsoner (Member) commented Apr 5, 2017

> Okay, the example takes too much memory. I will see how I can decrease it.

Yeah I'd try just using a few channels (3?), it might be enough.

alexandrebarachant (Collaborator) commented Apr 5, 2017

Yes, I will try to downsample spatially.

For CSP, definitely a bug: pyRiemann sorts the components first on the same data.
Good you pushed for a functional test :)

alexandrebarachant (Collaborator) commented Apr 5, 2017

Okay, I found the bug.
This is somehow due to the trace normalization of the covariance estimation in CSP: https://github.com/mne-tools/mne-python/blob/master/mne/decoding/csp.py#L170
With the current code and my simulation, the best CSP pattern is sorted last (no idea why), so when we do this:

csp = CSP(1)
out = csp.fit_transform(X, y)
plt.plot(scale(out[:, 0]))
plt.plot(scale(y))

[figure: transformed component vs. target, with trace normalization]

Now if I remove the trace normalization and execute the same code, we get the expected behavior:

[figure: transformed component vs. target, without trace normalization]

What do you think? Should we remove the trace normalization?
This does not impact the performance in the examples.
We had the same problem with SPoC and decided not to go for it.

larsoner (Member) commented Apr 5, 2017

How does the trace normalization change things? Does it just rescale a covariance matrix? If so, then could it be that the regularization parameter (if there is one) is too large or too small in one of the cases?

alexandrebarachant (Collaborator) commented Apr 5, 2017

I think I understand the problem.
The trace normalization rescales the matrix to have a unit trace. However, this operation is applied only during the fit; when we transform the data, we just take the variance, with no normalization.

In my simulation, most of the power of the matrix is carried by one pattern. If we normalize the covariance, that makes the difference between conditions vanish for the main pattern and projects it back onto the non-discriminant patterns. So when we sort the components, it thinks the bad patterns are discriminant.

In real life that does not happen, but I would really suggest getting rid of the trace normalization.
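
For concreteness, trace normalization here just means rescaling each covariance matrix to unit trace, roughly as follows (a sketch, not the exact code in csp.py):

import numpy as np

def trace_normalize(cov):
    """Rescale a covariance matrix to unit trace."""
    return cov / np.trace(cov)

rng = np.random.RandomState(0)
epoch = rng.randn(8, 100)  # one epoch: channels x times
cov = np.dot(epoch, epoch.T) / epoch.shape[1]
cov = trace_normalize(cov)
assert np.isclose(np.trace(cov), 1.0)
# the rescaling discards overall power during fit, while transform
# later uses the raw variance, which is the mismatch described above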

larsoner (Member) commented Apr 5, 2017

Sounds reasonable to me. But why is the trace normalization there in the first place, then? Is there a paper that suggests doing it or something?

An alternative approach would be to normalize to unit trace during both fit and transform steps. But it sounds like you're saying that this will not work (at least for your simulated data)?

alexandrebarachant (Collaborator) commented Apr 5, 2017

This is something they do in some papers.
I traced this back to the original CSP paper (Koles et al., 1990), with the following justification:

> The normalization of R [Covariance] with respect to the trace was done to eliminate magnitude variations in the EEG between individuals.

It is generally abandoned in more recent works.

I don't think we should do it in the transform either (and I don't think it is possible). If we follow my simulation, then it will screw up interpretation of the patterns (since the output will be predictive, but the patterns will not correspond to the generative pattern).

However, if I remove it, one test checking the rough equivalence between the 'epoch' and 'concat' covariance estimation will fail: https://github.com/mne-tools/mne-python/blob/master/mne/decoding/tests/test_csp.py#L89

with a correlation of ~0.8 instead of 0.94 between the two methods.

larsoner (Member) commented Apr 5, 2017

It seems okay to me to adjust the test, assuming it's an expected result of not normalizing by the trace.

Maybe we should expose a parameter, and deprecate norm_trace=None as True for this release but False for the next.

@kingjr WDYT?
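
A hedged sketch of the proposed deprecation pattern (class is a simplified stand-in; release wording and message are illustrative, not the exact code that landed):

import warnings

class CSP(object):  # simplified stand-in for mne.decoding.CSP
    def __init__(self, norm_trace=None):
        if norm_trace is None:
            warnings.warn('norm_trace defaults to True in this release '
                          'and will change to False in the next; set it '
                          'explicitly to silence this warning.',
                          DeprecationWarning)
            norm_trace = True
        self.norm_trace = norm_trace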

alexandrebarachant (Collaborator) commented Apr 5, 2017

@agramfort might also have some thoughts about the trace normalization.

I would prefer not exposing a parameter; the user doesn't really care about that. It would actually be cleaner to trace-normalize the epochs before the CSP if we want to do this right.

Also, I will have to remove the test completely; it seems to change the order of the patterns, so the test has no purpose anymore.

Let me benchmark the performance on a different dataset and see if it has an impact on decoding performance.

alexandrebarachant (Collaborator) commented Apr 5, 2017

Okay, trace normalization seems to decrease performance a little (with some big differences for some subjects):

[figure: per-subject decoding scores with trace normalization]

If I remove it, all goes back to normal:

[figure: per-subject decoding scores without trace normalization]

The patterns are actually similar; the main difference is due to their ordering (and therefore their selection).

In mne/decoding/csp.py:
@@ -171,7 +171,7 @@ def fit(self, X, y):
weight = len(class_)
# normalize by trace and stack

larsoner (Member) Apr 6, 2017

This comment probably needs to change, too :)

Maybe just mention that trace normalization can break some use cases, so we omit it here.

alexandrebarachant (Collaborator) Apr 6, 2017

done.

kingjr (Member) commented Apr 6, 2017

> Maybe we should expose a parameter, and deprecate norm_trace=None as True for this release but False for the next. @kingjr WDYT?

+1

Great benchmark @alexandrebarachant!

kingjr (Member): LGTM but add the trace and pattern fixes in whatsnew

alexandrebarachant (Collaborator) commented Apr 6, 2017

@kingjr If you have a minute, can you add the changes you requested (it's your branch :))? I will not have time to work on this today.

kingjr (Member) commented Apr 6, 2017

@alexandrebarachant this double branch turned out to be a nightmare. I started from scratch. I think I got everything, but can you review it?

alexandrebarachant (Collaborator) commented Apr 6, 2017

There was a bunch of files related to the new dataset.

I still have my local branch clean, just missing your last commit ...

alexandrebarachant (Collaborator) commented Apr 6, 2017

Okay, I will force-push my branch; there are too many missing things.
I added your modification, so it should be fine.

alexandrebarachant (Collaborator) commented Apr 7, 2017

Okay, I think we are good.

Ready for a merge on my side.

kingjr (Member) commented Apr 7, 2017

Great, I just reviewed it again and it LGTM.

@Eric89GXL or @agramfort merge if you're happy

larsoner merged commit 1239f23 into mne-tools:master Apr 7, 2017

0 of 3 checks had passed at merge time:

ci/circleci: CircleCI is running your tests
continuous-integration/appveyor/pr: Waiting for AppVeyor build to complete
continuous-integration/travis-ci/pr: The Travis CI build is in progress