MRG+1: refactor time decoding & temporal generalization #4103

Merged
merged 32 commits into from Mar 31, 2017

Conversation

6 participants
@kingjr
Member

kingjr commented Mar 27, 2017

  • change SearchLight and GenLight names
  • update examples and tutorials
  • add cross_val_multiscore
  • deprecate TimeDecoding, GeneralizationAcrossTime, and viz
  • update manual
  • update whatsnew
    closes #4101 #4102 #4068
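
The new `cross_val_multiscore` is meant to return one score *array* per fold (e.g. one accuracy per time point), rather than the single float per fold of a plain `cross_val_score`. A rough sketch of those semantics using a toy nearest-class-mean scorer in plain numpy — all names and data below are made up for illustration, not the MNE implementation:

```python
import numpy as np

def toy_cross_val_multiscore(X, y, n_splits=5):
    """Toy stand-in for cross_val_multiscore: per-fold score *arrays*.

    Each fold's "score" is one accuracy per time point, so the result has
    shape (n_splits, n_times) rather than the flat (n_splits,) returned by
    a plain cross_val_score.
    """
    n_epochs, n_channels, n_times = X.shape
    folds = np.array_split(np.arange(n_epochs), n_splits)
    all_scores = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n_epochs), test_idx)
        fold_scores = []
        for t in range(n_times):
            # nearest-class-mean "classifier" fitted at this time point only
            means = {c: X[train_idx][y[train_idx] == c, :, t].mean(axis=0)
                     for c in np.unique(y)}
            pred = [min(means, key=lambda c: np.linalg.norm(X[i, :, t] - means[c]))
                    for i in test_idx]
            fold_scores.append(np.mean(pred == y[test_idx]))
        all_scores.append(fold_scores)
    return np.array(all_scores)

rng = np.random.RandomState(0)
y = rng.randint(0, 2, 40)
X = rng.randn(40, 3, 6) + y[:, None, None]  # class-dependent offset at all times
scores = toy_cross_val_multiscore(X, y, n_splits=5)
print(scores.shape)  # (5, 6): folds x time points
```

In the real API the estimator (e.g. a sliding estimator wrapping a scikit-learn classifier) replaces the toy scorer, but the per-fold array shape is the point of the helper.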
@agramfort
Member

agramfort commented Mar 27, 2017

apart from the choice of the class names I think we're good for a first round of reviews

@larsoner

Otherwise LGTM.

doc/manual/decoding.rst
>>> gat.score(epochs)
>>> gat.plot(vmin=0.1, vmax=0.9, title="Generalization Across Time (faces vs. scrambled)")
>>> clf = LogisticRegression()
>>> tg = GeneralizationLight(clf, n_jobs=1)

@larsoner

larsoner Mar 27, 2017

Member

more explicit as time_gen?

@dengemann

dengemann Mar 28, 2017

Member

yes +1 for time_gen

doc/manual/decoding.rst
>>> clf = LogisticRegression()
>>> tg = GeneralizationLight(clf, n_jobs=1)
>>> score = cross_val_multiscore(clf, X=epochs.get_data(),
y=epochs.events[:, 2], cv=4)

@larsoner

larsoner Mar 27, 2017

Member

isn't cv=10 more standard? best practices etc.

@dengemann

dengemann Mar 28, 2017

Member

or at least 5.

doc/manual/decoding.rst
@@ -147,9 +152,16 @@ In this strategy, a model trained on one time window is tested on the same time
:align: center
:width: 400px
To generate this plot, you need to initialize a GAT object and then use the method ``plot_diagonal``::
To generate this plot, you need to use ``SearchLight` over time samples::

@larsoner

larsoner Mar 27, 2017

Member

missing a backtick on the end of SearchLight

mne/decoding/time_gen.py
@@ -872,6 +872,8 @@ def _predict(X, estimators, vectorize_times, predict_method):
return y_pred
@deprecated('GeneralizationAcrossTime is deprecated and will be removed in ',
' 0.15, use _GeneralizationLight instead.')

@larsoner

larsoner Mar 27, 2017

Member

in 0.16

doc/manual/decoding.rst
^^^^^^^^^^^^^^^^^^^^^^^^^^
Generalization Across Time (GAT) is a modern strategy to infer neuroscientific conclusions from decoding analysis of sensor-space data. An accuracy matrix is constructed where each point represents the performance of the model trained on one time window and tested on another.
Temporal Generalization is a modern strategy to infer neuroscientific conclusions from decoding analysis of sensor-space data. An accuracy matrix is constructed where each point represents the performance of the model trained on one time window and tested on another.

@larsoner

larsoner Mar 27, 2017

Member

probably worth adding "formerly referred to here as Generalization Across Time (GAT))" somewhere

@larsoner

larsoner Mar 28, 2017

Member

Still probably worth referring to the old name so that people who had been using it aren't mystified
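
The idea behind the renamed estimator can be shown with a bare-bones sketch: fit a classifier at each training time, evaluate it at every testing time, and collect the scores in a (train, test) matrix. Below is a toy nearest-class-mean classifier in plain numpy with hypothetical data — not the MNE API, just the concept the docs describe:

```python
import numpy as np

rng = np.random.RandomState(42)
n_epochs, n_channels, n_times = 60, 4, 8
y = rng.randint(0, 2, n_epochs)
X = rng.randn(n_epochs, n_channels, n_times)
X[:, :, 3:6] += 2.0 * y[:, None, None]   # class signal only at time points 3-5

train = np.arange(0, n_epochs, 2)        # even epochs for training,
test = np.arange(1, n_epochs, 2)         # odd epochs for testing

scores = np.empty((n_times, n_times))    # rows: training time, cols: testing time
for t_train in range(n_times):
    # "fit": class means estimated at the training time point only
    m0 = X[train][y[train] == 0, :, t_train].mean(axis=0)
    m1 = X[train][y[train] == 1, :, t_train].mean(axis=0)
    for t_test in range(n_times):
        Xt = X[test][:, :, t_test]
        pred = (np.linalg.norm(Xt - m1, axis=1) <
                np.linalg.norm(Xt - m0, axis=1)).astype(int)
        scores[t_train, t_test] = np.mean(pred == y[test])

print(scores.shape)   # (8, 8); the diagonal is plain decoding over time
```

A model trained where the signal lives (t=3-5) scores well at those testing times and falls back to chance elsewhere, which is exactly what the full matrix is meant to reveal.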

@agramfort

Member

agramfort commented Mar 28, 2017

@dengemann this is all yours !

.. topic:: Examples:
This strategy consists in fitting a multivariate predictive model on each
time instant and evaluating its performance at the same instant on new

@jona-sassenhagen

jona-sassenhagen Mar 28, 2017

Contributor

instant => point?

doc/manual/decoding.rst
where one tries to evaluate if the model estimated at a certain time
predicts accuratly at any another instant. It's the idea of transfering
a trained model to another learning problem where here the new problem
corresponds to different time instants.

@jona-sassenhagen

jona-sassenhagen Mar 28, 2017

Contributor

I think this paragraph is poorly written tbh

doc/manual/decoding.rst
as input :math:`X` and :math:`y` like the :class:`decoding.SearchLight`
but when predicting it will return the prediction of each model
for all time instants. The class :class:`decoding.GeneralizationLight`
is generic and will treat the last dimension as the different tasks.

@jona-sassenhagen

jona-sassenhagen Mar 28, 2017

Contributor

"the different task"?

doc/manual/decoding.rst
Temporal generalization is an extension of the decoding over time idea
where one tries to evaluate if the model estimated at a certain time
predicts accuratly at any another instant. It's the idea of transfering
a trained model to another learning problem where here the new problem

@dengemann

dengemann Mar 28, 2017

Member

remove here

mne/decoding/base.py
The object to use to fit the data.
X : array-like
The data to fit. Can be, for example a list, or an array at least 2d.

@jona-sassenhagen

jona-sassenhagen Mar 28, 2017

Contributor

"An iterable where the returned entities are at least 2d"?

@kingjr

kingjr Mar 28, 2017

Member

sklearn documentations, won't touch

doc/manual/decoding.rst
as input :math:`X` and :math:`y` like the :class:`decoding.SearchLight`
but when predicting it will return the prediction of each model
for all time instants. The class :class:`decoding.GeneralizationLight`
is generic and will treat the last dimension as the different tasks.

@dengemann

dengemann Mar 28, 2017

Member

Maybe do something like:
"is generic and will treat the last dimension as the one to be used for generalization testing. For convenience, here, we refer to it different tasks."

mne/decoding/base.py
y : array-like, optional, default: None
The target variable to try to predict in the case of
supervised learning.

@jona-sassenhagen

jona-sassenhagen Mar 28, 2017

Contributor

Sklearn says "Target values"

So maybe "If supervised: target values"

@jona-sassenhagen

jona-sassenhagen Mar 28, 2017

Contributor

Ah I see, as this is copied from sklearn, it's probs best to keep as is.

mne/decoding/base.py
y : array-like, optional, default: None
The target variable to try to predict in the case of
supervised learning.

@dengemann

dengemann Mar 28, 2017

Member

are these doc strings more or less copied from sklearn?

@kingjr

kingjr Mar 28, 2017

Member

yes

@dengemann

dengemann Mar 29, 2017

Member

ok. good then

mne/decoding/base.py
Returns
-------
scores : array of float, shape=(len(list(cv)),) | array of array

@dengemann

dengemann Mar 28, 2017

Member

is this really array of array (in the object array sense?) -- I feel it should just tell the different shapes.

@larsoner

larsoner Mar 31, 2017

Member

this is sklearn copy-paste

mne/decoding/base.py
scores = parallel(p_func(clone(estimator), X, y, scorer, train, test,
verbose, None, fit_params)
for train, test in cv_iter)
return np.array(scores)[:, 0, ...]

@dengemann

dengemann Mar 28, 2017

Member

maybe it would be nicer to use a common Python expression for the flattening problem

scores = sum(scores, list())

or

scores = [a for b in scores for a in b]

@kingjr

kingjr Mar 28, 2017

Member

I prefer retrieving a numpy array
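
For context, the shapes involved: each parallel worker returns a 1-tuple wrapping one fold's score array, and the review weighed list flattening against the numpy indexing the PR keeps. A small sketch with made-up scores:

```python
import numpy as np

# each worker returns a 1-tuple holding one fold's per-time score array
scores = [(np.array([0.8, 0.9]),),
          (np.array([0.7, 0.6]),),
          (np.array([0.9, 0.5]),)]

# review suggestion: flatten with a comprehension (or an equivalent sum())
flat = [a for b in scores for a in b]   # list of 3 arrays, shape (2,) each

# what the PR does: stack via numpy and drop the singleton tuple axis
arr = np.array(scores)[:, 0, ...]       # shape (3, 2): folds x time points
print(arr.shape)
```

The numpy route has the advantage that the result is already an ndarray of the final shape, whatever the dimensionality of each fold's score.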

@dengemann

done with my round of reviews here.

# Adjust length of sample weights
fit_params = fit_params if fit_params is not None else {}
fit_params = dict([(k, _index_param_value(X, v, train))

@dengemann

dengemann Mar 28, 2017

Member

Do we still support Python 2.6? @Eric89GXL
We could probably use the dict comprehension "{k: _index..." or at least do dict((k, _index_param_value(X, v, train)) ...

@larsoner

larsoner Mar 28, 2017

Member

No we dropped 2.6 like a bad habit
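
The two spellings under discussion are equivalent; dict comprehensions arrived in Python 2.7, while the dict() + generator form also ran on 2.6. A toy version — the simplified `_index_param_value` below is a hypothetical stand-in, not the real helper (which also takes `X`):

```python
fit_params = {'sample_weight': [0.5, 1.0, 1.5, 2.0]}
train = [0, 2]

def _index_param_value(v, idx):
    # simplified stand-in: restrict a per-sample parameter to the train split
    return [v[i] for i in idx]

# dict comprehension (Python >= 2.7), the form suggested in review
indexed = {k: _index_param_value(v, train) for k, v in fit_params.items()}

# dict() + generator, the Python 2.6-compatible spelling
indexed_26 = dict((k, _index_param_value(v, train)) for k, v in fit_params.items())

print(indexed)  # {'sample_weight': [0.5, 1.5]}
```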

score = scorer(estimator, X_test)
else:
score = scorer(estimator, X_test, y_test)
if hasattr(score, 'item'):

@dengemann

dengemann Mar 28, 2017

Member

which kind of class do we test for here? maybe testing for the class is easier here.

@kingjr

kingjr Mar 28, 2017

Member

don't know, this is copied/pasted from sklearn directly

@requires_sklearn_0_15
def test_cross_val_multiscore():

@dengemann

dengemann Mar 28, 2017

Member

Add comment here for verbose testing.

mne/decoding/tests/test_base.py
assert_raises(ValueError, cross_val_multiscore, clf, X, y, cv=cv,
scoring='roc_auc')
clf = SearchLight(LogisticRegression(), scoring='roc_auc')
scores_auc = cross_val_multiscore(clf, X, y, cv=cv, n_jobs=2)

@dengemann

dengemann Mar 28, 2017

Member

can we just use 1 job here? the code path should be the same

mne/decoding/tests/test_search_light.py
gl = _GeneralizationLight(LogisticRegression())
assert_equal(gl.__repr__()[:22], '<_GeneralizationLight(')
gl = GeneralizationLight(LogisticRegression())
assert_equal(gl.__repr__()[:21], '<GeneralizationLight(')

@dengemann

dengemann Mar 28, 2017

Member

maybe repr(gl)[:21] is more beautiful
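
The suggestion is purely stylistic: the builtin `repr(obj)` dispatches to `obj.__repr__()`, so slicing either gives the same prefix check. Sketched with a hypothetical class, not the real estimator:

```python
class GeneralizingToy(object):
    """Hypothetical stand-in for the estimator under test."""
    def __repr__(self):
        return '<GeneralizingToy(base=LogisticRegression())>'

gl = GeneralizingToy()
# the builtin and the dunder call are equivalent; repr() reads better
assert repr(gl) == gl.__repr__()
print(repr(gl)[:17])  # '<GeneralizingToy('
```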

mne/decoding/tests/test_search_light.py
@@ -197,7 +197,7 @@ def test_GeneralizationLight():
assert_array_equal(score, manual_score)
# n_jobs
gl = _GeneralizationLight(LogisticRegression(), n_jobs=2)
gl = GeneralizationLight(LogisticRegression(), n_jobs=2)

@dengemann

dengemann Mar 28, 2017

Member

why 2 jobs?

@kingjr

kingjr Mar 28, 2017

Member

I'm testing that test doesn't break for multijobs

mne/decoding/time_gen.py
@@ -872,6 +872,8 @@ def _predict(X, estimators, vectorize_times, predict_method):
return y_pred
@deprecated('GeneralizationAcrossTime is deprecated and will be removed in '
' 0.15, use _GeneralizationLight instead.')

@dengemann

dengemann Mar 28, 2017

Member

GeneralizationLight

mne/decoding/time_gen.py
@@ -1203,6 +1205,8 @@ def plot_times(self, train_time, title=None, xmin=None, xmax=None,
legend=legend, chance=chance, label=label)
@deprecated('TimeDecoding is deprecated and will be removed in '
' 0.15, use _SearchLight instead.')

@dengemann

dengemann Mar 28, 2017

Member

SearchLight

@dengemann

Member

dengemann commented Mar 28, 2017

what's new is conflicting @kingjr ...

@kingjr

Member

kingjr commented Mar 28, 2017

@dengemann, Since we're in coding sprint mode, I'll wait until we're good to go to rebase

@larsoner

Member

larsoner commented Mar 28, 2017

There is a Travis error related to un-nested nose import. I suspect rebasing might fix it.

mne/decoding/tests/test_base.py
from ..search_light import _SearchLight
from ..base import (_get_inverse_funcs, LinearModel, get_coef,
cross_val_multiscore)
from ..search_light import SlidingEstimator

@agramfort

agramfort Mar 31, 2017

Member

use full import for tests

mne/decoding/tests/test_search_light.py
@@ -7,7 +7,7 @@
from numpy.testing import assert_array_equal
from nose.tools import assert_raises, assert_true, assert_equal
from ...utils import requires_sklearn_0_15
from ..search_light import _SearchLight, _GeneralizationLight
from ..search_light import SlidingEstimator, GeneralizingEstimator
from .. import Vectorizer

@agramfort

agramfort Mar 31, 2017

Member

full imports needed

@agramfort agramfort changed the title from WIP: refactor time decoding & temporal generalization to MRG: refactor time decoding & temporal generalization Mar 31, 2017

kingjr added some commits Mar 31, 2017

@larsoner

Otherwise LGTM

correspond to decoding the patterns of brain activity recorded at distinct time
instants.
The object to for Temporal Generalization is

@larsoner

larsoner Mar 31, 2017

Member

Temporal Generalization here and Temporal generalization above, please make consistent (is it an official algorithm name?)

@kingjr

kingjr Mar 31, 2017

Member

official naming (from ref) with upper case everywhere now

doc/whats_new.rst
@@ -38,6 +40,12 @@ API
- Make the goodness of fit (GOF) of the dipoles returned by :func:`mne.beamformer.rap_music` consistent with the GOF of dipoles returned by :func:`mne.fit_dipole` by `Alex Gramfort`_.
- :class:`mne.decoding.SlidingEstimator` will now replace `mne.decoding.TimeDecoding` to make it generic and fully compatible with scikit-learn, by `Jean-Remi King`_ and `Alexandre Gramfort`_

@larsoner

larsoner Mar 31, 2017

Member

double-backticks on the second one to render as code instead of broken Sphinx link

doc/whats_new.rst
@@ -38,6 +40,12 @@ API
- Make the goodness of fit (GOF) of the dipoles returned by :func:`mne.beamformer.rap_music` consistent with the GOF of dipoles returned by :func:`mne.fit_dipole` by `Alex Gramfort`_.
- :class:`mne.decoding.SlidingEstimator` will now replace `mne.decoding.TimeDecoding` to make it generic and fully compatible with scikit-learn, by `Jean-Remi King`_ and `Alexandre Gramfort`_
- :class:`mne.decoding.GeneralizingEstimator` will now replace `mne.decoding.GeneralizationAcrossTime` to make it generic and fully compatible with scikit-learn, by `Jean-Remi King`_ and `Alexandre Gramfort`_

@larsoner

larsoner Mar 31, 2017

Member

same

doc/whats_new.rst
- :class:`mne.decoding.GeneralizingEstimator` will now replace `mne.decoding.GeneralizationAcrossTime` to make it generic and fully compatible with scikit-learn, by `Jean-Remi King`_ and `Alexandre Gramfort`_
- :func:`mne.viz.decoding.plot_gat_times`, :func:`mne.viz.decoding.plot_gat_matrix` are now deprecated. Use matplotlib instead as shown in the examples, by `Jean-Remi King`_ and `Alexandre Gramfort`_

@larsoner

larsoner Mar 31, 2017

Member

if they're deprecated don't link to them, they should be removed from doc (but they haven't been?) and so the link will fail

@kingjr

kingjr Mar 31, 2017

Member

they will be removed at the next cycle. shall I just double back tick them?

@larsoner

larsoner Mar 31, 2017

Member

yeah
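
A replacement for the deprecated plotting helpers needs only a few lines of matplotlib, roughly along these lines. The scores matrix here is a random stand-in; in practice it would come from scoring the generalizing estimator:

```python
import matplotlib
matplotlib.use('Agg')   # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

scores = np.random.RandomState(0).rand(10, 10)   # stand-in (train x test) matrix
times = np.linspace(-0.1, 0.4, 10)

fig, ax = plt.subplots()
im = ax.imshow(scores, origin='lower', cmap='RdBu_r', vmin=0., vmax=1.,
               extent=[times[0], times[-1], times[0], times[-1]])
ax.set_xlabel('Testing time (s)')
ax.set_ylabel('Training time (s)')
ax.set_title('Temporal Generalization')
fig.colorbar(im, ax=ax)
fig.savefig('temporal_generalization.png')
```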

doc/python_reference.rst
SearchLight
GeneralizationLight
SlidingEstimator
GeneralizingEstimator

@larsoner

larsoner Mar 31, 2017

Member

remove the deprecated plotting functions mentioned in whats_new.rst

@larsoner larsoner changed the title from MRG: refactor time decoding & temporal generalization to MRG+1: refactor time decoding & temporal generalization Mar 31, 2017

kingjr and others added some commits Mar 31, 2017

@agramfort agramfort merged commit aa4e189 into mne-tools:master Mar 31, 2017

0 of 2 checks passed: ci/circleci (tests running), continuous-integration/appveyor/pr (waiting for build)
@codecov-io

codecov-io Mar 31, 2017

Codecov Report

Merging #4103 into master will increase coverage by 0.28%.
The diff coverage is 88.06%.

@@            Coverage Diff             @@
##           master    #4103      +/-   ##
==========================================
+ Coverage   86.18%   86.46%   +0.28%     
==========================================
  Files         354      276      -78     
  Lines       63748    60600    -3148     
  Branches     9711     9597     -114     
==========================================
- Hits        54940    52397    -2543     
+ Misses       6130     5741     -389     
+ Partials     2678     2462     -216

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 20cfc04...442ab71. Read the comment docs.

