WIP: refactor XDawn to use rERP #3563

Open · wants to merge 17 commits

Conversation · 8 participants
jona-sassenhagen (Contributor) commented Sep 2, 2016

Internally, XDawn and linear_regression_raw (rERP) do basically the same thing. rERP is currently set up to do some things that aren't required of XDawn (particularly continuous predictors), so it makes sense to have XDawn call rERP "under the hood" instead of the reverse. This PR begins to do so.

I assume a bunch of stuff doesn't work so far, but at least some tests pass and the main example works:

(screenshot: main example output, 2016-09-02)

closes #2332
An issue over 1 year old!

@alexandrebarachant @kingjr

mne/preprocessing/tests/test_xdawn.py
@@ -136,6 +136,9 @@ def test_xdawn_regularization():
modified_event = events[sel[0]]
modified_event[0] += 1
epochs.events[sel[1]] = modified_event
sel = np.where(events[:, 2] == 2)[0][:2]
epochs.events[sel[1], 0] = events[sel[1], 0] + 1
epochs.events[sel[1], 2] = events[sel[1], 2]

jona-sassenhagen (Contributor) commented Sep 2, 2016

Not sure what exactly is going on here. Needs to be more robust.

kingjr (Member) commented Sep 2, 2016

I think it basically moves the start of one event to the start of the next event + 1. This aims at creating an overlap.

kingjr (Member) commented Sep 2, 2016

But yeah, I agree this is not sufficiently explicit or robust.

mne/preprocessing/xdawn.py
@@ -16,7 +16,7 @@
from ..io.pick import _pick_data_channels
from ..utils import logger
from ..externals.six import iteritems, itervalues
from ..stats.regression import _prepare_rerp_data, _prepare_rerp_preds, _solver

jona-sassenhagen (Contributor) commented Sep 2, 2016

XDawn on master uses a pinv solver; this is a Cholesky solver.

kingjr (Member) commented Sep 2, 2016

Which one is best in your opinion? Cholesky is faster, right?

jona-sassenhagen (Contributor) commented Sep 2, 2016

It's faster for the scenarios I tested it on (which were, admittedly, a bunch). For the XDawn case, with a low number of sparse predictors and short time windows (compared to what I'm usually doing: many predictors, including a few dense ones, and some very long time windows), it should usually be really fast anyway.

If it's not sufficiently fast, I'll have to change it. The modularity actually makes it really easy to swap in another solver.

The drawback I experienced with the Cholesky- vs. the pinv-based one is that the Cholesky one is a bit less stable.
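
As a rough illustration of the two solver families under discussion (a minimal sketch, not the PR's code; the function names are hypothetical), given a sparse design matrix X and channels-by-samples data y:

import numpy as np
from scipy import linalg

def pinv_solver(X, y):
    # robust but slower: the pseudo-inverse tolerates ill-conditioned X
    return np.dot(linalg.pinv(X.toarray()), y.T).T

def cho_solver(X, y):
    # faster: solve the normal equations (X'X) b = X'y with a
    # symmetric positive-definite solve; less stable if X'X is ill conditioned
    a = (X.T * X).toarray()  # dense Gram matrix from a sparse product
    return linalg.solve(a, X.T * y.T, sym_pos=True).T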

kingjr (Member) commented Sep 2, 2016

OK, if it's not numerically the same, I would add a 'solver' string param to Xdawn, but keep it at pinv by default to facilitate the deprecation cycle.

The XdawnTransformer could use cholesky by default.

jona-sassenhagen (Contributor) commented Sep 2, 2016

It should be numerically identical to within, or at least very close to, machine precision for non-degenerate cases. Only when the matrix becomes very ill conditioned does the (non-regularized) Cholesky perform a bit worse. On the other hand, it's very easy to regularize.

But yes, I wouldn't mind having both. Giving it a 'solver' kwarg means it's easy to e.g. regularize.

mne/preprocessing/xdawn.py
# shape data correctly (split by class)
evokeds = np.asarray(np.hsplit(coefs, len(classes)))
return evokeds, np.vsplit(X.toarray().T, len(classes))

jona-sassenhagen (Contributor) commented Sep 2, 2016

linear_regression_raw splits by class only when it constructs the Evoked objects. I guess that could be used here instead, but this is shorter.

jona-sassenhagen (Contributor) commented Sep 2, 2016

linear_regression_raw has a more complex task when constructing the evokeds because predictors can be selected to have different lengths.
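
To make that split-by-class step concrete, a tiny toy demo (shapes invented for illustration):

import numpy as np

n_channels, n_times, classes = 4, 3, [1, 2]
# coefs: channels x (classes * times), per-class blocks stacked column-wise
coefs = np.arange(n_channels * n_times * len(classes)).reshape(n_channels, -1)
evokeds = np.asarray(np.hsplit(coefs, len(classes)))
print(evokeds.shape)  # (2, 4, 3): one (channels x times) array per class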

@kingjr kingjr referenced this pull request Sep 2, 2016

Open

ENH: decoding module 2017 #3442

26 of 38 tasks complete
kingjr (Member) commented Sep 2, 2016

> An issue over 1 year old!

Great, thanks for taking care of this.

@jona-sassenhagen jona-sassenhagen changed the title from WIP: init refactor XDawn to use rERP to WIP: refactor XDawn to use rERP Sep 2, 2016

mne/stats/regression.py
@@ -275,24 +271,30 @@ def solver(X, y):
return evokeds
def _prepare_rerp_data(raw, events, picks=None, decim=1):
def _prepare_rerp_data(raw, events, picks=None, decim=1, sfreq=None):
"""Prepare events and data, primarily for `linear_regression_raw`. See
there for an explanation of parameters and output."""

kingjr (Member) commented Sep 2, 2016

Add to the docstring that raw can be an MNE Raw object or a numpy array, for future dev.

jona-sassenhagen (Contributor) commented Sep 2, 2016

I'm a bit inclined to rename all the XDawn variables that sound like MNE class instances - raw, evoked etc. - but are actually just np arrays, because all of these fooled me when doing this.

jona-sassenhagen (Contributor) commented Sep 2, 2016

Also, maybe it would make sense to update linear_regression_raw to be less reliant on MNE classes, and more on bare np arrays anyway.

mne/stats/regression.py
def _solver(X, y):
    a = (X.T * X).toarray()  # dot product of sparse matrices

kingjr (Member) commented Sep 2, 2016

I think there's a fast_dot somewhere, but I don't know whether it's worth it here.

jona-sassenhagen (Contributor) commented Sep 2, 2016

This is guaranteed to be a sparse matrix, so the fast_dot doesn't help, I think.

jaeilepp (Contributor) commented Sep 2, 2016

The whole function is over-indented.

mne/stats/regression.py
def _solver(X, y):
    a = (X.T * X).toarray()  # dot product of sparse matrices
    return linalg.solve(a, X.T * y.T, sym_pos=True,
                        overwrite_a=True, overwrite_b=True).T

kingjr (Member) commented Sep 2, 2016

alignment

jona-sassenhagen (Contributor) commented Sep 2, 2016

While I'm at it, I'm considering giving a string option to use ridge regression from sklearn as the solver (it's what I do at home). @kingjr @agramfort @alexandrebarachant ?

Also, the pinv solver is not precisely the same - linear_regression_raw creates X as sparse, whereas XDawn on master creates it as dense.

jona-sassenhagen (Contributor) commented Sep 2, 2016

In #3425, I had a different idea of how to approach this:

import mne
from mne.stats.regression import _prepare_rerp_preds, _make_evokeds
from mne.preprocessing.xdawn import _construct_signal_from_epochs
from sklearn.linear_model import Ridge
from scipy import sparse

raw_ = _construct_signal_from_epochs(
    events, epochs.info["sfreq"], tmin, tmax, epochs.get_data())

X, conds, cond_length, tmin_s, tmax_s = _prepare_rerp_preds(
    raw_.n_times, raw_.info["sfreq"], mne.find_events(raw_),
    event_id, tmin, tmax, stack=False)
coefs = Ridge(alpha=0).fit(sparse.hstack(X), raw_._data).coef_  # sklearn attribute is coef_
evokeds = _make_evokeds(coefs, conds, cond_length, tmin_s, tmax_s, raw_.info)
toeplitzs = [t_.toarray() for t_ in X]

This would require changing the linear_regression_raw code a bit more. But I'm not sure anymore it actually works.

But right now, _prepare_rerp_preds is stacking, and xdawn is later splitting again. That's a bit uneconomic. I'll think about it.

codecov-io commented Sep 2, 2016

Codecov Report

Merging #3563 into master will decrease coverage by <.01%.
The diff coverage is 82.43%.

@@            Coverage Diff             @@
##           master    #3563      +/-   ##
==========================================
- Coverage   86.18%   86.18%   -0.01%     
==========================================
  Files         354      354              
  Lines       63729    63749      +20     
  Branches     9709     9713       +4     
==========================================
+ Hits        54927    54939      +12     
- Misses       6127     6134       +7     
- Partials     2675     2676       +1

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a092868...02afac0.

jona-sassenhagen (Contributor) commented Sep 2, 2016

Amazingly, we are green (but for reduced coverage, because I'm not checking the pinv solver for linear_regression_raw nor the cho solver for xdawn).

mne/preprocessing/xdawn.py
if isinstance(solver, string_types):
    if solver == "pinv":
        from ..stats.regression import _pinv_solver as solver
    elif solver == 'cholesky':  # noqa

agramfort (Member) commented Sep 3, 2016

why this noqa?

jona-sassenhagen (Contributor) commented Sep 3, 2016

Cause I'm stupid. I wanted to avoid this being counted for coverage, I'll remove it.

I don't get what lines are. It being checked/why I get these coverage drops?

jona-sassenhagen (Contributor) commented Sep 3, 2016

I meant: I don't get what lines are counted as not being checked/why I get the drop in coverage.

kingjr (Member) commented Sep 6, 2016

I think it's because you don't have a test that validates solver in ['pinv', 'cholesky'].

jona-sassenhagen (Contributor) commented Oct 24, 2016

I don't want to test all options ... one is default for XDawn, the other for rERP. See:

> I'm not checking the pinv solver for linear_regression_raw nor the cho solver for xdawn

Any solution?

mne/stats/regression.py
                overwrite_a=True, overwrite_b=True).T
        solver = _cho_solver
    elif solver == "pinv":  # noqa

agramfort (Member) commented Sep 3, 2016

same remark

mne/preprocessing/xdawn.py
@@ -62,6 +69,12 @@ def _least_square_evoked(epochs_data, events, tmin, sfreq):
A concatenated array of Toeplitz matrices for each event type.
"""
if isinstance(solver, string_types):

jaeilepp (Contributor) commented Sep 5, 2016

IMO this check is quite unnecessary. This is a private function, so you can just check:

if solver == "pinv":
    ...
elif solver == 'cholesky':
    ...

jona-sassenhagen (Contributor) commented Sep 5, 2016

Yes, makes sense.
Waiting with further steps for comments by @alexandrebarachant ...

kingjr (Member) commented Sep 6, 2016

> While I'm at it, I'm considering giving a string option to use ridge regression from sklearn as the solver (it's what I do at home). @kingjr @agramfort @alexandrebarachant ?

OK, don't see any reason why not.

> Also, the pinv solver is not precisely the same - linear_regression_raw creates X as sparse, whereas XDawn on master creates it as dense.

Is this an issue? Can you change the init of the dtype?

kingjr (Member) commented Sep 12, 2016

@jona-sassenhagen How are we doing over here?

jona-sassenhagen (Contributor) commented Sep 13, 2016

Not much - 1. I'm on vacation, 2. I'd prefer to wait for a comment by @alexandrebarachant

jona-sassenhagen (Contributor) commented Oct 24, 2016

@alexandrebarachant maybe you have time to check now ..?

mne/preprocessing/xdawn.py
@@ -53,6 +54,12 @@ def _least_square_evoked(epochs_data, events, tmin, sfreq):
Start time before event.
sfreq : float
Sampling frequency.
solver : str | function
Either a function which takes as its inputs the sparse predictor

kingjr (Member) commented Oct 24, 2016

remove 'as its inputs'

mne/preprocessing/xdawn.py
if solver == "pinv":
    from ..stats.regression import _pinv_solver as solver
elif solver == 'cholesky':  # noqa

kingjr (Member) commented Oct 24, 2016

why noqa?

mne/stats/regression.py
@@ -228,7 +228,8 @@ def linear_regression_raw(raw, events, event_id=None, tmin=-.1, tmax=1,
Either a function which takes as its inputs the sparse predictor
matrix X and the observation matrix Y, and returns the coefficient
matrix b; or a string. If str, must be ``'cholesky'``, in which case
the solver used is ``linalg.solve(dot(X.T, X), dot(X.T, y))``.
the solver used is ``linalg.solve(dot(X.T, X), dot(X.T, y))``, or
``'pinv'``, in which a solver based on a pseudo-inverse is used.

kingjr (Member) commented Oct 24, 2016

case

mne/stats/regression.py
        solver = _cho_solver
    elif solver == "pinv":
        solver = _pinv_solver

kingjr (Member) commented Oct 24, 2016

I'm confused, how is this not redundant with L74

jona-sassenhagen (Contributor) commented Oct 24, 2016

One for xdawn, one for rERP.

larsoner (Member) commented Oct 24, 2016

I think he means in file xdawn.py. Might be better to make a little helper _get_solver(kind) and use it here and in xdawn.py
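
A minimal sketch of the suggested helper (the name _get_solver is hypothetical, not code from this PR), to live somewhere importable by both modules:

def _get_solver(solver):
    """Resolve a solver argument (string or callable) to a callable."""
    from ..stats.regression import _cho_solver, _pinv_solver
    if callable(solver):
        return solver
    elif solver == "cholesky":
        return _cho_solver
    elif solver == "pinv":
        return _pinv_solver
    raise ValueError("solver must be 'cholesky', 'pinv' or a callable, "
                     "got %r" % (solver,))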

mne/stats/regression.py
if hasattr(raw, "info"):
    if picks is None:
        picks = pick_types(raw.info, meg=True, eeg=True, ref_meg=True)

kingjr (Member) commented Oct 24, 2016

ecog, seeg etc. See private function data_channels

jona-sassenhagen (Contributor) commented Oct 24, 2016

Where is it?..

kingjr (Member) commented Oct 24, 2016

mne.io.pick._pick_data_channels

jona-sassenhagen (Contributor) commented Oct 24, 2016

Thanks
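
For reference, a usage sketch of that private helper (the exclude argument is an assumption about this private API, which may differ across versions):

from mne.io.pick import _pick_data_channels
picks = _pick_data_channels(raw.info, exclude='bads')  # all data channel types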

mne/stats/regression.py
        picks = pick_types(raw.info, meg=True, eeg=True, ref_meg=True)
    info = pick_info(raw.info, picks)
    info["sfreq"] /= decim
    data, times = raw[:]

kingjr (Member) commented Oct 24, 2016

wow, didn't know that

mne/stats/regression.py
def _cho_solver(X, y):
    a = (X.T * X).toarray()  # dot product of sparse matrices
    return linalg.solve(a, X.T * y.T, sym_pos=True,
                        overwrite_a=True, overwrite_b=True).T

kingjr (Member) commented Oct 24, 2016

the overwrite isn't an issue for raw?

jona-sassenhagen (Contributor) commented Oct 24, 2016

It's not overwriting X or y, but (X.T * X).toarray() and X.T * y.T, right?

kingjr (Member) commented Oct 24, 2016

I would manually check it. X.T is identical to X in terms of data.

kingjr (Member) commented Oct 24, 2016

my bad, you're right

larsoner (Member) commented Oct 24, 2016

X.T is indeed a view (identical memory/data), but the results of multiplications will not be views, they will be new arrays
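
A quick self-contained check of that point (illustrative only):

import numpy as np

X = np.arange(1., 7.).reshape(2, 3).copy()
assert X.T.base is X     # the transpose is a view onto X's memory
a = np.dot(X.T, X)       # ...but the product is a freshly allocated array
assert a.base is None
a[:] = 0                 # clobbering it (as overwrite_a would) leaves X intact
assert X[0, 0] == 1.0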

jona-sassenhagen (Contributor) commented Oct 26, 2016

Comments addressed (@kingjr).

XDawn example outputs still look good ... Maybe if @alexandrebarachant doesn't show up, we should consider merging if it turns up green.

trig = np.zeros((n_samples, 1))
ix_trig = (events[sel, 0]) + n_min
trig[ix_trig] = 1
toeplitz.append(linalg.toeplitz(trig[0:window], trig))

agramfort (Member) commented Oct 27, 2016

where did this fancy code go?

are we sure that rERP code is as efficient as this one?

jona-sassenhagen (Contributor) commented Oct 27, 2016

That's being handled inside _prepare_rerp_preds now. The most clearly similar part is in line 355, with sparse.dia_matrix.

It used to be toeplitz there, too, IIRC, but we went with sparse after a while.

Timing the xdawn test, I get this patch making things a bit slower (~15%) all in all. Haven't tested it on a real use scenario, maybe @kingjr has one?

agramfort (Member) commented Oct 27, 2016

jona-sassenhagen (Contributor) commented Oct 27, 2016

That's not unexpected - rERP is a more general technique, so it does a bit more work.

Also, Xdawn defaults to using a slightly slower pinv solver, whereas rERP is using a cholesky-based solver.

That's without benchmarking it. It's possible rERP could be optimized further.

agramfort (Member) commented Oct 28, 2016

@alexandrebarachant any thought on this? possible to squeeze some time from this new approach?

a challenge for @Eric89GXL ?

jona-sassenhagen (Contributor) commented Oct 28, 2016

@agramfort a major speed (and memory) improvement could be expected to come from fully implementing NJ Smith's original algorithm, which does incremental fitting. See
https://github.com/rerpy/rerpy/blob/master/rerpy/rerp.py#L525
and particularly
https://github.com/rerpy/rerpy/blob/master/rerpy/rerp.py#L1314

jona-sassenhagen (Contributor) commented Oct 31, 2016

@agramfort what do we do if @alexandrebarachant never comes around to this? :)

kingjr (Member) commented Oct 31, 2016

So what do alex b and I do here, exactly? Do you need extra code review, or benchmarking?

jona-sassenhagen (Contributor) commented Oct 31, 2016

Yeah, can you maybe check whether you get good results, and how much speed is lost, with real data of yours?
From Alex b I'd be interested in a check of code sanity.

kingjr (Member) commented Nov 1, 2016

Numerically the same scores; in terms of speed there is no super clear pattern; probably a bit slower.

(benchmark figure: xdawn_jona)

jona-sassenhagen (Contributor) commented Mar 29, 2017

I still have local changes ... Essentially this:

            if (len(values) * 2) > len(events):
                if not has_warned:
                    has_warned = True
                    warn("The predictor {} is dense - that is, non-zero at "
                         "more than half of the samples. This function is not"
                         " optimized for continuous predictors; consider "
                         "`mne.decoding.ReceptiveField` instead.".format(cond))
                else:
                    warn("{} is also dense.".format(cond)

and docs.
But maybe this can also work until the receptive field module is optimized.

mne/stats/regression.py
" optimized for continuous predictors; consider "
"`mne.decoding.ReceptiveField` instead.".format(cond))
else:
warn("{} is also dense.".format(cond)

alexandrebarachant (Collaborator) commented Mar 29, 2017

syntax error here. missing a parenthesis
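
For reference, the fixed line just needs its closing parenthesis:

warn("{} is also dense.".format(cond))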

jona-sassenhagen added some commits Mar 30, 2017

jona-sassenhagen (Contributor) commented Mar 30, 2017

We are green!

jona-sassenhagen (Contributor) commented Mar 30, 2017

@agramfort I added a warning and pointed towards @choldgraf 's receptive_field module for dense, that's all.

agramfort (Member) commented Mar 30, 2017

jona-sassenhagen (Contributor) commented Mar 30, 2017

alexandrebarachant (Collaborator) commented Mar 30, 2017

I ran a couple of tests on a P300 dataset, looks good to me.

alexandrebarachant (Collaborator) commented Mar 30, 2017

Okay, for dataset P300 BNCI 003-2015 (2700 events):

before the PR: 1.2 seconds

Line #    Mem usage    Increment   Line Contents
================================================
    54  191.078 MiB    0.000 MiB   @profile
    55                             def my_func():
    56  191.078 MiB    0.000 MiB       xd = Xdawn(correct_overlap=True)
    57  218.562 MiB   27.484 MiB       xd.fit(ep)

with the new version: 0.37 seconds

Line #    Mem usage    Increment   Line Contents
================================================
    55  188.859 MiB    0.000 MiB   @profile
    56                             def my_func():
    57  188.859 MiB    0.000 MiB       xd = Xdawn(correct_overlap=True)
    58  197.691 MiB    8.832 MiB       xd.fit(ep)

with another dataset (~5000 events):

before: 8 seconds

Line #    Mem usage    Increment   Line Contents
================================================
    55 1399.711 MiB    0.000 MiB   @profile
    56                             def my_func():
    57 1399.711 MiB    0.000 MiB       xd = Xdawn(correct_overlap=True)
    58 1410.000 MiB   10.289 MiB       xd.fit(ep)

after: 2.6 seconds

Line #    Mem usage    Increment   Line Contents
================================================
    55 1399.578 MiB    0.000 MiB   @profile
    56                             def my_func():
    57 1399.578 MiB    0.000 MiB       xd = Xdawn(correct_overlap=True)
    58 1456.324 MiB   56.746 MiB       xd.fit(ep)
alexandrebarachant (Collaborator) commented Mar 30, 2017

I made a small numerical comparison.
I don't have the same coefficients for the filters, but it looks close enough.

(image: filter coefficients, old vs. new)

However, this leads to a small but consistent decrease in decoding performance.

(image: decoding performance comparison)

I found a difference in the toeplitz matrices; I'm looking at this in detail.

jona-sassenhagen (Contributor) commented Mar 30, 2017

Hm. That is troubling.

jona-sassenhagen (Contributor) commented Mar 31, 2017

@alexandrebarachant consider

import numpy as np
from scipy import sparse, linalg

tpoints = 15
winlen = 10
events = np.zeros(tpoints)
idx = np.array([3, 7])
events[idx] = 1

print("xdawn master\n", linalg.toeplitz(events[0:winlen], events))

shape = (winlen, tpoints)
rect_evs = np.ones((len(idx), winlen))
print("\nlin_reg_raw master\n", sparse.dia_matrix((rect_evs, idx), shape=shape).toarray())
print("\n...\n", sparse.diags(rect_evs, idx, shape=shape).toarray())

Isn't only the last one correct? Or did I misimplement any ..?
sparse.dia_matrix is "missing" trials (if the beginning and end of the array are too small), linalg.toeplitz is "adding" trials.
I think it should be sparse.diags?.. What do you think?

Running this on plot_decoding_xdawn with overlap forced on, I actually get slightly better results for the sparse ones (f1=64 vs. 61).

alexandrebarachant (Collaborator) commented Mar 31, 2017

Yes, I noticed that the Xdawn toeplitz isn't correct; it looks like it is adding some extra events at the beginning of the recording.

However, this should not impact the performance in such a big way, and I'm actually thinking there is another bug somewhere else.
I'm looking into it.

jona-sassenhagen (Contributor) commented Apr 10, 2017

@alexandrebarachant , have you found something ..?

larsoner (Member) commented Apr 18, 2017

Also related: #2782

@agramfort agramfort changed the title from MRG: refactor XDawn to use rERP to [MRG] refactor XDawn to use rERP Jun 25, 2017

larsoner (Member) commented Aug 4, 2017

Anyone have time for this in the next couple of weeks? If not I'll remove the 0.15 milestone.

@larsoner larsoner removed this from the 0.15 milestone Aug 4, 2017

kingjr (Member) commented Aug 5, 2017

jona-sassenhagen (Contributor) commented Aug 5, 2017

The problem is the discrepancy between rERP and XDawn. See @alexandrebarachant's comments from March 30th. I couldn't think of a reason why they might differ.

larsoner (Member) commented Aug 5, 2017

Somebody probably needs to take the time to dig into the computations and compare step by step (if possible).

jona-sassenhagen (Contributor) commented Aug 5, 2017

larsoner (Member) commented Aug 5, 2017

Done

kingjr (Member) commented Aug 6, 2017

@alexandrebarachant any chance you could give it a look?

@larsoner larsoner changed the title from [MRG] refactor XDawn to use rERP to WIP: refactor XDawn to use rERP Aug 11, 2017

larsoner (Member) commented Aug 11, 2017

Re-titled to WIP until the issues are settled.

choldgraf (Contributor) commented Aug 11, 2017

Who the heck is this larsoner person and why does he have commit rights?!

jona-sassenhagen (Contributor) commented Jul 30, 2018

@larsoner looks like this actually provided substantial speedup over XDawn as-is, but may lead to somewhat different results.

larsoner (Member) commented Jul 30, 2018

Do we know why the results might be somewhat different?

jona-sassenhagen (Contributor) commented Jul 30, 2018

No :(

larsoner (Member) commented Jul 30, 2018

I think until someone figures that out we can't really move forward. Do you agree?

In principle at some point these operations should lead to the same call to scipy.linalg, but maybe they do not. IIRC Xdawn uses scipy.linalg.eigh. I would try to get the inputs to both functions to be the same (and simple / understandable, ideally with impulses mostly!) and see where they diverge.
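
Such a comparison could look roughly like this (a synthetic sketch, not code from the PR; the two routes may differ in shapes and conventions, which is part of what needs checking):

import numpy as np
import mne

sfreq = 100.
rng = np.random.RandomState(0)
data = rng.randn(2, 1000) * 1e-6              # 2 EEG channels, 10 s
info = mne.create_info(['EEG 001', 'EEG 002'], sfreq, 'eeg')
raw = mne.io.RawArray(data, info)
events = np.array([[100, 0, 1], [300, 0, 1], [600, 0, 1]])

# rERP route: regress the continuous data on lagged event predictors
rerp = mne.stats.linear_regression_raw(
    raw, events, event_id={'a': 1}, tmin=0., tmax=0.5)

# Xdawn route: epoch the same data, then fit with overlap correction
epochs = mne.Epochs(raw, events, {'a': 1}, tmin=0., tmax=0.5,
                    baseline=None, preload=True)
xd = mne.preprocessing.Xdawn(correct_overlap=True)
xd.fit(epochs)

# the least-squares evoked estimates should in principle agree
print(rerp['a'].data.shape, xd.evokeds_['a'].data.shape)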

jona-sassenhagen (Contributor) commented Jul 30, 2018

I was hoping to bait you into figuring out what's going on here, cause I haven't made progress in literally years ...

Remember though, you have two stages: the construction of the diagonal matrix (where currently, BOTH routines do something wrong), and then the linear regression. (I think XDawn actually uses pinv? At least it used to.)

But note that XDawn as implemented in MNE is doing a potentially inefficient de- and re-constructing of the continuous data into epochs that could be skipped.

larsoner (Member) commented Jul 30, 2018

There is a linalg.pinv in the _least_square_evoked step, I was thinking of a subsequent call to linalg.eigh... So should _least_square_evoked be used by both / do the same things?

jona-sassenhagen (Contributor) commented Jul 30, 2018

Ah yes, right. There is a stage where the waveform is estimated/where the coefficients for the delayed predictor matrix are calculated, and that should be identical. That is where _linear_regression_raw and the encoding module stop (right?), while XDawn goes a step further.

larsoner (Member) commented Jul 30, 2018

Okay, so the problem is that there is some inconsistency.

Does it make sense to change this PR to one that does not change the output of either function/method but does:

  1. Refactor into a shared function / functions that should do the same thing
  2. In these functions, have a mode='xdawn' | 'rerp' where it does different calculations

This would at least facilitate eventually making those calculations match, and be a concrete intermediate step we could implement and merge.

This is a prereq for eventually getting equivalence, and doing this first would make reviewing the fix-errors-and-merge bits easier.
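
For the design-matrix part, step 2 could look roughly like this (a minimal sketch with a hypothetical helper name; both branches deliberately reproduce the current behaviors discussed above, including their edge-effect quirks, so that outputs don't change yet):

import numpy as np
from scipy import linalg, sparse

def _build_design_matrix(trig_idx, n_samples, window, mode='rerp'):
    """Build the lagged design matrix for one event type."""
    if mode == 'xdawn':
        # current Toeplitz-based construction ("adding" trials at edges)
        trig = np.zeros(n_samples)
        trig[trig_idx] = 1
        return linalg.toeplitz(trig[:window], trig)
    elif mode == 'rerp':
        # current sparse diagonal construction ("missing" late trials)
        rect = np.ones((len(trig_idx), window))
        return sparse.dia_matrix((rect, trig_idx),
                                 shape=(window, n_samples)).toarray()
    raise ValueError("mode must be 'xdawn' or 'rerp', got %r" % (mode,))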

jona-sassenhagen (Contributor) commented Jul 30, 2018

Sure -- but, a shared function for which part, both constructing the design matrix and the solving? Or two shared functions?

larsoner (Member) commented Jul 30, 2018

Maybe separate ones, since the design matrix will be different for Xdawn because it virtually reconstructs the raw? Whatever you think makes sense, really, based on the code.

Also, if you have test cases that show that rERP and/or Xdawn are broken currently (as you state above), you can add them in this intermediate PR as xfail for now.
