
BUG: CTF and proj (re)application #5342

Open
larsoner opened this issue Jul 16, 2018 · 11 comments

@larsoner
Member

In principle we should be able to do the following (sketched in code after the list):

  1. apply compensation or proj
  2. compute forward on full set of channels
  3. pick channels in data (maybe dropping compensation channels)
  4. compute the inverse without reapplying proj or compensation (see #2310: the Epochs constructor re-creates and re-applies proj even when proj=False if raw.proj is True)
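
For concreteness, here is a rough sketch of that intended workflow (the CTF recording and the trans/src/bem file names are placeholders; whether step 4 works without re-application is exactly what this issue is about):

    import mne

    raw = mne.io.read_raw_ctf('recording.ds', preload=True)  # hypothetical CTF file
    raw.apply_gradient_compensation(3)   # 1. apply compensation (or apply projs)

    # 2. compute the forward solution on the full channel set
    fwd = mne.make_forward_solution(raw.info, 'trans.fif', 'src.fif', 'bem.fif')

    # 3. pick data channels, possibly dropping the compensation (reference) channels
    raw.pick_types(meg=True, ref_meg=False)

    # 4. compute the inverse without re-applying proj or compensation
    cov = mne.compute_raw_covariance(raw)
    inv = mne.minimum_norm.make_inverse_operator(raw.info, fwd, cov)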

So we need to:

  1. Always keep info['comps'].
  2. Always keep info['projs'] complete with the set of channels that were used when they were applied, i.e., do not subselect channels if proj['active'] is True (illustrated after this list).
  3. If projs have already been applied to raw, then for Epochs(..., picks=[1], proj=True), raise an error telling people to use proj=False, because they should not reapply?
  4. Check for correspondence between comps and proj and data channels in the forward object during inverse application.
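
As a small illustration of point 2 (using the MNE sample dataset purely as a stand-in, not data from this issue): each projector stores the names of the channels it was computed from in proj['data']['col_names'], and the proposal is to keep that list complete even after channels are picked.

    import os.path as op
    import mne

    fname = op.join(str(mne.datasets.sample.data_path()),
                    'MEG', 'sample', 'sample_audvis_raw.fif')
    raw = mne.io.read_raw_fif(fname)
    for proj in raw.info['projs']:
        # each projector remembers the channel set it was built from
        print(proj['desc'], proj['active'], len(proj['data']['col_names']))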

Not reducing the projs, however, would create some things to think about:

  • In master, if you apply projs, then subselect channels, then compute the inverse, the projection operator is reapplied (I think?). This is a bit weird because it means two spatial operators were actually applied, but only the last one (with fewer channels) gets applied to the lead fields.
  • Currently we get the rank for inverse computation by subtracting the number of projectors, but this only works if all channels are kept. We could still use n_proj -- we would just underestimate the rank, which is at least safer than overestimating it (and blowing up near-zero singular values); see the sketch after this list.
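
For the rank point, a minimal sketch of the two estimates being compared (the helper and its threshold are illustrative, not MNE's actual rank code; `cov` and `info` stand for any matching Covariance and Info objects):

    import numpy as np

    def sketch_rank(cov, info):
        """Compare projector-counting rank vs. a numerical rank estimate."""
        n_chan = cov.data.shape[0]
        n_proj = sum(p['active'] for p in info['projs'])
        rank_by_counting = n_chan - n_proj  # current approach: subtract projectors

        # numerical alternative: count non-negligible singular values
        s = np.linalg.svd(cov.data, compute_uv=False)
        rank_numerical = int(np.sum(s > s[0] * 1e-8))
        return rank_by_counting, rank_numerical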

This would probably take a lot of careful coding and testing to get right, but it should be doable. It seems like the cleanest option. @agramfort do you see any problems with this approach?

@bloyl
Contributor

bloyl commented Sep 16, 2018

What do you think of breaking this up into multiple PRs, with the first one being to

"Always keep info['comps']"

and to convert this RuntimeError to a RuntimeWarning:

mne-python/mne/io/pick.py

Lines 403 to 408 in 4f7f515

    if current_comp != 0:
        raise RuntimeError(
            'Compensation grade %d has been applied, but '
            'compensation channels are missing: %s\n'
            'Either remove compensation or pick compensation '
            'channels' % (current_comp, comps_missing))
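
Converted as proposed, the same check might look roughly like this (a sketch only, reusing the variables from the snippet above; the exact message and warning class would be settled in the PR):

    import warnings

    if current_comp != 0:
        warnings.warn(
            'Compensation grade %d has been applied, but '
            'compensation channels are missing: %s\n'
            'Either remove compensation or pick compensation '
            'channels' % (current_comp, comps_missing), RuntimeWarning)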

This would revert us to the pre-#5133 behavior, and you/we would have slightly more time to properly triage this?

@larsoner
Member Author

Seems reasonable as an intermediate step to me.

@larsoner
Member Author

larsoner commented Oct 4, 2018

We should do this intermediate step for 0.17 and the full fix for 0.18. @bloyl, do you have time in the next two weeks to do the RuntimeError->RuntimeWarning change? If not, I can do it.

@larsoner
Member Author

larsoner commented Oct 4, 2018

... and just a note that whatever fix we have should incorporate the gist that will eventually show up in #5384 :)

larsoner assigned larsoner and unassigned larsoner Oct 4, 2018
massich pushed a commit that referenced this issue Oct 12, 2018
…5581)

This converts the RuntimeError that was raised when removing channels needed for applied compensation matrices into a Warning.

This is an initial PR in response to issue #5342.

Closes #5384.
@bloyl
Contributor

bloyl commented Oct 15, 2018

Similar to #5610:

If possible, we should check that covariance objects match the compensation grade of the data they are being applied to.

@larsoner
Member Author

Unfortunately, for cov, IIRC we don't store the actual info['chs'] field, just the ch_names, so we can't check the grade.
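
To make the limitation concrete, a hedged sketch (the CTF file name is a placeholder): the grade can be read back from the raw's measurement info, while the covariance only keeps channel names.

    import mne

    raw = mne.io.read_raw_ctf('recording.ds', preload=True)  # hypothetical CTF file
    print(raw.compensation_grade)   # grade recoverable from info['chs'] coil types

    cov = mne.compute_raw_covariance(raw)
    print(cov.ch_names[:3])         # names only -- no per-channel info, no grade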

@agramfort
Member

agramfort commented Oct 16, 2018 via email

@larsoner
Member Author

I'll have to look deeper at the code and the output. For example, if we always store the uncompensated one, then it can be compensated and then picked on the fly. It's possible this already happens; I haven't checked.
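
For reference, a small sketch of the "compensate on the fly" idea using the existing API (the file name is a placeholder; how this interacts with picking is exactly what this issue is about):

    import mne

    raw = mne.io.read_raw_ctf('recording.ds', preload=True)  # hypothetical CTF file
    raw.apply_gradient_compensation(0)  # work with uncompensated data
    raw.apply_gradient_compensation(3)  # switch to third-order gradient compensation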

@dengemann
Member

@agramfort @larsoner greets from Lyon. People run into this issue and it blocks their progress in adopting MNE :) Essentially, what we see is that you have to drop the ref channels in order to apply projs. We see that excluding the reference channels is hard-coded at https://github.com/mne-tools/mne-python/blob/master/mne/proj.py#L74. I'm wondering what the right thing to do here is ...

@agramfort
Member

agramfort commented Oct 16, 2018 via email

@dengemann
Member

We can see if we can repeat it on CTF sample data. What we see is that computing projs and applying them is broken if ref channels are present, because the proj computation by default excludes them.
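
A rough reproduction sketch based on that description (the CTF file name is a placeholder, and this is an attempt to trigger the reported problem rather than a confirmed failing script):

    import mne

    raw = mne.io.read_raw_ctf('recording.ds', preload=True)  # hypothetical CTF file
    projs = mne.compute_proj_raw(raw, n_mag=2)  # SSP computation excludes ref channels
    raw.add_proj(projs)
    raw.apply_proj()  # reportedly problematic while ref channels are still present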

larsoner modified the milestones: 0.17, 0.18 Nov 5, 2018
larsoner removed this from the 0.18 milestone May 13, 2019