
ENH: Denoising options for preprocessing #2112

Closed
teonbrooks opened this issue May 18, 2015 · 51 comments

@teonbrooks
Member

It would be nice if we had some denoising options available for preprocessing data. A few options I was thinking of are tsPCA (de Cheveigné, 2007) and CALM (Adachi, 2001); CALM can be treated as a special case of tsPCA.

There's a package on GitHub (https://github.com/pealco/python-meg-denoise) that contains an implementation, as well as the original source code (http://www.isr.umd.edu/Labs/CSSL/simonlab/Denoising.html).

The use case: labs that implement these tools in MATLAB or private software, or that don't have a MaxFilter license but would like cleaner data.
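The static core of CALM (a least-squares regression of the data channels on the reference channels, subtracted from the data) is simple enough to sketch in numpy. This is a toy illustration, not an mne API: CALM proper re-fits the weights in a sliding window, and the function name and synthetic data here are made up.

```python
import numpy as np

def ls_reference_denoise(data, refs):
    """Subtract the least-squares projection of reference-channel
    signals from the data channels (static CALM-like correction).

    data : array, shape (n_channels, n_times)
    refs : array, shape (n_refs, n_times)
    """
    # Solve refs.T @ W ~= data.T for the mixing weights W
    W, *_ = np.linalg.lstsq(refs.T, data.T, rcond=None)
    return data - W.T @ refs

# Toy data: a shared sinusoidal "brain" signal plus noise that is a
# linear mixture of the reference channels
rng = np.random.RandomState(0)
refs = rng.randn(3, 1000)
signal = np.sin(np.linspace(0, 20, 1000))[None] * np.ones((5, 1))
noisy = signal + rng.randn(5, 3) @ refs
clean = ls_reference_denoise(noisy, refs)
```

With zero time shifts this is also what tsPCA reduces to, which is why CALM can be seen as a special case.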

@teonbrooks teonbrooks added this to the 0.10 milestone May 18, 2015
@agramfort
Member

agramfort commented May 18, 2015 via email

@dengemann dengemann modified the milestones: 0.11, 0.10 Sep 3, 2015
@larsoner larsoner modified the milestones: 0.12, 0.11 Nov 25, 2015
@kingjr
Member

kingjr commented Nov 28, 2015

I'll follow up on my question regarding denoising external sources here, because it's likely to generate a long discussion, and perhaps even a PR.

  • TSPCA
    (Time Shift PCA, sorry @choldgraf, it was indeed a spelling mistake)

So thanks @teonlamont for the refs and links. I've tried the git package, but without success: neither for raw nor epoched data. The code is pretty messy, and I keep encountering data shape issues (see my gist here). @teonlamont, have you successfully preprocessed your data with these scripts?

Before re-coding the entire script, I thought of checking how the TSPCA actually cleans my data, using some in-lab wrapper (['of wrapper ' for ii in range(5)]). Here is a gist which condenses the 600 lines of this wrap-wrapper, hopefully without a mistake.

The benefit of this procedure is far from obvious to me. The evoked responses (high versus low tones) are basically the same, and the decoding performance is nearly identical (but it's multivariate, so that could make sense...).
tspca

Similar results were obtained when the components were not fitted with an empty room but with the subjects data.

  • SSS

I've tried to use the SSS implementation of mne-python but got stuck pretty quickly. I am using a KIT system, which is composed of i) axial gradiometers, which are actually treated as magnetometers by mne-python (which makes sense), and ii) 3 reference magnetometers (x, y, z) placed further away from the head. However, the maxwell_filter function picks grads only, and thus fails to find the sensors. Does anyone know whether I could apply a quick hack to circumvent this, and compare the results to the TSPCA?

import os
import mne
from mne.io import read_raw_kit
from mne.preprocessing.maxwell import maxwell_filter

bad_sensors = [20, 40, 64, 112, 115, 152]
fname = os.path.join(data_path, fname_subject)  # paths defined elsewhere
raw = read_raw_kit(fname, preload=True)
events = mne.find_events(raw)
for bad in bad_sensors:
    raw.info['bads'] += ['MEG %.3i' % bad]
raw_sss = maxwell_filter(raw)

=> picking error

I'd be happy to share empty room, single tone and dual tone data from a KIT if that helps.

@agramfort
Member

agramfort commented Nov 28, 2015 via email

@kingjr
Member

kingjr commented Nov 28, 2015

regarding SSS, it has not been tested on non neuromag systems and so far it
is not clear how to handle reference channels.

I meant disregarding the refs and trying to apply the SSS on the axial gradiometers. Would this make sense?

@agramfort
Member

maybe @staulu can comment

@staulu

staulu commented Nov 29, 2015

It makes sense to apply SSS on the axial gradiometers after leaving out the reference sensors. However, if the reference sensor signals are available in the data without the coupling to the axial gradiometers, it would be even better to use all channels in SSS (axial gradiometers + refs). By coupling I mean extrapolation and subtraction of the reference-based interference, which is the way the reference channels are traditionally used. Just to double check: How many axial gradiometers are there in the KIT system?

@kingjr
Member

kingjr commented Nov 29, 2015

Thanks. In our KIT, there are 157 whole-head axial gradiometers and 3 (xyz)
reference magnetometers.


@staulu

staulu commented Dec 1, 2015

Thanks for the info. Since I am not sure how accurately the KIT sensors are calibrated and given the relatively small number of sensors (<200), I would use tSSS, which is less sensitive to calibration errors than the plain SSS, and I would also test the performance with internal expansion order (L_in) values of 6, 7, and 8. Do we know if the raw signals are already compensated for based on the 3 reference magnetometers?

@kingjr
Member

kingjr commented Dec 1, 2015

Do we know if the raw signals are already compensated for based on the 3
reference magnetometers?

My understanding is that no, the raw data is not compensated based on these magnetometers, but I'll check. However, it is compensated online by active shielding using 6 coils placed around the MEG room.

@Eric89GXL is there a tSSS implemented in MNE ?


@larsoner
Member

larsoner commented Dec 1, 2015

@kingjr yes, tSSS is now implemented in mne-python. @staulu I don't know much about these systems, but I do know that the FIF file contains information about different forms of compensation matrices that can be applied to the data. So I suspect we can detect whether or not compensation has been done. But it would be good to get input from people who really know (@kingjr if you are one of those people or can ping someone who is it would be very helpful here!).

@kingjr
Member

kingjr commented Dec 1, 2015

Thanks to our meg engineer Jeffrey Walker, here are some clarifications from the KIT team.

From: ADACHI Yoshiaki adachi@ael.kanazawa-it.ac.jp
Date: Tuesday, December 1, 2015
Subject: Meg kit
To: jeffrey.walker@nyu.edu, oyama@neptune.kanazawa-it.ac.jp

Hi Jeff,

Actually, there is a way to do the online compensation using the signals from the reference magnetometers. It is tentatively called 'direct open loop in-phase component input' (DOLPHIN). The beta version of DOLPHIN was implemented in our magnetospinogram system in Tokyo and tested recently. It works well, and 90% of the low-band noise can be removed according to our test. Please see the attached paper.

It is possible to implement DOLPHIN in MEG systems, but the electronics must support direct feedback from the reference sensors.

The SQUID electronics of the MEG system in Abu Dhabi are relatively new, and DOLPHIN could be implemented even though some modifications would be necessary. However, we would need to replace all the SQUID electronics to implement DOLPHIN in the MEG system in New York, because its electronics do not support direct feedback from the reference magnetometers.

@teonbrooks
Member Author

Cute acronym; they are really trying to keep up the ocean metaphors ;) So from my reading, and from my experience, there is currently no online compensation done to the MEG data in New York.

@kingjr, I wasn't able to get it to work (but I also didn't invest too much time in it). I wonder if it would be easier to go from the MATLAB code to a Python implementation than from the currently defunct meg-denoise repo. The nice thing about tsPCA is that when it's reduced down to no shifts, it is equivalent to the CALM procedure, so it encompasses multiple denoising techniques.

unfortunately, I can only be a cheerleader on this issue 🤘👏

@larsoner
Member

larsoner commented Dec 1, 2015 via email

@kingjr
Member

kingjr commented Dec 2, 2015

I wonder if it would be easier to go from the MATLAB code to a Python implementation than from the currently defunct meg-denoise repo.

Perhaps, but from what I've seen, the Python repo is pretty much a direct translation of de Cheveigné's toolbox. The main issue with the latter is that there are a lot of peripheral functions to normalize columns, demean, etc., that are not systematic (these functions typically check the input size and apply a different function accordingly). So I think the best approach is probably to start from the equations directly.
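Starting from the equations, the core of TSPCA can be written in a few lines: regress the data on time-shifted copies of the reference channels and subtract the fit. A rough numpy sketch under that reading (names are illustrative; np.roll wraps at the edges, which a real implementation would handle with proper padding):

```python
import numpy as np

def tspca_denoise(data, refs, shifts):
    """Regress `data` (n_channels, n_times) on time-shifted copies of
    `refs` (n_refs, n_times) and subtract the fit.

    With shifts=[0] this collapses to the plain least-squares
    (CALM-like) correction.
    """
    # Stack a time-shifted copy of the reference block for each shift
    shifted = np.vstack([np.roll(refs, s, axis=1) for s in shifts])
    W, *_ = np.linalg.lstsq(shifted.T, data.T, rcond=None)
    return data - W.T @ shifted

# Toy example: environmental noise reaches the data channels with a delay
rng = np.random.RandomState(42)
refs = rng.randn(2, 2000)
delayed = np.roll(refs, 3, axis=1)  # noise delayed by 3 samples
signal = np.cos(np.linspace(0, 30, 2000))[None] * np.ones((4, 1))
noisy = signal + rng.randn(4, 2) @ delayed
clean = tspca_denoise(noisy, refs, shifts=range(0, 6))
```

Because the shift range covers the 3-sample delay, the regression can capture and remove the delayed interference.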

From my end, I would first need to know whether there's an actual benefit in applying these denoising techniques, as compared to the tSSS. If there is, I'll implement it.

@kingjr is there a way to tell if DOLPHIN has been performed on the data during acquisition? If it hasn't been, is it then safe to assume data has not been compensated? Thanks for looking into this stuff.

You mean, in the header? I don't have any DOLPHIN data, so it's hard to check, but I'll ask. In my specific case, I know our SQUIDs are not compatible with this online denoising procedure.

@larsoner
Member

larsoner commented Dec 2, 2015

From my end, I would first need to know whether there's an actual benefit in applying these denoising techniques, as compared to the tSSS. If there is, I'll implement it.

It would be great if we could get SSS (and trivially thereafter tSSS) working. We want to release maxwell_filter in a couple of weeks. Do you have time to work on testing it on your data? I'm happy to help fix bugs when they come up.

You mean, in the header? I haven't any DOLPHIN data, so it's hard to check but I'll ask. In my specific case, I know our squids are not compatible with this online denoising procedure.

Yeah, it would be best if we could tell from the header information that some form of online compensation has been applied. That way we know whether or not we can use SSS. It sounds like only one site is using it currently so it might not be a big deal currently, but it would be good to get support for it as early as possible.

@kingjr
Member

kingjr commented Dec 2, 2015

Nothing in the headers, apparently. Here is their reply:

DOLPHIN is a method to reduce the low-band noise before the digital recording, and it has just been tested at one of our research sites in Tokyo. The file header doesn't have the information to tell whether DOLPHIN is applied or not, because DOLPHIN is not 'officially' supported yet. Anyway, both of the MEG systems in New York and Abu Dhabi lack the DOLPHIN function, unfortunately. To implement DOLPHIN in your MEG systems, we need to modify the SQUID electronics; please check my previous e-mail sent to Jeff. The algorithm of DOLPHIN is basically the same as the continuously adjusted least-squares method (CALM), but the window to calculate the weight coefficients is fixed and sufficiently long, say 5 minutes or so. The effect of DOLPHIN was similar to CALM, but we can increase the gain upon digital recording because the low-band noise is removed before amplifying. If you would like to see some 'DOLPHINed' data, I will try to send them to you. Best, Yoshiaki Adachi

I'm stuck at the SSS transform; the data format is not easily compatible with the current script. I'll try to hack it this weekend and report what I get.


@larsoner
Member

larsoner commented Dec 4, 2015

Can you reproduce with any of the files in the testing dataset?

@larsoner
Member

larsoner commented Dec 4, 2015

That's what we'll probably want to use to make tests eventually. It is hopefully something minor with the picking functions.

@larsoner
Member

@kingjr any way you could have a look by Monday or Tuesday and hopefully reproduce with test data, or otherwise share a failing file? That way I can have a look. It would be nice to work out whatever bug before releasing 0.11, which we hope to do at the end of next week.

@kingjr
Member

kingjr commented Dec 14, 2015

OK, today will be a bit difficult for me, but I'll try to give it a go tomorrow.


@kingjr
Member

kingjr commented Dec 15, 2015

@Eric89GXL I had to make several changes to prevent maxwell_filter from crashing.

  • @teonlamont In read_raw_kit, we need to specify the mrk, hsp and elp to be able to use the SSS. My hsp and elp files are apparently incompatible with the current script. I had to comment out the initial lines so as to leave only the data arrays (remove headers and treat '//' as a comment).

minor: I think the mrk, hsp and elp triplets are not very explicit; I would push for improving the docs on this aspect

  • In maxwell_filter I had to manually enable the ignore_ref and set elekta_def to False, else it was crashing. See kingjr@ee5ec68

I'm not entirely sure whether i) my parameters are set in the optimal place and ii) whether they actually are valid.

  • I'm not really convinced by what I'm getting. Here is the ERF of a tone without the SSS; we expect the N100 generated from A1, so roughly two symmetrical dipoles on each side of the frontal channels around 100 ms after stimulus onset, which is roughly what we get before SSS:

figure_1

However, here is what we obtain after SSS:

figure_2

It looks a bit crazy to me, no?

  • @Eric89GXL Is there an available KIT file so that I can make a test script? Otherwise, here's an example of how I used it with my own brain response.
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
from mne.io import read_raw_kit
from mne.preprocessing.maxwell import maxwell_filter
from mne import Epochs, find_events

data_path = '/media/DATA/Pro/Projects/NewYork/audio_wm/data/'
subject_dir = 'R1022_Audio_Sequence_11.25.15'
# file with coil positions in MEG reference frame.
mrk = op.join(data_path, subject_dir, 'R1022_Marker7_11.25.15.sqd')  # HPI
# file with head shape. Need to comment out every non-3D array
# (especially `'//'`)
hsp = op.join(data_path, subject_dir, 'R1022_11.25.15_test.hsp')
# file with coil position in head-reference. Need to comment out every non-3D
# array (especially `'//'`)
elp = op.join(data_path, subject_dir, 'R1022_11.25.15_test.elp')
# MEG recordings
fname = op.join(data_path, subject_dir, 'R1022_2Tones_11.25.15.sqd')

# Read data
raw = read_raw_kit(fname, mrk=mrk, elp=elp, hsp=hsp, preload=True)
# Manually indicate bad sensors
bad_sensors = [20, 40, 64, 112, 115, 152]
for bad in bad_sensors:
    raw.info['bads'] += ['MEG %.3i' % bad]
# SSS transform
raw_sss = maxwell_filter(raw, acquisition_system='kit')

# Test topography
events = find_events(raw_sss)
for this_raw in [raw, raw_sss]:
    this_raw.filter(.7, 30)
    epochs = Epochs(this_raw, events=events, event_id=dict(high=1, low=2),
                    tmin=0, tmax=.300, preload=True, baseline=None)
    evoked = epochs.average()
    evoked.data = np.median(epochs._data, axis=0)
    evoked.plot_topomap(contours=False, show=False)
plt.show()

@larsoner
Member

Things are never easy, are they? :) I agree those dipolar patterns are not so good in the Maxwell filtering case.

@staulu you were thinking we should include reference sensors in SSS decomposition, right?

You shouldn't need to pass the acquisition system -- we can improve the SSS code to try using Elekta defs, and fall back to mne defs if they're not available. That should be easy enough to implement.

I can give it a shot. I'll make some basic tests from these files:

https://github.com/mne-tools/mne-python/tree/master/mne/io/kit/tests/data

We can't really validate the quality of the output using those files, but we can at least make some tests that ensure the algorithm runs on those data. I think we'll need to get your dataset working better to start validating the algorithm. Have you tried tSSS at all?

@larsoner
Member

Also, how many components were kept due to regularization (you can see it with verbose=True)? It's possible that the regularization is too harsh / not tuned for other systems.

@teonbrooks
Member Author

@kingjr the elp and hsp should not be *.elp and *.hsp in the reader. They should be the direct export from FastScan as text files (sorry for the confusing nomenclature). We don't currently support the legacy format (see #1001, although that PR can be resurrected if you want; it was getting slight numerical accuracy problems).
If you use those files, you're going to apply an improper head rotation, since *.elp and *.hsp have a different transformation than expected.

@larsoner
Member

@kingjr looks like when I run the test file I get only 30 components kept due to regularization (!). Try using regularize=None and hopefully it'll look better for you. I'll update the maxwell_filter documentation.

@kingjr
Member

kingjr commented Dec 15, 2015

Have you tried tSSS at all?

No, but I have only one run in this case, so I guess it won't change much, or will it? Can you point me to the function that runs tSSS?

how many components were kept due to regularization

        Resulting information: 228.1 bits/sample (98.4% of peak 231.8)
        Using 46/80 inside and 15/15 outside harmonic components

@kingjr the elp and hsp should not be *.elp and *.hsp in the reader.

Ah ok. Well, that changes everything, except that it's still crappy :)

figure_2

Try using regularize=None and hopefully it'll look better for you.

This is what I get with no regularization with the correct hsp and elp files. Still doesn't look correct

figure_2

Not sure what to do from here, but thanks a lot for your help.

@larsoner
Member

Hmm... I wonder if there is still some problem with the transforms. Have you successfully computed a forward/inverse for this dataset? The forward code and Maxwell filtering code use the same set of transforms, so if the forward works, it suggests the transforms are not the problem.

@kingjr
Member

kingjr commented Dec 15, 2015

Hmm... I wonder if there is still some problem with the transforms. Have you successfully computed a forward/inverse for this dataset? The forward code and Maxwell filtering code use the same set of transforms, so if the forward works, it suggests the transforms are not the problem.

No, I only have pilot data for now. I have an old MRI of myself, so I could try to segment it and all, but this will take a while. Is there a quicker way of checking this?

@larsoner
Member

Not that I can think of offhand :( I am about to open a PR that contains some minor fixes that might make it work better for you, though.

@staulu

staulu commented Dec 15, 2015

Just a couple of thoughts: 1) Would it be possible to check the condition number of the normalized SSS basis before transforming the data from sensors to multipole moments? Also, 2) it just occurred to me that since this system only has gradiometers (when references are omitted), the first three external basis vectors will be zero, or some random vectors due to numerical inaccuracies. This is because those vectors correspond to homogeneous fields, which ideal gradiometers are insensitive to. So, the three homogeneous vectors should be excluded from the model, otherwise the SSS matrix may be quite ill-conditioned.

It would be beneficial to include the reference sensors in the SSS model, but I don't think it would fix the reported problem. Maybe the above points help us find the cause of the problem.

@larsoner
Member

  1. Would it be possible to check the condition number of the normalized SSS basis before transforming the data from sensors to multipole moments?

It would be straightforward to add this. Are you thinking we should throw a warning if it's too large (e.g., > 1000)? We could also throw an error I suppose, to be really safe. Actually condition_check='error' | 'warning' | 'ignore' option might be best. The question would be whether error or warning is the better default.

  2. It just occurred to me that since this system only has gradiometers (when references are omitted), the first three external basis vectors will be zero, or some random vectors due to numerical inaccuracies. This is because those vectors correspond to homogeneous fields, which ideal gradiometers are insensitive to. So, the three homogeneous vectors should be excluded from the model, otherwise the SSS matrix may be quite ill-conditioned.

Do you think the safest approach is to numerically check for zero-ness (e.g., check if norm is greater than some reasonable default), or to infer it based on a n_magnetometer > 0 check? Either one is also straightforward to add.
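For what it's worth, such a check could be sketched like this (the function name, default limit, and option semantics follow the discussion above; this is not actual mne code):

```python
import warnings
import numpy as np

def check_sss_condition(basis, limit=1000., on_bad='error'):
    """Check the condition number of a normalized SSS basis.

    on_bad : 'error' | 'warning' | 'ignore'
    """
    cond = np.linalg.cond(basis)
    if cond >= limit:
        msg = 'Matrix is badly conditioned: %0.0f >= %0.0f' % (cond, limit)
        if on_bad == 'error':
            raise RuntimeError(msg)
        elif on_bad == 'warning':
            warnings.warn(msg)
    return cond

cond_ok = check_sss_condition(np.eye(5))  # identity basis: condition number 1
```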

@kingjr
Member

kingjr commented Dec 15, 2015

Any chance one of you could guide me in the maxwell_filter script to test these two points? I've tried to manually set the first three external vectors to 0, but it generates errors, and it's clear that it will be a while before I master the code.

@Eric89GXL your PR works fine with my KIT data (i.e., it doesn't crash but produces a similarly weird result)


@larsoner
Member

@Eric89GXL your PR works fine with my KIT data (i.e., it doesn't crash but produces a similarly weird result)

Rats.

Any chance one of you could guide me in the maxwell_filter script to test
these two points?

I'll do better and just implement the changes. Give me a few minutes (hope I can get to it).

@larsoner
Member

@staulu added a check (let me know if 1000. is a reasonable value or if something else would be better):

RuntimeError: Matrix is badly conditioned: 768670824 >= 1000

So yes, it looks like the condition is bad. Now to fix it as recommended...

@staulu

staulu commented Dec 15, 2015

Yes, I think 1000 is a reasonable limit, provided it's based on the basis after dropping the poorly detectable columns.

As for the zero vectors, it's best to check if the system consists of ideal gradiometers only. Note that if we have information about gradiometer imbalance, modeled as point-like magnetometer contribution, then the homogeneous components should be included even for a system with gradiometers only.
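The two variants discussed here (the sensor-type check and the numeric norm check) can be sketched as follows; names and tolerances are illustrative, not the actual maxwell_filter internals:

```python
import numpy as np

def drop_homogeneous_terms(S_out, n_mags):
    """Sensor-type check: for an array of ideal gradiometers only
    (n_mags == 0), the first three external basis vectors correspond
    to homogeneous fields and vanish, so drop them."""
    return S_out[:, 3:] if n_mags == 0 else S_out

def drop_zero_columns(S, rel_tol=1e-10):
    """Numeric check: drop basis columns whose norm is negligible
    relative to the largest column."""
    norms = np.linalg.norm(S, axis=0)
    return S[:, norms > rel_tol * norms.max()]
```

Note the caveat above: with gradiometer imbalance modeled as point-like magnetometer contributions, the homogeneous terms should be kept even for an all-gradiometer array.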

@larsoner
Member

@staulu okay, I'll go ahead and implement it that way. I implemented it the norm-comparison way first because it was simpler code-wise, and tests work for it. Swapping in and checking the grad-check version should be doable given the tests that exist now.

BTW, the condition is often more than 1000 if no regularization is performed.

@larsoner
Member

Using the coil type worked for BTi and CTF, but not KIT; I think it's a bug in our channel-picking functions.

@kingjr
Member

kingjr commented Dec 16, 2015

OK, this has made some changes: it's getting closer?

figure_2

@kingjr
Member

kingjr commented Dec 16, 2015

Ah no, it's just with the regularization that it does this. Otherwise, it's like before.

@kingjr
Member

kingjr commented Jan 21, 2016

I have compared several denoising options

  • for noisy KIT data: acquired at NYC with 157 sensors, n=2 ('jr', 'sg', in the report)
  • @teonlamont 's Abu Dhabi clean KIT data: more channels, cleaner environment, n=3, 'teon*'

All subjects performed a passive two-tones protocol: ~ 3 minutes of pitch presentation, no task. We expect two lateral dipoles generated from the auditory cortices.

The full report is here: https://www.dropbox.com/s/2z2i6emtbb25htb/report.html?dl=0 [EDIT: wrong link].

  • LS indicates a least-squares correction fitted from the ref channels; the Python code is available here. ('1' = applied, '0' = not applied). [Bad channels were removed for jr and sg, but not for teon]
  • SS, for signal space, is either none (no correction), ssp for signal-space projection, or sss for the Maxwell filter. [No bad-channel exclusion, otherwise the KIT layout breaks]

In brief,

  • the Abu Dhabi KIT data is already clean, and the additional methods didn't change anything. I suspect they actually apply a reference projection at acquisition time.
  • the least-squares fit on ref channels seems to be more efficient than the other methods. It reduces the variance before the auditory ERF.
    e.g. : no correction
    jr_ls_0-ssp_none
    ls correction:
    jr_ls_1-ssp_none
  • LS also reduced the variance after the early auditory ERFs, but I think the topography of what it removed looks artefactual, so it should be fine.
    e.g. without ls correction
    sg_ls_0-ssp_none
    e.g. with ls correction
    sg_ls_1-ssp_none
  • Unsurprisingly, LS is very sensitive to bad channels. It is thus easy to introduce crazy components if we do not reject bad trials:
    before
    teon3_ls_0-ssp_none
    after
    teon3_ls_1-ssp_none

Consequently, I am wondering whether we could have a regularized version to automatically discard bad chans? But this may break the linearity, right?

  • Finally, the SSS and SSP didn't change much at all. I have also tested TSPCA on one subject (I didn't manage to translate it to Python, too long), and didn't see much difference either.
  • I haven't tried to fit these corrections from an empty room.

I'm now onto checking the fwd and inv models to see whether we get what we expect; that would provide an additional check that the maxwell_filter output is correct.

@jasmainak
Member

sorry to go off on a tangent, but I like that code snippet thing in mne-report. Any chance you could share that code?

@kingjr
Member

kingjr commented Jan 21, 2016

@jasmainak You'll find it very hackish; I made a class to simplify the MNE report, which detects whether the caller is IPython or an actual script and extracts its code. The class is also used to avoid initializing the save dir, and allows uploading to Dropbox; feel free to incorporate some of the concepts into mne reports.

@dengemann
Member

Off-topic: looks like there are some potentially interesting pieces for
meeg-preprocessing in this report class.


@dengemann
Member

@kingjr am I missing something or are you only showing SSP vs LS?

@jona-sassenhagen
Contributor

evoked.joint_plot is pretty nice isn't it? :)

@kingjr
Member

kingjr commented Jan 21, 2016

@dengemann > Sorry, I had updated the wrong report; see https://www.dropbox.com/s/2z2i6emtbb25htb/report.html?dl=0. SSS didn't change anything, but perhaps I should tweak some params? I'm happy to put the data online.

Off-topic: looks like there are some potentially interesting pieces for meeg-preprocessing in this report class.

Feel free to copy, but I think you'll find it too hackish.

@jona-sassenhagen yep, and it actually helps a lot :]

@kingjr
Member

kingjr commented Jan 21, 2016

@Eric89GXL The forward and inverse models seem roughly OK on me (i.e. sound activates A1):
auditory

@larsoner
Member

Finally, the SSS and SSP didn't change much at all.

This is surprising to me. By eye it looks like SSS did nothing to the data. Is it easy to compute and report what raw.estimate_rank() says for the original raw data, SSP'ed data, SSS'ed data, and SSP + SSS'ed data?

Have you tried tSSS (e.g., st_duration=10. or st_duration=60.)?
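For the rank comparison, the numerical rank can be estimated by counting singular values of the data matrix above a relative threshold; a rough numpy stand-in for such an estimate (the tolerance and toy data are illustrative):

```python
import numpy as np

def estimate_rank(data, rel_tol=1e-4):
    """Count singular values above rel_tol * largest; SSS-processed
    data should show a much lower rank than the raw data."""
    s = np.linalg.svd(data, compute_uv=False)
    return int((s > rel_tol * s[0]).sum())

rng = np.random.RandomState(0)
raw_like = rng.randn(157, 5000)                      # full-rank toy "raw"
sss_like = rng.randn(157, 74) @ rng.randn(74, 5000)  # rank-reduced toy "SSS"
```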

@kingjr
Member

kingjr commented Jan 22, 2016

Here are the results. Note that the rank is computed on the raw data, and that the SSP is computed afterwards.

My conclusions:

Least square projection

LS should be applied before data filtering, otherwise it introduces strong oscillatory artefacts.

filter => LS
[figure: 12 -filtered- ls, rank 150]

LS => filter
[figure: 13 ls rank150 -filtered-]

(t)SSS

  • doing filtering before or after sss doesn't change anything
  • I'm not sure whether filtering should be done before or after tSSS (st_duration=10.), but they clearly lead to different results.

raw:
[figure: 0 -filtered-, rank 150]

filter > tsss : the ERF components are stable (no wiggling around baseline)
[figure: 4 -filtered- tsss, rank 74]

tsss > filter : the ERF components are oscillatory
[figure: 5 tsss rank74 -filtered-]

SSP

The way I used SSP didn't change anything at all, but I must say I don't understand this analysis very well yet; I need to do my homework.

epochs = ssp.Epochs(raw, events, tmin=-.100, tmax=.700, baseline=None,
                    preload=True, event_id=event_id)

Best combination?

IMO the best is LS => tsss => filter => epochs (no ssp): LS reduces the variance before sound onset. SSS seems to clean the data (high SNR) but completely changes the dipole locations. tSSS seems to reduce the signal power, but at least it keeps the correct topography.

LS alone
[figure: 13 ls rank150 -filtered-]

SSS alone
[figure: 3 sss rank74 -filtered-]

LS + SSS
[figure: 15 ls sss rank74 -filtered-]

LS + tSSS
[figure: 17 ls tsss rank74 -filtered-]

@kingjr
Member

kingjr commented Jan 22, 2016

@larsoner
Member

@teonlamont I'm going to move this to mne-sandbox


8 participants