
Unable to finish running PyPREP #28

Closed
barlehmann opened this issue Jun 5, 2020 · 33 comments · Fixed by #32


@barlehmann

barlehmann commented Jun 5, 2020

Whenever I try to run PyPREP I get the following output, which seems promising; however, it stalls there and continues to act as if it is running, with no further output. Any assistance would be greatly appreciated. (I also tried running this on both the present master version, v20, and the development version, and the same result occurs with each.)

EDIT by @sappelhoff : 2020-06-06 --> I added backticks (```) to format the following as a code block. Please see the guidelines here @barlehmann

Extracting EDF parameters from /content/drive/My Drive/Colab Notebooks/Jacob5MeO2min.edf...
EDF file detected
Setting channel info structure...
Creating raw.info structure...
Reading 0 ... 200499  =      0.000 ...   801.996 secs...
<Info | 7 non-empty values
 bads: []
 ch_names: fp1, fp2, f3, f4, c3, c4, p3, p4, o1, o2, f7, f8, t3, t4, t5, ...
 chs: 19 EEG
 custom_ref_applied: False
 highpass: 0.0 Hz
 lowpass: 128.0 Hz
 meas_date: unspecified
 nchan: 19
 projs: []
 sfreq: 256.0 Hz
>
[<DigPoint |        LPA : (-82.5, -0.0, 0.0) mm     : head frame>, <DigPoint |     Nasion : (0.0, 102.7, 0.0) mm      : head frame>, <DigPoint |        RPA : (82.2, 0.0, 0.0) mm       : head frame>, <DigPoint |     EEG #1 : (-28.2, 102.3, 31.7) mm   : head frame>, <DigPoint |     EEG #3 : (28.6, 103.2, 31.6) mm    : head frame>, <DigPoint |    EEG #16 : (-67.4, 62.3, 30.5) mm    : head frame>, <DigPoint |    EEG #18 : (-48.2, 76.3, 80.9) mm    : head frame>, <DigPoint |    EEG #20 : (0.3, 83.3, 103.8) mm     : head frame>, <DigPoint |    EEG #22 : (49.7, 77.4, 79.6) mm     : head frame>, <DigPoint |    EEG #24 : (70.0, 64.2, 29.8) mm     : head frame>, <DigPoint |    EEG #40 : (-62.7, 16.1, 106.8) mm   : head frame>, <DigPoint |    EEG #42 : (0.4, 21.0, 140.9) mm     : head frame>, <DigPoint |    EEG #44 : (64.3, 16.7, 106.0) mm    : head frame>, <DigPoint |    EEG #62 : (-50.8, -48.7, 103.6) mm  : head frame>, <DigPoint |    EEG #64 : (0.3, -49.0, 129.2) mm    : head frame>, <DigPoint |    EEG #66 : (53.3, -48.5, 104.2) mm   : head frame>, <DigPoint |    EEG #81 : (-28.2, -84.3, 61.0) mm   : head frame>, <DigPoint |    EEG #83 : (28.6, -84.0, 60.9) mm    : head frame>, <DigPoint |    EEG #87 : (-80.7, 6.6, 36.6) mm     : head frame>, <DigPoint |    EEG #88 : (-69.4, -47.8, 47.3) mm   : head frame>, <DigPoint |    EEG #89 : (81.5, 7.5, 36.5) mm      : head frame>, <DigPoint |    EEG #90 : (70.0, -47.4, 47.3) mm    : head frame>]
[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18]
['fp1', 'fp2', 'f3', 'f4', 'c3', 'c4', 'p3', 'p4', 'o1', 'o2', 'f7', 'f8', 't3', 't4', 't5', 't6', 'fz', 'cz', 'pz']
Setting up high-pass filter at 1 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal highpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 1.00
- Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz)
- Filter length: 845 samples (3.301 sec)

<ipython-input-15-246072ead167>:34: DeprecationWarning: Using ``raise_if_subset`` to ``set_montage``  is deprecated and ``set_dig`` will be  removed in 0.21
  raw.set_montage(montage, match_case=False, raise_if_subset=False)
Setting up high-pass filter at 1 Hz

FIR filter parameters
---------------------
Designing a one-pass, zero-phase, non-causal highpass filter:
- Windowed time-domain design (firwin) method
- Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
- Lower passband edge: 1.00
- Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz)
- Filter length: 845 samples (3.301 sec)
@yjmantilla
Collaborator

yjmantilla commented Jun 6, 2020

That's weird. I'm taking a look at it.

  • Does it show any errors?
  • How much time have you let it run without it apparently doing anything?

It's weird because 14 minutes of 19 channels at 256 Hz shouldn't be that much of a computational stress. Specifically, at that point it should be doing the robust referencing. That process can take a lot of time because of the RANSAC criteria.

In my case I've got 5 minutes of 58 channels at 1000 Hz. That does take a long time to run for a single EEG (between 30 minutes and 1 hour). In your case it shouldn't take that long; let it run for at most 1 hour. If it does not finish a single EEG recording in that time, then something weird is going on. By my estimate, yours shouldn't take more than 15 minutes.

Do note that even though there is no output in the console, that does not mean it isn't doing anything. Maybe we should add some kind of progress output to the code that tells the user which stage of the process is currently running.

@barlehmann
Author

barlehmann commented Jun 6, 2020 via email

@yjmantilla
Collaborator

@barlehmann could you post the code you are using, or at least the parameters you are using to call pyprep?

@yjmantilla
Collaborator

yjmantilla commented Jun 7, 2020

@sappelhoff I'm suspecting the problem is actually the multitaper filter. It takes too long. I have checked, and those two filter outputs are from the noisy detector and the removeTrend functions at the start of the prep.fit function. So my guess is that it is getting stuck at the multitaper (it does end at some point, but only after a very long time).

I think we will need to include a way to skip that filter. In my personal implementation of pyprep, if prep_params["line_freqs"] is [] then the multitaper filter is skipped. Doing that in the current master version will just raise an error. Maybe we could also include the option to do a normal notch filter (even though it defeats the idea of the filter-agnostic treatment of the original PREP).

@barlehmann If the problem is indeed what I'm describing, then there is an easy way to solve this, if you don't mind using a normal notch filter instead of the multitaper method the original PREP uses.

@sappelhoff
Owner

I'm suspecting the problem is actually the multitaper filter.

indeed ... we've had several points now that showed that not all is well with that filter. +1 for a PR to include a way to skip this filter.

... that'd provide a short-term workaround. In the long term, we need to prepare a proper (working) implementation of that filter ... or fix it in MNE.

@yjmantilla
Collaborator

yjmantilla commented Jun 7, 2020

@sappelhoff I did the pull request with just the skipping.

For now, when the multitaper is skipped, filtering out the line frequencies is left to the user and should be done before calling pyprep.

For the proper solution, I know larsoner proposed something in #18, but I haven't checked it yet.

I also finally learned the deal with pre-commit; it's pretty sweet :) It was not trivial to make it work with Windows, Anaconda, and Git Bash, but I managed.

For reference on that:

I had to add the following line to the .bash_profile file in my user home:
. "C:\Users\user\Anaconda3\etc\profile.d\conda.sh"

With that, Git Bash for Windows was able to detect the Anaconda Python and run the pre-commit hooks. (One should activate the Anaconda environment in the bash before doing the commit.)

@sappelhoff
Owner

Cool, glad to hear you managed to install it ... I didn't expect it to be so tricky on Windows 🤔

If you want to, feel free to make an entry about the steps in the Wiki using any format you like. --> That way we can point new contributors to that page instead of having to explain the process again and again.

@barlehmann
Author

barlehmann commented Jun 7, 2020 via email

@barlehmann
Author

barlehmann commented Jun 7, 2020 via email

@SebastianSpeer

Hi,

I've run into a similar problem. However, I've got quite a large dataset: ~40 min, 64 channels, at 512Hz. I was suspecting that memory issues might be a problem. Is there a way to run certain processes in parallel or otherwise reduce the memory demands without downsampling? Or might this be the same filtering issue discussed above?

@sappelhoff
Owner

yes, it may very well be related to the filter. Can you try to skip it by using @yjmantilla's patch in #29?

You'd have to download the development version of pyprep and make the changes in your local code ... at least until we have finalized and merged the patch.

@yjmantilla
Collaborator

@SebastianSpeer The issue can arise from either the multitaper or the current version of the RANSAC. Both consume a lot of memory. Given the size of your data, it is not strange for memory issues to arise (40 minutes of 64 channels at 512 Hz is a lot of data).

Currently I don't know how to run the multitaper filters in parallel, since I don't know how MNE does them internally. I would say it is possible, since one could filter each channel separately, e.g. run the filter in batches of N channels.

Regarding pyprep, I did manage to lower the memory requirements of the RANSAC in #24, but I have not been able to finish that pull request for lack of time. The patch already works but is untested; you could look into that. It needs a bit of a workaround, since that branch was made before pyprep 0.3, but fundamentally it just changes two functions of find_noisy_channels.py: run_ransac and find_bad_by_ransac.

@sappelhoff What wiki are you referring to? The CONTRIBUTING.md? I used your link but it just redirects to the root of the project.

@barlehmann Yes, the multitaper is the default in pyprep. The notch filter function is this one: https://mne.tools/stable/generated/mne.filter.notch_filter.html?highlight=notch%20filter

For examples of use, check the power-line section of https://mne.tools/stable/auto_tutorials/preprocessing/plot_30_filtering_resampling.html#power-line-noise

In the example, the filter should be applied to the object entering the PREP pipeline, so you would apply it to your raw_copy object, something like this:
raw_copy.notch_filter(freqs=freqs)

Your frequencies come from your power line, usually either 60 Hz or 50 Hz, plus their harmonics, so bear that in mind. So:
freqs=(60,120,180,240)
or
freqs=(50,100,150,200)

You would need to do this before applying the pyprep pipeline, assuming you applied the patch to skip the multitaper filter.
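As a side note for readers following along, the harmonic list above can also be generated from the sampling rate instead of being typed out. The sketch below shows this with values taken from this thread (60 Hz mains, 256 Hz sampling rate); the trailing MNE call is commented out because it needs a loaded Raw object.

```python
import numpy as np

# Compute the power-line harmonics that fit below the Nyquist frequency.
# Values assumed from this thread: 60 Hz mains, 256 Hz sampling rate.
sample_rate = 256.0
line_freq = 60.0
freqs = np.arange(line_freq, sample_rate / 2.0, line_freq)
print(freqs.tolist())  # [60.0, 120.0] -- 180 Hz exceeds the 128 Hz Nyquist

# With an MNE Raw object loaded as raw_copy, the filter call would then be:
# raw_copy.notch_filter(freqs=freqs)
```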

@sappelhoff
Owner

What wiki are you referring to? The CONTRIBUTING.md? I used your link but it just redirects to the root of the project

sorry, there was a setting that prevented contributors from contributing 😆 --> should work now: https://github.com/sappelhoff/pyprep/wiki

@SebastianSpeer

@yjmantilla I've already disabled RANSAC in the PREP pipeline and decided to run it on the epoched data to reduce memory demands (not sure if this is a good idea?). So I suspect this would be the issue. Do I understand correctly that, to fix the issue, only the notch filter would need to be removed from the pipeline?

Would it also be possible to reduce memory demands by loading the raw data with memory mapping, for example with preload='./tempfile' in
read_raw_fif: https://mne.tools/stable/generated/mne.io.read_raw_fif.html?

@yjmantilla
Collaborator

@SebastianSpeer As far as I know, removing the notch will correct the memory issue, given that you also don't have the RANSAC.

As for the second option, I cannot comment because I have never done it before. I think it could be possible, but one would need to see exactly how the data is used by the multitaper while it is running.

@barlehmann Were you able to solve the problem?

@barlehmann
Author

@yjmantilla thank you for asking. I have not been able to figure it out yet; any other suggestions would be highly appreciated. I tried using the code/patch you suggested:

freqs = (60, 120)
raw_copy.notch_filter(freqs=freqs)

And perhaps I do get slightly further than before, but I still get the same timing-out type of problem that was happening before. Below are the results (though the process never stops running, as mentioned before):

mne: 0.20.7
<Info | 7 non-empty values
bads: []
ch_names: fp1, fp2, f3, f4, c3, c4, p3, p4, o1, o2, f7, f8, t3, t4, t5, ...
chs: 19 EEG
custom_ref_applied: False
highpass: 0.0 Hz
lowpass: 128.0 Hz
meas_date: unspecified
nchan: 19
projs: []
sfreq: 256.0 Hz

[<DigPoint | LPA : (-82.5, -0.0, 0.0) mm : head frame>, <DigPoint | Nasion : (0.0, 102.7, 0.0) mm : head frame>, <DigPoint | RPA : (82.2, 0.0, 0.0) mm : head frame>, <DigPoint | EEG #1 : (-28.2, 102.3, 31.7) mm : head frame>, <DigPoint | EEG #3 : (28.6, 103.2, 31.6) mm : head frame>, <DigPoint | EEG #16 : (-67.4, 62.3, 30.5) mm : head frame>, <DigPoint | EEG #18 : (-48.2, 76.3, 80.9) mm : head frame>, <DigPoint | EEG #20 : (0.3, 83.3, 103.8) mm : head frame>, <DigPoint | EEG #22 : (49.7, 77.4, 79.6) mm : head frame>, <DigPoint | EEG #24 : (70.0, 64.2, 29.8) mm : head frame>, <DigPoint | EEG #40 : (-62.7, 16.1, 106.8) mm : head frame>, <DigPoint | EEG #42 : (0.4, 21.0, 140.9) mm : head frame>, <DigPoint | EEG #44 : (64.3, 16.7, 106.0) mm : head frame>, <DigPoint | EEG #62 : (-50.8, -48.7, 103.6) mm : head frame>, <DigPoint | EEG #64 : (0.3, -49.0, 129.2) mm : head frame>, <DigPoint | EEG #66 : (53.3, -48.5, 104.2) mm : head frame>, <DigPoint | EEG #81 : (-28.2, -84.3, 61.0) mm : head frame>, <DigPoint | EEG #83 : (28.6, -84.0, 60.9) mm : head frame>, <DigPoint | EEG #87 : (-80.7, 6.6, 36.6) mm : head frame>, <DigPoint | EEG #88 : (-69.4, -47.8, 47.3) mm : head frame>, <DigPoint | EEG #89 : (81.5, 7.5, 36.5) mm : head frame>, <DigPoint | EEG #90 : (70.0, -47.4, 47.3) mm : head frame>]
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18]
['fp1', 'fp2', 'f3', 'f4', 'c3', 'c4', 'p3', 'p4', 'o1', 'o2', 'f7', 'f8', 't3', 't4', 't5', 't6', 'fz', 'cz', 'pz']
Setting up band-stop filter

FIR filter parameters

Designing a one-pass, zero-phase, non-causal bandstop filter:

  • Windowed time-domain design (firwin) method
  • Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
  • Lower transition bandwidth: 0.50 Hz
  • Upper transition bandwidth: 0.50 Hz
  • Filter length: 1691 samples (6.605 sec)

:25: DeprecationWarning: Using raise_if_subset to set_montage is deprecated and set_dig will be removed in 0.21
raw.set_montage(montage, match_case=False, raise_if_subset=False)
Setting up high-pass filter at 1 Hz

FIR filter parameters

Designing a one-pass, zero-phase, non-causal highpass filter:

  • Windowed time-domain design (firwin) method
  • Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
  • Lower passband edge: 1.00
  • Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz)
  • Filter length: 845 samples (3.301 sec)

Setting up high-pass filter at 1 Hz

FIR filter parameters

Designing a one-pass, zero-phase, non-causal highpass filter:

  • Windowed time-domain design (firwin) method
  • Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation
  • Lower passband edge: 1.00
  • Lower transition bandwidth: 1.00 Hz (-6 dB cutoff frequency: 0.50 Hz)
  • Filter length: 845 samples (3.301 sec)

@yjmantilla
Collaborator

yjmantilla commented Jun 16, 2020

@barlehmann could you post the exact, complete code you are using? From what I can see, it seems possible that you are running a normal notch filter outside PREP, and then inside PREP the multitaper is still executing. If it isn't that, then it may be getting stuck at the RANSAC.

In any case, on Saturday I will have more free time if you want to do a Google Meet or something to solve this.

@yjmantilla
Collaborator

@barlehmann Indeed, I just checked the code, and the multitaper is probably still running.

You correctly use a notch filter before the PREP, but the mistake is here:

prep_params = {'ref_chs': ch_names_eeg,
               'reref_chs': ch_names_eeg,
               'line_freqs': np.arange(60, sample_rate/2, 60)}

Set line_freqs to an empty list, that is:

prep_params = {'ref_chs': ch_names_eeg,
               'reref_chs': ch_names_eeg,
               'line_freqs': []}

Assuming you have the patch I did, it should then skip the filter and enter the robust referencing stage of the PREP. If you don't have the patch, it will probably throw an error.
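For readers following along, the skip behavior the patch introduces can be sketched in plain Python. This is an illustrative sketch only, not the actual pyprep source; the function and argument names here are made up.

```python
# Illustrative sketch of the patch's behavior (not the actual pyprep code):
# when line_freqs is empty, the line-noise stage is skipped entirely and the
# data passes through untouched.
def apply_line_noise_removal(data, line_freqs, notch_fn):
    if len(line_freqs) == 0:
        return data  # patch behavior: skip the multitaper filter
    return notch_fn(data, line_freqs)

# With an empty list, the (dummy) notch function is never called:
cleaned = apply_line_noise_removal([1.0, 2.0, 3.0], [], notch_fn=None)
print(cleaned)  # [1.0, 2.0, 3.0]
```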

@sappelhoff
Owner

Assuming you have the patch

The patch has been merged, so it can be obtained by just installing the "development" version of pyprep as per the instructions

@barlehmann
Author

@yjmantilla Thank you so much; I actually got it to work right before you sent your last message, but I really appreciate your help through this. To give you more specific information on what changed to make it work: I had been having trouble with MNE reading the channel names from my .edf file, so I had to spell them out explicitly in my code:

ch_names = ['fp1', 'fp2', 'f3', 'f4', 'c3', 'c4', 'p3', 'p4', 'o1', 'o2',
            'f7', 'f8', 't3', 't4', 't5', 't6', 'fz', 'cz', 'pz']
ch_types = ['eeg'] * 19
# ch_names = ['FP1', 'FP2', 'F3', 'F4', 'C3', 'C4', 'P3', 'P4', 'O1', 'O2',
#             'F7', 'F8', 'T3', 'T4', 'T5', 'T6', 'FZ', 'CZ', 'PZ', 'A2-A1']
info = mne.create_info(ch_names=ch_names, sfreq=256, ch_types=ch_types)
raw.info = info

However, this code was in some way (I am not aware exactly how) causing or contributing to the non-finishing pyprep. When I replaced the above code that renames my channels with the following line of code:

raw.rename_channels(lambda s: s.split(' ')[1].split('-')[0])

The problem disappeared!
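To illustrate what that one-liner does: it takes an EDF channel label, splits off the leading type prefix, and drops the reference suffix after the dash, leaving the bare electrode name. The example labels below are assumptions; the actual labels in the file may differ.

```python
# The rename function from above: split off the "EEG " prefix, then drop the
# reference suffix after the dash. (Example labels are assumptions.)
rename = lambda s: s.split(' ')[1].split('-')[0]

print(rename('EEG fp1-REF'))  # fp1
print(rename('EEG t5-A1'))    # t5
```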

Also thank you @sappelhoff for your information on the patch. I greatly appreciate your support through this.

Lastly, I understand that PyPrep does some pre-processing, but does not deal with eye blink artifacts, or other user-generated muscle or motion artifacts. I just wanted to ask you guys if there is any additional automated pipeline with MNE that you recommend for removing such user-generated artifacts and/or performing ICA automatically?

@sappelhoff
Owner

Lastly, I understand that PyPrep does some pre-processing, but does not deal with eye blink artifacts, or other user-generated muscle or motion artifacts. I just wanted to ask you guys if there is any additional automated pipeline with MNE that you recommend for removing such user-generated artifacts and/or performing ICA automatically?

this tutorial is quite nice for using ICA to repair stereotypical artifacts:

https://mne.tools/stable/auto_tutorials/preprocessing/plot_40_artifact_correction_ica.html#sphx-glr-auto-tutorials-preprocessing-plot-40-artifact-correction-ica-py

@barlehmann
Author

@sappelhoff thank you, I appreciate this suggestion. I had in fact been looking at that tutorial a few weeks ago and was hoping the de-artifacting might be approached in a more automated way (I do not have experience running ICA myself before). For example, one automated way to do this is FASTER (Fully Automated Statistical Thresholding for EEG artifact Rejection), a way of automating the de-artifacting process:
https://www.sciencedirect.com/science/article/pii/S0165027010003894
I wonder if you would caution against using automated methods such as FASTER. I am fairly new to the world of EEG pre-processing, and any suggestions on the validity of such automated methods would be greatly appreciated.

@sappelhoff
Owner

I do not have experience running ICA myself before

I guess you gotta start at some point! :-)

For example, one automated way to do this is - FASTER: Fully Automated Statistical Thresholding for EEG artifact Rejection is a way of automating the process of de-artifacting.

yes, that's a nice paper, I think. @wmvanvliet implemented this in Python a few years ago --> https://gist.github.com/wmvanvliet/d883c3fe1402c7ced6fc (it will probably have to be adjusted for newer MNE versions).

I wonder if you would caution against using automated methods such as faster

I think automatic methods are very nice, because they provide fully reproducible results --> at the same time, these methods may produce garbage (fully reproducible garbage 🤷‍♂️ ), so it's always important to:

  1. inspect your data
  2. select an automatic method (and the method's parameters) such that it makes sense for your data
  3. inspect the outputs

and not blindly feed data through a bunch of methods and hope for the best. Other than this cautionary note, there is nothing against automatic methods. Yet there is also an advantage to going through each preprocessing step manually and working with your data intimately ... because that will in turn give you a better feeling for what the automatic methods do and how to interpret their results.

@wmvanvliet

If you don't have much experience with some specific preprocessing step (e.g. ICA) yet, I strongly recommend doing it manually for the first few times to get a feel for it. The output of automated methods needs to be checked, always, and if you don't know what you're looking for, you're in trouble.

@barlehmann
Author

@sappelhoff and @wmvanvliet thank you both very much for your helpful feedback on the data-cleaning process and the importance of corroborating results with your own ICA; I will definitely need to get started on learning this. Great to know about the FASTER implementation in Python as well!

@barlehmann
Author

barlehmann commented Jun 25, 2020

I have been able to use pyprep for the file I had been working on. I recently tried running a different, somewhat noisier file through pyprep and was getting the timing-out issue that was happening before (even with the notch filter already in place outside of pyprep). When I made the changes @yjmantilla suggested:

prep_params = {'ref_chs': ch_names_eeg,
               'reref_chs': ch_names_eeg,
               'line_freqs': []}

this allowed pyprep to finish running. However, I get the following error:

OSError: Too few channels available to reliably perform ransac. Perhaps, too many channels have failed quality tests.

When viewing the file after the notch filter, it is quite readable, and though it is noisier, I am surprised to get this kind of error message. I also applied a band-pass filter between 1 and 50 Hz, in case the band-pass parameters were too wide to begin with, but that did not change the results either.

Below is the google drive link to the file I am attempting to run PyPrep on

https://drive.google.com/file/d/1lp7X9fI_IPrc_VVT_Pxa4s04bs_Xw6gk/view?usp=sharing

Any thoughts would be appreciated.

@yjmantilla
Collaborator

@barlehmann Yeah, indeed, I'm seeing the problem. The RANSAC is saying that too many channels were marked as bad, so it cannot continue to interpolate (which it needs to do to function). In particular, this will happen if 25% of the number of channels still considered good is less than or equal to 3.

For now, you could disable the RANSAC. For that you would need to set the following in your code:

prep = PrepPipeline(raw_copy, prep_params, montage, ransac=False)

I think @sappelhoff may have some better feedback regarding a workaround, since he directly translated the RANSAC.
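The feasibility condition described above can be sketched as a small helper. This is a simplified sketch based on the comment's description; the exact check in pyprep's source may differ.

```python
# Simplified sketch of the RANSAC feasibility check described above:
# RANSAC interpolates from a subset of good channels, and the error appears
# roughly when 25% of the remaining good channels is 3 or fewer.
# (The exact condition in pyprep's source may differ.)
def ransac_feasible(n_good_channels, fraction=0.25, min_predictors=3):
    return int(n_good_channels * fraction) > min_predictors

print(ransac_feasible(19))  # True: a clean 19-channel montage is enough
print(ransac_feasible(12))  # False: too many channels already marked bad
```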

@sappelhoff
Owner

I think @sappelhoff may have some better feedback regarding a workaround since he directly translated the ransac.

No better feedback, switching off RANSAC is a solution until we have invested more work into this software.

... as it says in the description, this is still ALPHA stage --> so there is a lot to be done to make this code better :)

unfortunately we're all a little short on time, doing our PhDs next to this (and other) project(s).

@barlehmann
Author

barlehmann commented Jun 26, 2020

@yjmantilla thank you so much for the assistance with this. The line of code that sets prep not to use RANSAC works perfectly, and using it is much better than nothing. Your help also makes me understand that the recordings we use will need to be of higher quality. Also, thank you @sappelhoff for clarifying this issue as well; I totally understand, and I greatly value both of your assistance. It's awesome that you have adapted PREP for Python, even if only at an alpha version so far, and I am very glad I found out about this. If there are other free, similar EEG preprocessing pipelines for Python (apart from this and FASTER) that you recommend, I will of course be glad to hear of anything that might be of use.

@yjmantilla
Collaborator

@sappelhoff @barlehmann I think we are ready to close this, right?

@sappelhoff
Owner

👍 it will also be closed automatically with #32

@christian-oreilly
Contributor

Using mne-python PR #7609 and changing the mne.filter.notch_filter(...) call in PrepPipeline.fit() to:

self.EEG_clean = mne.filter.notch_filter(
    self.EEG_new,
    Fs=self.sfreq,
    freqs=linenoise,
    method="spectrum_fit",
    mt_bandwidth=2,
    p_value=0.01,
    filter_length='10s'
)

seems to work fine for me. Without it, the fit() call hangs forever, because scipy.signal.windows.dpss() is called with full-length signals (no epoching), which takes a very long time.
