Unable to finish running PyPREP #28
That's weird. I'm taking a look at it.
- Does it show any errors?
- How much time have you let it run without apparently doing anything?
It's weird because 14 minutes of 19 channels at 256 Hz shouldn't be that much of a computational stress. At that point it should specifically be doing the robust referencing, a process that can take a lot of time because of the RANSAC criteria. In my case I've got 5 minutes of 58 channels at 1000 Hz, and that does take a long time to run for a single EEG (between 30 minutes and 1 hour). In your case it shouldn't take that long. Let it run for a maximum of 1 hour; if it does not finish a single EEG recording by then, something weird is going on. By my estimates, yours shouldn't take more than 15 minutes. Do note that even though there is no output in the console, that does not mean it isn't doing anything. Maybe we should add some kind of percentage output to the code that tells the user which stage of the process is running. |
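The "percentage output" idea mentioned above could look something like this hypothetical helper (the function and the stage names are illustrations, not part of pyprep):

```python
import time

# Hypothetical helper sketching the stage-feedback idea: print which
# pipeline stage is currently running, with a timestamp.
def log_stage(name):
    msg = f"[{time.strftime('%H:%M:%S')}] PREP stage: {name}"
    print(msg)
    return msg

log_stage("remove trend")
log_stage("line-noise removal")
log_stage("robust referencing (RANSAC)")
```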
It does not show any errors, and I have let it run over 10 minutes without any result; I agree it is very unlikely that this dataset would take this long. Thank you for your response and attention to this. I would love to be able to use PyPREP for my work, and hope that a solution can be found.
Thank you again, and I'll wait to hear if you have any other thoughts / suggestions,
Bar
|
@barlehmann could you post the code you are using, or at least the parameters you are using to call pyprep? |
@sappelhoff I'm suspecting the problem is actually the multitaper filter; it does take too long. I have checked, and those two filter outputs are from the noisy detector and the removeTrend functions at the start of the prep.fit function. So my guess is that it is getting stuck at the multitaper (it does end at some point, but after a long, long time). I think we will need to include a way to skip that filter. In my personal implementation of pyprep, if prep_params["line_freqs"] is [] then the multitaper filter is skipped; currently, doing that in the master version will just show an error. Maybe we could also include the option to do a normal notch filter (even though it defeats the idea of the filter-agnostic treatment of the original PREP). @barlehmann If the problem is indeed what I'm saying, then there is an easy way to solve this, if you don't mind using a normal notch filter instead of the multitaper method the original PREP uses. |
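A minimal sketch of the skip Yorguin describes (the function name and structure are hypothetical; pyprep's actual internals differ):

```python
# Hypothetical guard illustrating the patch idea: an empty line_freqs
# list means "skip the multitaper line-noise step entirely".
def maybe_remove_line_noise(eeg_data, line_freqs):
    if not line_freqs:  # [] -> skip the expensive multitaper filter
        return eeg_data
    # Otherwise one would apply the multitaper cleaning, e.g. via
    # mne.filter.notch_filter(..., method="spectrum_fit").
    raise NotImplementedError("multitaper cleaning not sketched here")

signal = [0.1, 0.2, 0.3]
assert maybe_remove_line_noise(signal, []) is signal  # untouched when skipped
```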
indeed ... we've had several points now that showed that not all is well with that filter. +1 for a PR to include a way to skip this filter ... that'd provide a short-term workaround. In the long term, we need to prepare a proper (working) implementation of that filter ... or fix it in MNE. |
@sappelhoff I did the pull request with just the skipping. For now, the filtering of the line freqs when the multitaper is skipped is left to the user and should be done before calling pyprep in that case. For the proper solution, I know larsoner proposed something on #18 but I haven't checked it yet. I also finally learnt the deal with pre-commit; it's pretty sweet :) . It was not trivial to make it work on Windows, Anaconda and Git Bash, but I managed. For reference on that: I had to add a line to the .bash_profile file in my user home. With that, Git Bash for Windows was able to detect the Anaconda Python and run the pre-commit stuff (one should activate the Anaconda environment in the bash before doing the commit). |
Cool, glad to hear you managed to install it ... I didn't expect it to be so tricky on Windows 🤔 If you want to, feel free to make an entry about the steps in the Wiki using any format you like. --> That way we can point new contributors to that page instead of having to explain the process again and again. |
Yorguin, I am glad to try the notch filter instead of the multitaper. I actually do not have any code specifying a multitaper filter, but understand that the multitaper is just the default? Can you tell me the code to specify the notch instead of the multitaper?
Thank you very much and waiting to hear,
Bar
|
Yorguin,
In regard to your previous email: the code I am using is below.
!pip3 install git+https://github.com/sappelhoff/pyprep.git
!pip3 install pyprep
import os
import pathlib
import mne
import numpy as np
import scipy.io as sio
import matplotlib.pyplot as plt
from pyprep.prep_pipeline import PrepPipeline
raw = mne.io.read_raw_edf(raw_5meo2, preload=True)
print(raw.info)
print(raw.info['dig'])
ch_names = ['fp1', 'fp2', 'f3', 'f4', 'c3', 'c4', 'p3', 'p4', 'o1', 'o2',
'f7', 'f8', 't3', 't4', 't5', 't6', 'fz', 'cz', 'pz']
ch_types = ['eeg'] * 19  # all 19 channels are EEG
#ch_names = ['FP1', 'FP2', 'F3', 'F4', 'C3', 'C4', 'P3', 'P4', 'O1', 'O2',
#            'F7', 'F8', 'T3', 'T4', 'T5', 'T6', 'FZ', 'CZ', 'PZ', 'A2-A1']
info = mne.create_info(ch_names=ch_names, sfreq=256, ch_types=ch_types)
raw.info = info
print(raw.info)
# The eegbci data has non-standard channel names. We need to rename them:
mne.datasets.eegbci.standardize(raw)
montage_kind = "standard_1020"
montage = mne.channels.make_standard_montage(montage_kind)
#raw.set_montage(montage, match_case=False)
raw.set_montage(montage, match_case=False, raise_if_subset=False)
# in version 20.5 (stable), instead of on_missing='ignore' the argument is
# raise_if_subset=False
print(raw.info['dig'])
eeg_index = mne.pick_types(raw.info, eeg=True, eog=False, meg=False)
print(eeg_index)
ch_names_eeg = list(np.asarray(ch_names)[eeg_index])
print(ch_names_eeg)
sample_rate = raw.info["sfreq"]
# Make a copy of the data
raw_copy = raw.copy()
###############################################################################
# Set PREP parameters and run PREP
# --------------------------------
#
# Notes: we keep all the default parameter settings as described in the PREP
# paper except one, the fraction of bad time windows
# (we change it from 0.01 to 0.1), because the EEG data is 60 s long, which
# means it gets only 60 time windows. We think the algorithm would be too
# sensitive with the default setting.
# Fit prep
prep_params = {'ref_chs': ch_names_eeg,
'reref_chs': ch_names_eeg,
'line_freqs': np.arange(60, sample_rate/2, 60)}
prep = PrepPipeline(raw_copy, prep_params, montage)
prep.fit()
#checking bad channels by PyPREP
print("Bad channels: {}".format(prep.interpolated_channels))
print("Bad channels original: {}".format(prep.noisy_channels_original[
"bad_all"]))
print("Bad channels after interpolation: {}".format(
prep.still_noisy_channels))
|
Hi, I've run into a similar problem. However, I've got quite a large dataset: ~40 min, 64 channels, at 512Hz. I was suspecting that memory issues might be a problem. Is there a way to run certain processes in parallel or otherwise reduce the memory demands without downsampling? Or might this be the same filtering issue discussed above? |
yes, it may very well be related to the filter. Can you try to skip it by using @yjmantilla's patch in #29? You'd have to download the development version of pyprep and make the changes in your local code ... at least until we have finalized and merged the patch |
@SebastianSpeer The issue can arise from either the multitaper or the current version of the RANSAC; both will consume a lot of memory. Given the size of your data, it is not strange for memory issues to arise (40 minutes of 64 channels at 512 Hz is a lot of data). Currently I don't know how to run the multitaper filters in parallel, since I don't know how mne does them internally. I would say it is possible, since one could filter each channel separately, e.g. run the filter in batches of N channels. Regarding pyprep, I did manage to lower the memory requirements of the RANSAC in #24, but I have not been able to finish that pull request because of time issues. The patch already works but is untested; you could look into that. It needs a bit of a workaround, since that branch was done before pyprep 0.3, but fundamentally it just changes two functions of find_noisy_channels.py: run_ransac and find_bad_by_ransac. @sappelhoff What wiki are you referring to? The CONTRIBUTING.md? I used your link but it just redirects to the root of the project. @barlehmann Yes, the multitaper is the default of pyprep. The notch filter function is this one: https://mne.tools/stable/generated/mne.filter.notch_filter.html?highlight=notch%20filter For examples of use, check the power line section of https://mne.tools/stable/auto_tutorials/preprocessing/plot_30_filtering_resampling.html#power-line-noise In the example, the filter should be applied to the object entering the prep pipeline, so you would apply it to your raw_copy object. Note that your frequencies are those of your power line, usually either 60 Hz or 50 Hz, plus their harmonics. You would need to do this before applying the pyprep pipeline, assuming you applied the patch to skip the multitaper filter. |
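To make the pointers above concrete, here is a hedged sketch of preparing the line frequencies before calling pyprep (it assumes a 60 Hz mains supply and a 256 Hz sampling rate; the commented-out call uses MNE's standard Raw.notch_filter method on the raw_copy object from the code earlier in the thread):

```python
import numpy as np

sfreq = 256.0  # sampling rate, e.g. raw_copy.info["sfreq"]
# Power-line frequency and its harmonics below Nyquist:
line_freqs = np.arange(60.0, sfreq / 2.0, 60.0)
print(line_freqs.tolist())  # [60.0, 120.0]
# raw_copy.notch_filter(freqs=line_freqs)  # apply before PrepPipeline
```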
sorry, there was a setting that prevented contributors from contributing 😆 --> should work now: https://github.com/sappelhoff/pyprep/wiki |
@yjmantilla I've already disabled ransac in the prep pipeline and decided to run it on the epoched data to reduce memory demands (not sure if this is a good idea?). So I suspect this would be the issue. Do I understand correctly that to fix the issue only the notch filter would need to be removed from the pipeline? Would it be possible to also reduce memory demands by loading the raw data using memmapping, for example with preload='./tempfile' in the raw reading function? |
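As an aside on the memmapping idea: passing a file path as preload makes MNE store the data in a memory-mapped file on disk. The underlying mechanism can be illustrated with a plain NumPy memmap (the shape and path here are illustrative):

```python
import os
import tempfile
import numpy as np

# 64 channels x 1000 samples stored on disk and paged in on demand,
# instead of being held entirely in RAM.
path = os.path.join(tempfile.mkdtemp(), "tempfile.dat")
mm = np.memmap(path, dtype="float64", mode="w+", shape=(64, 1000))
mm[0, :] = 1.0
mm.flush()

# Reopen read-only: the values come back from disk, not from RAM.
reloaded = np.memmap(path, dtype="float64", mode="r", shape=(64, 1000))
print(float(reloaded[0, 0]))  # 1.0
```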
@SebastianSpeer As far as I know, removing the notch will correct the memory issue, given that you also don't have the ransac. As for the second option, I cannot comment because I have never done that before. I think it could be possible, but one would need to see exactly how the data is needed by the multitaper while it is running. @barlehmann Could you solve the problem? |
@yjmantilla thank you for asking. I have not been able to figure it out yet; any other suggestions would be highly appreciated. I tried using the code/patch you suggested: freqs = (60, 120). And perhaps I do get slightly further than before, but I still get the same timing-out type problem that was happening before. Below are the results (though the process never stops running, as mentioned before): mne 0.20.7, followed by the digitization points for LPA, Nasion, RPA and the EEG channels, the "FIR filter parameters / Designing a one-pass, zero-phase, non-causal bandstop filter" output, a DeprecationWarning, and then "Setting up high-pass filter at 1 Hz" with the highpass FIR filter design output, after which it stalls. |
@barlehmann could you post the exact, complete code you are using? From what I can see, it seems possible that you are running a normal notch filter outside prep, and then inside prep the multitaper is still executing. If it isn't that, then it may be getting stuck at the ransac. In any case, on Saturday I will have more free time if you want to do a Google Meet or something to solve this. |
@barlehmann Indeed, I just checked the code and the multitaper is probably still running. You correctly use a notch filter before the prep, but the mistake is in the line_freqs you pass to prep_params:
set line_freqs to an empty list.
Assuming you have the patch I did, it should then skip the filter and enter the robust referencing stage of the prep. If you don't have the patch, it will probably throw an error. |
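Concretely, assuming the skip patch from #29 is installed, the parameters would look something like this (ch_names_eeg here is a placeholder; use the real channel list from your own code):

```python
ch_names_eeg = ["Fp1", "Fp2", "Cz"]  # placeholder channel list

prep_params = {
    "ref_chs": ch_names_eeg,
    "reref_chs": ch_names_eeg,
    # Empty list -> the patched pipeline skips the multitaper filter;
    # notch filtering must then be done before calling pyprep.
    "line_freqs": [],
}
```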
The patch has been merged, so it can be obtained by just installing the "development" version of pyprep as per the instructions |
@yjmantilla Thank you so much, I actually got it to work right before you sent your last message; I really appreciate your help through this. To give you more specific information on what changed for it to work: I had been having trouble with MNE reading the channel names from my .edf file, so I had to spell them out in my code: ch_names = ['fp1', 'fp2', 'f3', ...] (the full list from my code above). However, this coding was in some way (I am not aware exactly how) causing or contributing to the non-finishing pyprep. When I replaced the above code that renames my channels with the following line of code: raw.rename_channels(lambda s: s.split(' ')[1].split('-')[0]) the problem disappeared! Also thank you @sappelhoff for your information on the patch; I greatly appreciate your support through this. Lastly, I understand that PyPrep does some pre-processing but does not deal with eye-blink artifacts or other user-generated muscle or motion artifacts. I just wanted to ask if there is any additional automated pipeline with MNE that you recommend for removing such user-generated artifacts and/or performing ICA automatically? |
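The renaming lambda can be tried on typical EDF-style labels (the example labels like "EEG Fp1-REF" are assumptions about the file's naming scheme):

```python
# Strips an "EEG " prefix and a "-REF"/"-A1" style suffix from a label:
# "EEG Fp1-REF" -> take the second space-separated token ("Fp1-REF"),
# then keep everything before the first hyphen ("Fp1").
rename = lambda s: s.split(' ')[1].split('-')[0]

print(rename("EEG Fp1-REF"))  # Fp1
print(rename("EEG Cz-A1"))    # Cz
```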
this tutorial is quite nice for using ICA to repair stereotypical artifacts: |
@sappelhoff thank you, I appreciate this suggestion. In fact, I was looking at that tutorial a few weeks ago, and was hoping some of the artifact removal might be approached in a more automated way (I have no experience running ICA myself). For example, one automated approach is FASTER (Fully Automated Statistical Thresholding for EEG artifact Rejection). |
I guess you gotta start at some point! :-)
yes, that's a nice paper, I think. @wmvanvliet implemented this in Python a few years ago --> https://gist.github.com/wmvanvliet/d883c3fe1402c7ced6fc It'll probably have to be adjusted for newer MNE versions.
I think automatic methods are very nice, because they provide fully reproducible results --> at the same time, these methods may produce garbage (fully reproducible garbage 🤷‍♂️), so it's always important to inspect what they do,
and not blindly feed data through a bunch of methods and hope for the best. Other than this cautionary note, there is nothing against automatic methods. Yet there is also an advantage to going through each preprocessing step manually and working with your data intimately ... because that will in turn give you a better feeling for what the automatic methods do and how to interpret their results. |
If you don't have much experience with some specific preprocessing step (e.g. ICA) yet, I strongly recommend doing it manually for the first few times to get a feel for it. The output of automated methods needs to be checked, always, and if you don't know what you're looking for, you're in trouble. |
@sappelhoff and @wmvanvliet thank you both very much for your helpful feedback on the data-cleaning process and the importance of corroborating with your own ICA; I will definitely need to get started on learning this. Great to know about the FASTER implementation in Python as well! |
I have been able to use PyPrep for the file I had been working on. I recently tried using a different, somewhat noisier file in pyprep and was getting the timing-out issue that was happening before (even with the notch filter already in place outside of PyPREP), and this happened even after I made the changes @yjmantilla suggested. When viewing the file after the notch filter it is quite readable, and though it is noisier, I am surprised to get this kind of error message. I also created a bandpass filter between 1-50 Hz in case the bandpass parameters were too wide to begin with, but that did not change the results either. Below is the Google Drive link to the file I am attempting to run PyPrep on: https://drive.google.com/file/d/1lp7X9fI_IPrc_VVT_Pxa4s04bs_Xw6gk/view?usp=sharing Any thoughts would be appreciated. |
@barlehmann Yeah, indeed I'm seeing the problem. The RANSAC is saying that too many channels were marked as bad, so it cannot continue to interpolate (which it needs to do to function). In particular, this will happen if 25% of the number of good channels so far is less than or equal to 3. For now you could disable the RANSAC; for that you would need to change how you call the pipeline in your code.
I think @sappelhoff may have some better feedback regarding a workaround, since he directly translated the ransac. |
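A sketch of switching RANSAC off (the ransac keyword of PrepPipeline is an assumption about the pyprep version in use; verify it against the signature of your installed version):

```python
# Flag controlling whether the pipeline runs RANSAC; with too few good
# channels the interpolation step cannot run, so we disable it.
use_ransac = False

# Hypothetical call, mirroring the thread's earlier code (raw_copy,
# prep_params and montage as defined there):
# prep = PrepPipeline(raw_copy, prep_params, montage, ransac=use_ransac)
```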
No better feedback; switching off RANSAC is a solution until we have invested more work into this software ... as it says in the description, this is still ALPHA stage --> so there is a lot to be done to make this code better :) Unfortunately, we're all a little short on time, doing our PhDs next to this (and other) project(s). |
@yjmantilla thank you so much for the assistance with this. The line of code that sets prep not to use ransac works perfectly; using this code is much better than nothing. Your help also makes me understand that the recordings we use will need to be of a higher quality. Also thank you @sappelhoff for clarifying this issue; I totally understand and greatly value both of your assistance. It's awesome that you have adapted PyPrep for Python, even if only at an alpha version so far; I am very glad I found out about this. If there are other free, similar EEG preprocessing pipelines for Python (apart from this and FASTER) that you recommend, I will of course be glad to hear of anything that might be of use. |
@sappelhoff @barlehmann I think we are ready to close this, right? |
👍 it will also be closed automatically with #32 |
Using mne-python PR #7609 and changing the notch-filter call to
self.EEG_clean = mne.filter.notch_filter(
    self.EEG_new,
    Fs=self.sfreq,
    freqs=linenoise,
    method="spectrum_fit",
    mt_bandwidth=2,
    p_value=0.01,
    filter_length='10s',
)
seems to work fine for me. Without it, the |
Whenever I try to run PyPREP, I get the following output, which seems promising; however, it stalls there and continues to act as if it is running, and no other output comes out. Any assistance would be greatly appreciated. (I also tried running this on both the present master version v20 and the development version, and the same results occur with each one.)
EDIT by @sappelhoff, 2020-06-06: I added backticks to format the following as a codeblock. Please see the guidelines here, @barlehmann