Resample function optimizations #2035
Comments
the culprit is the resample method from scipy.signal, which honestly sucks... it does a naive FFT over the entire signal length... the problem has to be fixed in scipy.
you might want to try: http://scikits.appspot.com/samplerate
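As a rough illustration of the point above (an editorial sketch, not code from this thread): scipy.signal.resample is FFT-based, so its runtime depends on how well the signal length factorizes, and zero-padding to a power of two before resampling is one common workaround. The length and the pad/trim logic below are arbitrary choices for demonstration.

```python
# Editorial sketch: FFT-based resampling on an awkward length vs. the same
# data zero-padded to the next power of two. Exact timings depend on the
# length's factorization and the SciPy version.
import time

import numpy as np
from scipy.signal import resample

rng = np.random.RandomState(0)
n = 3000000 - 7                       # an arbitrary, non-power-of-two length
x = rng.randn(n)

t0 = time.time()
y_direct = resample(x, n // 5)        # e.g. 5 kHz -> 1 kHz, FFT over all n samples
print("direct:", time.time() - t0)

n_pad = int(2 ** np.ceil(np.log2(n)))             # next power of two
x_pad = np.concatenate([x, np.zeros(n_pad - n)])  # zero-pad the tail
t0 = time.time()
y_pad = resample(x_pad, n_pad // 5)[: n // 5]     # resample, then drop the padding
print("padded:", time.time() - t0)
```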
In my experience CUDA is about an order of magnitude faster. I'm pretty sure that was both for FIR filtering and resampling. What would really help the CPU case is using …
Yeah, the scipy signal processing is the one thing that makes me miss matlab (just a little bit). I've tried creating functions like …
+1 for the string idea, we already do something similar for filter lengths
Did you already begin an upfirdn contribution to scipy? That may be outside of my signal processing chops :/
No, and it's going to take some time to get right. So for now the padding to power of 2 should help.
Someone actually has a SWIG'ed version available they gave me permission to relicense as BSD for scipy, but I'm not sure how complex that problem is going to be.
Ah ok - for the resample string thing, do you know if there is a function … Actually another thought might be to allow someone to specify the total …
No, I don't know of one. If memory is an issue, one thing that might fix that is iterating over channels instead of operating on them as a contiguous block (e.g., as …
Ya that is true - if you do it for one channel at a time it's not a big …

```python
# rough pseudocode: resample one channel at a time to limit memory
all_res_chans = []
for chan in channels:
    chan = pad_to_power_2(chan)                     # pad so the FFT length is a power of two
    chan_res = resample(chan)                       # FFT-based resample of the padded channel
    all_res_chans.append(remove_padding(chan_res))  # trim back to the expected output length
make_back_into_Raw(all_res_chans)
```

or something like this
Yeah, that's the pseudocode anyway. Getting it to work with the current resampling might take some work, but should be doable, at least for the … You're welcome to take a stab at it if you have time. I think if …
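For reference, a hedged, runnable sketch of the per-channel idea discussed above (not the implementation that eventually landed in MNE); the function name, the power-of-two padding, and the trimming logic are illustrative assumptions:

```python
# Editorial sketch of channel-by-channel resampling with power-of-two padding,
# keeping peak memory near one padded channel rather than the full array.
import numpy as np
from scipy.signal import resample


def resample_by_channel(data, sfreq, new_sfreq):
    """Resample a (n_channels, n_samples) array one channel at a time."""
    n_samples = data.shape[1]
    n_pad = int(2 ** np.ceil(np.log2(n_samples)))      # next power of two
    n_out = int(round(n_samples * new_sfreq / sfreq))  # samples to keep per channel
    n_out_pad = int(round(n_pad * new_sfreq / sfreq))  # output length of the padded signal

    out = np.empty((data.shape[0], n_out))
    for idx, chan in enumerate(data):                  # one channel at a time
        chan_pad = np.concatenate([chan, np.zeros(n_pad - n_samples)])
        chan_res = resample(chan_pad, n_out_pad)       # FFT-based resample
        out[idx] = chan_res[:n_out]                    # drop the padded portion
    return out
```

Edge artifacts near the padding boundary and getting the output length exactly right are the fiddly parts alluded to above; wrapping the result back into a Raw object is left out here.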
mm that's a good idea. I will try to include this if I finish my stuff for …
Hopefully I'll have time to actually make this PR work and get it into …
An update -- this should be made ~2x faster in the near term (months) by scipy/scipy#5592 and the related scipy/scipy#5610. By my estimate, using …
That's great - looking forward to not dreading my resampling steps :)
FYI scipy/scipy#5610 (…
wohoo! wheels are a-turning :)
Indeed, v. cool.
Currently it takes me quite a long time to carry out a resample. I've got about 120 channels with about 10 minutes of data sampled at 5 kHz, which equals about 120 x 3,000,000 data points. Obviously my first step in this process is to resample the data so that it's not so densely sampled. However, this can take a really long time and use a lot of memory.
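For scale, a quick back-of-the-envelope calculation (an editorial sketch, assuming float64 samples) of the array described above:

```python
# Rough size of 120 channels x 10 minutes at 5 kHz, stored as float64.
n_channels = 120
sfreq = 5000                       # Hz
n_samples = sfreq * 10 * 60        # 3,000,000 samples per channel
n_values = n_channels * n_samples  # 360,000,000 values in total
print(n_values * 8 / 1e9, "GB")    # ~2.9 GB before any FFT padding or copies
```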
I have two questions regarding this:
Anyway, just trying to see if anyone has thoughts on improving the speed of these functions. If more people who do ECoG research (and thus have little control over the recording parameters) start using MNE, it may prove useful.