
Common Average Re-referencing as a Substitute for Low-Pass Filtering? #166

Closed
djsoper opened this issue Jan 2, 2020 · 4 comments · Fixed by #595

Comments


djsoper commented Jan 2, 2020

No description provided.

@djsoper djsoper closed this as completed Jan 2, 2020
@djsoper djsoper reopened this Jan 2, 2020
Author

djsoper commented Jan 2, 2020

I've been wondering for a while how Kilosort2 avoids high-frequency noise without explicitly using a low-pass filter at 7000 Hz or so. I just realized it substitutes a common average re-reference (CAR) for a low-pass filter. Hopefully you can clear this up for me, because I'm amazed that doesn't cause loss of units that appear across channels, or propagate lots of noise throughout the data. Considering this works on Neuropixels probes, which have pretty good SNR, I'm guessing the effects on the data aren't as bad as I imagine them to be, but I'm still a bit lost. Can someone explain why using CAR on electrodes with contacts <60 microns apart wouldn't ruin the spiking activity seen across channels?

That being said, can I easily modify the pipeline to replace the CAR with a low-pass filter so that I don't need to worry about this, or do you foresee that causing issues for your sorting algorithm?
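For concreteness, here is a minimal NumPy/SciPy sketch of the two operations I'm comparing (toy data; the sampling rate, channel count, and filter order are my assumptions, not anything from the Kilosort2 code):

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
fs = 30000  # sampling rate in Hz (assumed)
data = rng.standard_normal((16, fs))  # channels x samples, toy recording

# Common average reference: subtract the across-channel mean at each sample.
car = data - data.mean(axis=0, keepdims=True)

# Low-pass filter at 7000 Hz (the cutoff mentioned above), per channel.
b, a = butter(3, 7000, btype="low", fs=fs)
lowpassed = filtfilt(b, a, data, axis=1)
```

The CAR acts purely across channels at each time point, while the low-pass acts purely along time within each channel, which is why I'm unsure they are interchangeable.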

Thanks,
Dan

@marius10p
Contributor

In my experience, low-passing never really helped, unless you have some very high-frequency noise or something. This was also true of algorithms not based on template matching, because a common first step is to project the spikes onto a set of principal components, which are typically smooth. This means the projection roughly achieves a low-pass as well.
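That point can be illustrated with a toy sketch (not from the Kilosort code; the polynomial basis just stands in for smooth principal components, which in practice come from the data):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 61)

# A smooth orthonormal basis standing in for the principal components
# (illustrative: low-order polynomials orthonormalized via QR).
basis, _ = np.linalg.qr(np.vander(t, 3, increasing=True))  # 61 x 3

clean = np.exp(-((t - 0.5) ** 2) / 0.02)           # smooth spike-like bump
noisy = clean + 0.5 * rng.standard_normal(t.size)  # plus broadband noise

# Projection onto the smooth basis: this is the implicit low-pass.
projected = basis @ (basis.T @ noisy)

def hf_power(x):
    """Signal power above ~10 cycles per window."""
    return (np.abs(np.fft.rfft(x)) ** 2)[10:].sum()
```

Comparing `hf_power(noisy)` with `hf_power(projected)` shows the projection strips most of the high-frequency content even though no explicit filter was applied.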

Moving on to template matching algorithms and Kilosort2, we've observed that low-passing actually significantly hurts. This is because spikes do have a significant amount of power at high frequencies, which can increase the SNR of the template match. Now, you might ask why the high-frequency noise doesn't hurt us in this case, and the answer has to do with wavelets. A hand-wavy explanation is that wavelets are very specific spatiotemporal filters, so they can reject most of the noise after thresholding.

I'm sure there are better mathematical explanations for this effect, but the empirical result for me has always been that low-passing hurts.

The CAR is a different problem, and it's in fact more similar to channel whitening, which we also do. That also has a big effect, because any waves that are big on many channels are typically noise, and only the sharp features of waveforms are really useful. Of course, if you have lots of channels, the CAR won't really eat into your spike variance and will mostly remove the synchronized noise, but even for fewer channels it should only have the same effect as whitening. Anyway, to avoid small-channel-count effects, we always advise people to process the entire experiment together, even if the channels have no chance of cross-talk.
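A minimal sketch of channel whitening on toy data (an illustration using ZCA whitening on the full covariance, not Kilosort2's actual implementation; the shared-noise model is assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
n_chan, n_samp = 8, 20000

# Toy data: a large component shared by all channels (synchronized noise)
# plus small independent per-channel signals.
common = rng.standard_normal(n_samp)
data = 0.2 * rng.standard_normal((n_chan, n_samp)) + common

# ZCA whitening: symmetric inverse square root of the channel covariance.
cov = data @ data.T / n_samp
eigval, eigvec = np.linalg.eigh(cov)
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
white = W @ data

# The whitened channel covariance is the identity, so the large
# synchronized component is strongly suppressed.
```

In this framing the CAR, `data - data.mean(axis=0)`, is the simpler operation that removes only the across-channel mean, while whitening adaptively suppresses whatever directions carry the most shared power.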

Author

djsoper commented Jan 13, 2020

Thank you very much for your thorough response! I hadn't thought of the PCA projection as a way of low-pass filtering, but it makes sense to me. I'll have to do a little more digging to understand wavelets, but it's nice to know that the noise apparently isn't that impactful.

The CAR problem (and I'm not sure how impactful whitening is in this vein) still stands in my mind. Full disclosure: we are doing intracranial experiments in humans with novel electrodes, so our use case is not a perfect fit for this algorithm. A worry for me is that we often manipulate the brain state (e.g. cold saline, methohexital), which might introduce new baselines and create or lose waveforms after a CAR. Do you think these issues are avoidable with CAR? I'm also wondering where you draw the line for "lots of channels." Our typical micros, which have the same contact spacing as Neuropixels but a much higher SNR, have only 128 channels. Is that enough to mitigate the spike-variance degradation? Is there an easy way to disable the whitening to see how much it affects our final sorted spikes?

Thanks again for your very helpful response!
-Dan

@marius10p
Contributor

128 channels should definitely be enough.
