
What amount of drift and fr / amplitude changes are expected during day to day alignment? #397

Closed
achristensen56 opened this issue May 12, 2021 · 6 comments · Fixed by #595

Comments

@achristensen56

Hi All!

Similar to other discussions on tracking, I was wondering if we could have a discussion about the amount of drift and noise that's expected from day to day, and what amount is indicative of a probe with too much drift to successfully do day to day alignment.

I've read the NP2.0 paper and looked at those examples, but, like all papers, I assume those are rather best-case scenarios. Would people be willing to share some examples of "good" day-to-day alignment drift maps and neural features, and "bad" ones, and qualitatively how you make those decisions? We can of course use the quantitative metrics in the KS/NP paper, but we'd also like to get a qualitative feel for this type of data, and what quality we should be aiming for based on community wisdom.

I'm attaching the KS output from a day-to-day alignment we recently attempted. These sessions were recorded on consecutive days with a chronic probe in an awake, behaving rat. Because of technical failures of our recording setup, we also had to align many individual sessions within each day -- but as you can see, the large shift in the middle is the "day-to-day" boundary and is much more poorly registered than the almost imperceptible shifts between sub-sessions on the same day. All told, the data below are 5 separate sessions aligned across two different days.

[Figures 1–3: drift maps / neural features from the concatenated alignment]

Here are some screenshots from Phy of the resulting sorting output. I've attached a neuron that seems 'good' and one that seems questionable. In all cases there are basically zero ISI violations, amplitude is > 100, and min firing rate > 0.1.
Good:
[Phy screenshots: AmplitudeView, FiringRateView, WaveformView]
Questionable:
[Phy screenshots: AmplitudeView, FiringRateView, FeatureView, WaveformView]

Very interested in any insight the community might have, in particular if you have suggestions for how to improve our day to day alignment!

@achristensen56

Hi all,

I just wanted to continue this conversation, as I've done some more experiments that basically led me to more questions than answers.

If I take recordings from two consecutive days and just look at the activity on each channel, it looks totally identical -- including the distribution of noise, spikes, etc. Really, every channel looks exactly the same; in terms of overall statistics and general properties, it could have been the same recording file. So great, super stable recording.

Except when I run KS2.5 on the concatenated files, it sometimes looks even worse than what I pasted above: a ~60 um shift, and subjectively no alignment on the spikes-vs-depth plot. But a 60 um shift shouldn't be possible, since I know every single channel is basically perfectly stable. So what gives? Very confused. We are considering running this with no registration at all, but that doesn't seem ideal... Has anyone run into a similar situation?
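A quick way to sanity-check the shift independently of KS is to cross-correlate per-channel activity profiles (e.g. total spike-band event counts per channel) between the two sessions. This is just a minimal numpy sketch, not KS's actual datashift code; `estimate_probe_drift` and the toy profiles are invented for illustration:

```python
import numpy as np

def estimate_probe_drift(profile_a, profile_b, max_shift=20):
    """Estimate vertical drift (in channel units) between two sessions by
    cross-correlating their per-channel activity profiles. Returns the
    shift needed to bring session B's profile back onto session A's."""
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(profile_b, s)
        # zero out wrapped-around entries so they don't bias the score
        if s > 0:
            shifted[:s] = 0
        elif s < 0:
            shifted[s:] = 0
        c = np.dot(profile_a, shifted)
        if c > best_corr:
            best_corr, best_shift = c, s
    return best_shift

# toy example: session B is session A displaced by +3 channels
rng = np.random.default_rng(0)
day1 = rng.poisson(5.0, size=384).astype(float)
day2 = np.roll(day1, 3)
print(estimate_probe_drift(day1, day2))  # → -3 (shift that realigns B to A)
```

If the per-channel statistics really are near-identical across days, this kind of estimate should come out close to zero channels, which would support the suspicion that the 60 um shift is a registration artifact rather than real drift.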

@marius10p you mentioned in another thread that alignment works best in KS2.0? What's the failure mechanism in 2.5 and 3.0? Have you seen this before, where qualitatively stable recordings are shifted far too much by KS2.5? Any suggestions for parameters I should change?

@achristensen56

Another update (maybe someday this will be useful to someone!)

Running KS2.5 with rigid registration seems to have more or less fixed our alignment issues. In fact, even single-day recordings look better with rigid registration.
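For anyone landing here: if I remember the KS2.5 config correctly, the rigid/non-rigid switch is the `ops.nblocks` field in the ops struct (please double-check against your own config file):

```matlab
% Kilosort 2.5 drift-correction setting (config/ops struct)
ops.nblocks = 1;   % 1 = rigid registration over the whole probe
                   % 0 = registration off; >1 = non-rigid, using that many blocks
```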

@TomBugnon

Hi @achristensen56

Reading this last point, I suspect this issue is related to something I've noticed as well while working with a dataset with very large (non-rigid) drift (up to 150 um). I've been meaning to open an issue too but haven't finished double-checking my work.

I noticed two things that might cause issues with datasets with large drift, in particular non-rigid drift. I'm not sure they apply to your case, but you might want to have a look anyway:

  • The maximum possible drift for the whole probe and, respectively, for each block of channels after aligning the whole probe to the template are hardcoded in align_block2 (here and here).

  • In align_block2, at each step of the main loop of rigid alignment, the template to which each batch's fingerprint is aligned is obtained by iteratively averaging the rigidly aligned fingerprints. There is then a final step that (non-rigidly) aligns each block×batch fingerprint to the (rigid) template. Because the non-rigid alignment is not reflected in the template, the target fingerprint looks pretty bad in regions exhibiting large-amplitude non-rigid (local) drift, which impairs the alignment of each block.
    I've "fixed" the algorithm by adding a loop of non-rigid alignment during which the template is updated. The corresponding commit is here; feel free to give it a shot, I'm curious whether it helps: CSC-UW@70dbf19. I'm not done testing it yet, though.
    Here are the templates returned by align_block with the default vs. modified algorithm (better alignment on the bottom):
    [image: template comparison, default vs. modified algorithm]
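For intuition, the align-then-update-the-template loop described above can be sketched in a few lines of numpy. This is a toy illustration of the general scheme, not the actual align_block2 code; the function names and the fingerprint construction are invented:

```python
import numpy as np

def best_shift(fp, template, max_shift=15):
    """Return the integer vertical shift that best aligns fp to template."""
    shifts = list(range(-max_shift, max_shift + 1))
    scores = [np.dot(np.roll(fp, s), template) for s in shifts]
    return shifts[int(np.argmax(scores))]

def align_batches(fingerprints, n_iter=5, max_shift=15):
    """Rigid alignment: repeatedly align each batch's fingerprint to a
    template, then rebuild the template from the aligned fingerprints,
    so the target reflects the latest alignment."""
    template = fingerprints.mean(axis=0)
    shifts = np.zeros(len(fingerprints), dtype=int)
    for _ in range(n_iter):
        shifts = np.array([best_shift(fp, template, max_shift)
                           for fp in fingerprints])
        aligned = np.stack([np.roll(fp, s)
                            for fp, s in zip(fingerprints, shifts)])
        template = aligned.mean(axis=0)  # template updated every iteration
    return shifts, template

# toy example: three batches whose probe drifted by 0, 2 and 4 channels
base = np.zeros(100)
base[[10, 40, 70]] = [1.0, 2.0, 3.0]
fps = np.stack([np.roll(base, d) for d in (0, 2, 4)])
shifts, template = align_batches(fps)
print(shifts[0] - shifts[1], shifts[0] - shifts[2])  # → 2 4
```

The point of the modification is the line marked "template updated every iteration": if the template is frozen while the (non-rigid) alignment moves on, the target smears out exactly in the high-drift regions where a sharp target matters most.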

Another possibility is https://github.com/evarol/NeuropixelsRegistration, which I suppose/hope will eventually be integrated into Kilosort.

Cheers

@achristensen56

Hi @TomBugnon
thanks for the comment! I'm so sorry I missed this response. I will look into trying your version of the registration, or the NeuropixelsRegistration code you linked. Thanks so much!
Amy

@achristensen56

Update @TomBugnon: I tried your non-rigid target code; unfortunately it doesn't fix my issues. Even when the drift-map alignment looks good by eye, KS often (basically always!) splits cells into day-1 and day-2 units. I can merge them in Phy, but that's not ideal.

@TomBugnon

@achristensen56 Sorry to hear that. Would you care to share the drift map (or ideally a zoom of it) from both versions of the algorithm?
