
Interpreting Deconvolution Results #267

Closed
dba91 opened this issue Dec 28, 2019 · 4 comments

@dba91

dba91 commented Dec 28, 2019

Hello, I had a question regarding how to interpret the results of the Python suite2p deconvolution. It's not clear to me what the y-axis of the deconvolution result means.
Does the height of a spike indicate the number of spikes occurring at that time point? That's what Step 3 from Figure 1 in this paper suggests:

"Step 3. Spike deconvolution is performed on the neuropil-corrected traces that represent the average activity of pixels inside an ROI. The result is a trace the same size as the fluorescence trace, containing estimates of the number of spikes in every bin."

On the other hand, I see others are thresholding the output to figure out when true spikes occur. Here another researcher on the MouseLand Issues page suggests as much: cortex-lab/Suite2P#157
And in the original OASIS paper, I notice that Figure 5 mentions thresholding can be helpful. (To be honest, I'm not sure whether the output of the Python suite2p is the same as L1 on that graph, so any clarification would be helpful.)

I'm overall confused because the amplitudes of the spikes I'm getting are in the hundreds and thousands, which makes me think I need to threshold. But if I threshold, I don't have an actual sense of how many spikes occurred, nor can I be confident comparing spiking activity between cells/sessions.

@pkells12

Curious about the answer to this as well.

@marius10p
Contributor

Good question, we should have clarified this somewhere in that review. There is an unknown scaling factor between fluorescence and the number of spikes, which is very hard to estimate. This is true for the raw dF or dF/F as well as for the deconvolved amplitudes, which we usually treat as arbitrary units. The same calcium transient amplitude may have been generated by a single spike or by a burst of many spikes, and for many neurons it is very hard to disentangle these, so we don't try. A few spike deconvolution algorithms do try to estimate single-spike amplitude (look up "MLspike"), but we are generally suspicious of the results, and we usually have no need for absolute numbers of spikes.

As for the question of thresholding, we always recommend against it, because you will lose information. More importantly, you will treat 1-spike events the same as 10-spike events, which isn't right. There are several L0-based methods that return discrete spike times, including one we developed in the past, which we've since shown to be worse than the vanilla OASIS method (read this). We do not use L1 penalties either, departing from the original OASIS paper, because we found that it hurts in all cases (again, read this).
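To make the "losing information" point concrete, here is a minimal NumPy sketch with hypothetical deconvolved amplitudes. The values and the threshold are made up for illustration; they are not suite2p outputs or recommended settings.

```python
import numpy as np

# Hypothetical deconvolved trace in arbitrary units: one small event
# and one large (burst-like) event.
spks = np.array([0.0, 120.0, 0.0, 0.0, 1400.0, 0.0])

# Thresholding to binary spike times treats both events identically...
binary = (spks > 100.0).astype(int)

print(binary.sum())      # both approaches see 2 events
print(spks[1], spks[4])  # ...but only the amplitudes keep the ~10x difference
```

Binarizing collapses the roughly tenfold amplitude difference between the two events, which is exactly the information the deconvolved trace is meant to carry.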

If you need to compare between cells, you would usually be comparing effect sizes, such as tuning width, SNR, choice index, etc., which are relative quantities, i.e. firing rate 1 / firing rate 2. If you really need to compare absolute firing rates, then you need to normalize the deconvolved events by the F0 of the fluorescence trace, because the dF/F should be more closely related to absolute firing rate. Computing the F0 has problems in itself, as it may sometimes be estimated to be negative or near-zero for high-SNR sensors like GCaMP6 and 7. You could take the mean F0 before subtracting the neuropil and normalize by that, and then decide on a threshold to use across all cells, but at that point you need to realize that these choices will affect your result and interpretation, so you cannot really put much weight on them. For these reasons, I would avoid making statements about absolute firing rates from calcium imaging data, and I don't know of many papers that make such statements.
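A rough sketch of the F0 normalization described above. The array names `F` (raw fluorescence, cells x time, before neuropil subtraction) and `spks` (deconvolved traces) follow suite2p's output format, but the per-cell percentile baseline here is an illustrative choice, not suite2p's internal method.

```python
import numpy as np

def normalize_by_f0(spks, F, percentile=10):
    """Scale deconvolved events by a per-cell baseline F0 (illustrative)."""
    # F0 per cell, estimated as a low percentile of the raw trace
    f0 = np.percentile(F, percentile, axis=1, keepdims=True)
    # Guard against the negative / near-zero baselines mentioned above
    f0 = np.maximum(f0, 1e-6)
    return spks / f0

# Fake data standing in for suite2p's F.npy and spks.npy
rng = np.random.default_rng(0)
F = 100.0 + 5.0 * rng.standard_normal((3, 500))
spks = 200.0 * np.abs(rng.standard_normal((3, 500)))

spks_norm = normalize_by_f0(spks, F)
```

The normalized traces are still in arbitrary units; the division only removes per-cell differences in baseline brightness, which is the most one can hope for without a known fluorescence-to-spike scaling.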

@dba91
Author

dba91 commented Jan 15, 2020

Hi Marius, this explanation is helpful, thank you. And you are right, mostly we do need relative rates.
The situation I was imagining where absolute rates might be useful would be tracking cells across days. We might want to know if the baseline activity of a cell changes across days. But I know the fluorescence properties of the cells can change over time, so maybe this isn't very feasible/realistic. Perhaps normalizing by F0 makes sense in these niche circumstances.

@carsen-stringer
Member

carsen-stringer commented Mar 1, 2020

I've added parts of this discussion to a new FAQ here; going to close this issue.
