Replies: 2 comments
-
Are you imaging dendrites or other non-somatic shapes? I would be wary of skipping deconvolution: even if you don't care about spike counts, it can help you get better estimates of C during the source extraction step, especially for lower-SNR traces (this was one of the points of the original CNMF paper: simultaneous denoising, demixing, and deconvolution 😃): https://pubmed.ncbi.nlm.nih.gov/26774160/. Negative values can be very hard to interpret. Have you tried toggling the bas_nonneg parameter?
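For intuition about why the deconvolution step (p >= 1) matters, it models each calcium trace as an autoregressive process driven by nonnegative spiking activity. Below is a minimal toy sketch of the AR(1) case in plain numpy (my own illustration, not CaImAn code):

```python
import numpy as np

# Toy AR(1) calcium model: c[t] = gamma * c[t-1] + s[t],
# where s is a nonnegative spike train (the p=1 deconvolution assumption).
rng = np.random.default_rng(0)
T, gamma = 200, 0.95
s = (rng.random(T) < 0.05).astype(float)  # sparse "spikes"
c = np.zeros(T)
c[0] = s[0]
for t in range(1, T):
    c[t] = gamma * c[t - 1] + s[t]

# Inverting the AR(1) recursion recovers the spikes exactly in this
# noise-free toy: s[t] = c[t] - gamma * c[t-1].
s_hat = np.concatenate(([c[0]], c[1:] - gamma * c[:-1]))
print(np.allclose(s_hat, s))  # True
```

With noise added, the same model lets the solver denoise C while estimating it, which is why skipping deconvolution entirely can hurt low-SNR traces.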
-
Thanks for your reply, and sorry for my slow response; I wanted to test a few things first.

I am imaging non-somatic shapes, and I skip deconvolution by setting p=0 because my neurons are non-spiking. I see the problem above with method_init set to 'greedy_roi' and with 'sparse_nmf'. If I enable deconvolution by setting p=1 or p=2 and set bas_nonneg=False, then I do not see the problem, so thank you for that suggestion. But since my neurons are non-spiking, I do set p=0, and bas_nonneg seems to have no effect when p=0.

My input movie is nonnegative, and it looks like the negative values in the temporal component are introduced at line 214 of temporal.py, in the function 'update_temporal_components', in this line: Can you help me understand the purpose of these lines (the first one, especially)? What is the meaning of the negative values after the first line? I don't understand why so much of the temporal component is getting clipped at the bottom. As I describe above, the clipped signal does not seem to be noise.

Is it possible to make bas_nonneg=False have an effect when p=0? Or will that cause problems? Thank you!
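To illustrate why an elementwise nonnegativity constraint discards below-zero signal, here is a small self-contained example (my own toy, not CaImAn code): in the simplest identity-design case, nonnegative least squares is equivalent to clipping the trace at zero, which is exactly the "bottom-clipped" look described above.

```python
import numpy as np
from scipy.optimize import nnls

# A toy trace with a genuine negative-going response (a dip below
# baseline), like the "filtered raw data" described in this thread.
t = np.linspace(0, 2 * np.pi, 50)
y = np.sin(t)  # goes negative on the second half

# With an identity design, unconstrained least squares recovers y exactly,
# while nonnegative least squares (NNLS) clips everything below zero.
A = np.eye(len(y))
x_free = np.linalg.lstsq(A, y, rcond=None)[0]
x_nn, _ = nnls(A, y)

print(np.allclose(x_free, y))                   # True
print(np.allclose(x_nn, np.clip(y, 0, None)))   # True: negative part is lost
```

Any genuinely negative-going response therefore ends up in the residual rather than in the constrained solution.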
-
Hi
I use CNMF.fit, then CNMF.refit, to extract components from 2-photon GCaMP7f movies.
I use method_init graph_nmf. I skip deconvolution by setting p to 0. I use patches because SNR varies across the FOV.
Below is a temporal component (estimates.C) plotted with estimates.view_components.
It shows visual stimulus-evoked responses from a neuron. It is typical of my extracted traces.
The “inferred trace” (red trace) is estimates.C.
The “filtered raw data” (blue trace) is estimates.C + residuals.
The positive parts of the blue trace are obscured by the red trace.
A linear function of the visual stimulus predicts the “filtered raw data” very well, but not the “inferred trace”.
I expect this linear function to predict this neuron’s responses, so I’ve been using this model as a positive control of my source extractions as I adjust CaImAn input parameters.
It’s clear from looking at the predictions that the model of the “filtered raw data” is better because it includes negative values.
Because of this, I don’t think these negative values are noise.
Based on my modeling, and my expectations, these negative values contain a similar amount of visually-evoked “signal” as the positive values.
But I thought that negative values in estimates.C represent noise.
Currently, in all my analysis, I use estimates.F_dff computed with use_residuals set to true. This gives me normalized traces with the negative parts preserved (i.e., it gives me something similar to a scaled version of the blue trace). And this gives me data that makes sense.
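For reference, the normalization I mean amounts to something like the following running-percentile-baseline dF/F on the residual-included trace. This is only a rough sketch of the idea, not CaImAn's detrend_df_f implementation; the helper name, window size, and percentile are made up for illustration:

```python
import numpy as np
from scipy.ndimage import percentile_filter

def dff_with_residuals(C, residuals, percentile=8, window=100):
    """dF/F on the denoised trace plus residuals, so negative
    excursions below baseline are preserved (hypothetical helper)."""
    F = C + residuals                                   # "filtered raw data"
    F0 = percentile_filter(F, percentile, size=window)  # running baseline
    return (F - F0) / F0

# Toy usage: a baseline-1 trace with one positive and one negative response.
T = np.arange(400)
C = 1.0 + 0.5 * np.exp(-((T - 100) ** 2) / 50)
res = -0.3 * np.exp(-((T - 300) ** 2) / 50)
dff = dff_with_residuals(C, res)
print(dff.min() < 0 < dff.max())  # True: both signs survive normalization
```

Because the baseline is a low percentile of the residual-included trace, excursions both above and below it are kept in the normalized output.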
Can you help me understand these negative values, or why my “inferred trace” seems to be throwing away so much “signal” below zero?
Might my CaImAn input parameters be adjusted to make the “inferred trace” include this “signal”?
Thank you!
Carl