In my understanding, although Neuropixels probes are not perfectly linear, channel indices correlate with electrode depth. Hence, if a spike has a large peak/trough on channel ic, one can expect to find peaks on nearby channels by indexing around ic (e.g. in the interval [ic-6, ic+6]). This assumption is used in some pre-processing functions, such as my_min and my_sum.
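To make the assumption concrete, here is a minimal illustrative sketch (in Python rather than Kilosort's MATLAB, and not the actual my_min implementation) of a sliding minimum taken over a window of channel *indices*, which only tracks physical proximity if nearby indices mean nearby electrodes:

```python
import numpy as np

def sliding_min_over_channels(data, half_width=6):
    """Sliding minimum across the channel axis of a (channels, samples)
    array. The window [ic - half_width, ic + half_width] assumes that
    channel index ~ physical proximity (illustrative stand-in for my_min)."""
    n_channels, _ = data.shape
    out = np.empty_like(data)
    for ic in range(n_channels):
        lo = max(0, ic - half_width)
        hi = min(n_channels, ic + half_width + 1)
        # Minimum over the index window, sample by sample
        out[ic] = data[lo:hi].min(axis=0)
    return out
```

On a geometry where index neighbors are not physical neighbors, this window mixes signals from distant electrodes.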
Is this assumption also used implicitly during the optimization (in the CUDA functions), and could it affect sorting results for different array geometries? Or is proximity handled entirely by finding the closest channels for each electrode through the channel map? The relevant function appears to be getClosestChannels, and its main output is indeed passed to the CUDA functions.
In our ex-vivo retina experiments, we use square arrays (16x16) with relatively large inter-electrode distances (~100 um). Thus, the signal from a single unit is usually visible on only 1-2 electrodes around the main electrode. However, channel indexing does not correlate well with physical proximity. For example, when 129 is the main electrode, there is also signal on 113 and 145, which are far apart in terms of "linear" indices.
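Assuming row-major numbering on the 16x16 grid (the layout and pitch here are assumptions for illustration), a quick check shows why 113 and 145 sit directly above and below 129 despite the index gap of 16:

```python
import math

n_cols, pitch = 16, 100.0  # assumed 16x16 row-major grid, 100 um pitch

def position(ch):
    """(x, y) in um for a row-major channel index (hypothetical layout)."""
    return (ch % n_cols) * pitch, (ch // n_cols) * pitch

main = 129
for ch in (113, 145):
    dx = position(ch)[0] - position(main)[0]
    dy = position(ch)[1] - position(main)[1]
    print(ch, abs(ch - main), math.hypot(dx, dy))
    # index gap is 16, but physical distance is only one pitch (100.0 um)
```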
Although KS2 tends to work well, units sometimes get unreasonably split into many parts during the optimization, and I am trying to identify potential causes.
Thanks!
Some functions in the preprocessing may still use that kind of linear indexing, but the main algorithm doesn't. If something looks wrong in the preprocessing (i.e. up to and including the re-sorting of batches in time), it could be due to this.