Limit the number of clusters #67
Comments
Probably. What was the output of Kilosort2 towards the end of the optimization? Split units will have a similarity of exactly "1" in Phy. You can reduce splitting propensity by increasing ccsplit, i.e. making the separation criterion for splitting more stringent.
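For reference, a minimal sketch of how this could look in a Kilosort2 config script; it assumes the splitting threshold is exposed as ops.AUCsplit (the parameter called ccsplit above and AUCsplit in a later comment), and the value is chosen only for illustration:
% in the config script, before running the sorting pipeline
ops.AUCsplit = 0.95; % higher = more stringent separation criterion for splitting, so fewer splits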
Can I set ops.Nfilt = 5, for example, if I use 32 channels and am sure about the maximal number of clusters?
Yes, but it will still do splits and merges at the end, which might increase your number.
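As an illustrative sketch only (this thread does not confirm how Kilosort2 treats values that are not multiples of 32), the config line would simply be:
ops.Nfilt = 32; % requested upper bound on templates for a 32-channel probe; the final unit count can still grow through end-stage splits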
Hello, I am having a similar issue with KS3 when running on data from a 24-channel linear probe (Uprobe): I am ending up with >120 clusters, often >200. I have tried changing ops.Nfilt with little effect. I have also played a bit with AUCsplit (using values from 0.5 to 0.99), again with little effect on cluster numbers. Any advice would be most welcome. Many thanks.
ops.Nfilt isn't actually used as set; it is defined in preprocessDataSub.m from ops.nfilt_factor, so you would have to change ops.nfilt_factor if you want to set an upper limit.
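A paraphrased sketch of the kind of definition being referred to; the exact expression in preprocessDataSub.m may differ between Kilosort versions, so the line below is an assumption rather than a verbatim quote:
% inside preprocessDataSub.m: the template upper bound is recomputed from
% ops.nfilt_factor and the channel count, overriding any ops.Nfilt set in the config
ops.Nfilt = ops.nfilt_factor * Nchan; % e.g. nfilt_factor = 4 on a 24-channel probe gives 96 templates
Lowering ops.nfilt_factor in the config therefore lowers the effective cap on templates.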
Hi,
I find that I get a bit too many clusters and end up having to merge or discard a significant number.
Is there a way to limit how many templates are initialized and used in the optimization process? In Kilosort 1 this was done via the ops.Nfilt parameter.
In Kilosort 2 this parameter also exists under the same name:
ops.Nfilt = 1024; % max number of clusters
However, even though I set it to, e.g., 480 (32*15, so still a multiple of 32), I ended up with 732 clusters. Is that because in the splitting stage KS2 found it was better to split these clusters, even though it started with 480?
Thanks!