# fast clustering of many large sketches - kspider #2271
I will be working on updating the docs to include the latest updates of kSpider dev, and will add some tutorials on how to run it on sourmash sigs. I will update this issue when I am done.
This was referenced Feb 26, 2024. @bluegenes added a commit to sourmash-bio/sourmash_plugin_branchwater that referenced this issue on Feb 27, 2024, with the following description:
This PR adds a new command, `cluster`, that can be used to cluster the output from `pairwise` and `multisearch`. `cluster` uses `rustworkx-core` (which internally uses `petgraph`) to build a graph, adding edges between nodes when the similarity exceeds the user-defined threshold. It can work on any of the similarity columns output by `pairwise` or `multisearch`, and adds all nodes to the graph to preserve singleton 'clusters' in the output.

`cluster` outputs two files:

1. a cluster identities file: `Component_X, name1;name2;name3...`
2. a cluster size histogram: `cluster_size, count`

Context for some things I tried:

- Try using `petgraph` directly and removing the `rustworkx` dependency.
  > Nope: `rustworkx-core` adds a `connected_components` that returns the connected components themselves, rather than just the number of connected components. Could reimplement if `rustworkx-core` brings in a lot of deps.
- Try using `extend_with_edges` instead of the `add_edge` logic.
  > Nope: only in `petgraph`.

**Punted issues:**

- Develop clustering visualizations (ref @mr-eyes kSpider/dbretina work). Optionally output a dot file of the graph? (#248)
- Enable updating clusters, rather than always regenerating from scratch (#249)
- Benchmark `cluster` (#247)
  > `pairwise` files can be millions of lines long. Would it be faster to read them in parallel, store them in an `edges` vector, and then add nodes/edges sequentially? Note that we would probably need to either (1) store all edges, including those that do not pass the threshold, or (2) after building the graph from edges, add nodes from `names_to_node` that are not already in the graph to preserve singletons.

Related issues:

* #219
* sourmash-bio/sourmash#2271
* sourmash-bio/sourmash#700
* sourmash-bio/sourmash#225
* sourmash-bio/sourmash#274

---------

Co-authored-by: C. Titus Brown <titus@idyll.org>
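To make the graph-based clustering described above concrete, here is a minimal, hypothetical Rust sketch of the core logic: every name becomes a node (so singletons survive as size-1 components), edges are added only when the similarity passes the threshold, and `rustworkx-core`'s `connected_components` recovers the components themselves. The `cluster_names` function and the in-memory `(name, name, similarity)` triples are illustrative stand-ins for the CSV handling the real command performs; this is not the plugin's actual code.

```rust
use std::collections::HashMap;

use petgraph::graph::{NodeIndex, UnGraph};
use rustworkx_core::connectivity::connected_components;

/// Hypothetical sketch: cluster names from (name_a, name_b, similarity)
/// records, keeping singletons, returning one Vec<String> per component.
fn cluster_names(pairs: &[(String, String, f64)], threshold: f64) -> Vec<Vec<String>> {
    let mut graph: UnGraph<String, f64> = UnGraph::new_undirected();
    let mut names_to_node: HashMap<String, NodeIndex> = HashMap::new();

    for (a, b, similarity) in pairs {
        // Every name becomes a node, even if none of its similarities
        // pass the threshold -- this preserves singleton 'clusters'.
        let na = *names_to_node
            .entry(a.clone())
            .or_insert_with(|| graph.add_node(a.clone()));
        let nb = *names_to_node
            .entry(b.clone())
            .or_insert_with(|| graph.add_node(b.clone()));

        // Only similarities at or above the user-defined threshold
        // become edges in the graph.
        if *similarity >= threshold {
            graph.add_edge(na, nb, *similarity);
        }
    }

    // rustworkx-core returns the components themselves (as sets of
    // NodeIndex); petgraph's own connected_components only reports
    // how many components there are.
    connected_components(&graph)
        .into_iter()
        .map(|component| component.into_iter().map(|ix| graph[ix].clone()).collect())
        .collect()
}
```

This also illustrates the dependency choice discussed in the PR notes: `petgraph::algo::connected_components` returns only a count, so recovering the per-cluster member names (needed for the `Component_X, name1;name2;...` output) would otherwise require a hand-rolled union-find or DFS.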
The original issue description:

@mr-eyes has been working steadily on using kSpider (docs and repo) to cluster many large collections of k-mers, and has achieved some impressive results.

This issue exists because I wanted to link some of the kSpider work into this repo so that it is discoverable by sourmash aficionados!

@mr-eyes, if you have a tutorial or some guidance for people wanting to try out kSpider with sourmash sketches, please point to it here!