Decentralized indexes for public genomic data

Luiz Carlos Irber Júnior, C. Titus Brown, Tim Head

  • Department of Population Health and Reproduction, University of California, Davis, USA
  • Wild Tree Tech, Switzerland

Poster presented at RECOMB 2017.

Abstract

MinHashes can be used to estimate the similarity of two or more datasets. Expanding on the work pioneered by mash and extended in our library sourmash, we calculated signatures for 412,000 microbial read datasets in the Sequence Read Archive (SRA). To search for matches to these signatures in the RefSeq microbial genomes database efficiently, we developed a new data structure for searching MinHash signatures, based on Sequence Bloom Trees and named SBTMH, and made it publicly available.
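
As an illustration of the MinHash comparison underlying these signatures, here is a minimal sketch using the sourmash Python API (method and parameter names follow current sourmash releases and may differ from the version used for the poster):

```python
# Minimal sketch: estimate the similarity of two sequences with MinHash
# sketches via the sourmash Python API (names assumed from current
# sourmash releases; not the exact code behind the poster).
import sourmash

# Bottom sketches keeping up to 1000 hashes of 31-mers.
mh1 = sourmash.MinHash(n=1000, ksize=31)
mh2 = sourmash.MinHash(n=1000, ksize=31)

mh1.add_sequence("ATGGCTAGCTAGCTAGCTAGCTAGCTAGCTAGCTAGCTAG")
mh2.add_sequence("ATGGCTAGCTAGCTAGCTAGCTAGCTAGCTAGCTAGGTAG")

# Estimated Jaccard similarity between the two datasets.
print(mh1.jaccard(mh2))
```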

We explore how to encode the SBTMH structure as objects in a MerkleDAG and store it in IPFS (the InterPlanetary File System), a decentralized system for data sharing, and how to load and distribute the SBTMH indexes as well as the calculated signatures. The SBTMH behaves like a persistent data structure: updates and new nodes can share parts of the structure of previous versions of the tree. Beyond avoiding duplicated data, this property lets common nodes be shared across trees, increasing availability and making it easier to share and remix indexes and signatures.
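
To make the structural-sharing point concrete, here is a small self-contained sketch (hypothetical, not the poster's implementation) of a content-addressed store in the spirit of an IPFS MerkleDAG: updating one leaf of an index only creates new blocks along the path to the root, and every other node is reused by both versions.

```python
# Self-contained sketch of content-addressed tree nodes: nodes are keyed
# by the hash of their content, so two versions of an index share every
# unchanged node. Not the poster's actual code.
import hashlib
import json

store = {}  # stand-in for an IPFS block store: content hash -> serialized node

def put_node(data, children=()):
    """Store a node; children are referenced by their hashes (links)."""
    obj = json.dumps({"data": data, "links": list(children)}, sort_keys=True)
    key = hashlib.sha256(obj.encode()).hexdigest()
    store[key] = obj
    return key

# Version 1 of a tiny index: two leaves under one root.
leaf_a = put_node("signatures-A")
leaf_b = put_node("signatures-B")
root_v1 = put_node("internal", [leaf_a, leaf_b])

# Version 2 replaces only leaf B; leaf A's block is reused unchanged.
leaf_b2 = put_node("signatures-B-updated")
root_v2 = put_node("internal", [leaf_a, leaf_b2])

print(len(store))  # 5 blocks in total: leaf A is shared by both versions
```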

This design can also change how databases and archives (like the SRA) are offered and implemented, since users can collaborate by choosing to share subsets of the archive, spreading the network bandwidth across participants. More importantly, it avoids a central point of failure while still allowing curation and quality assurance of the data. We present a prototype showing how this can be achieved.
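
As a hypothetical example of how a collaborator might mirror part of such an archive, the sketch below adds a signature file to a local IPFS node and pins it so it stays available from that node. It assumes the ipfshttpclient Python package and a running IPFS daemon; the file name is a placeholder and is not part of this project.

```python
# Hypothetical sketch: help host a subset of the archive by adding and
# pinning one signature file on a local IPFS node.
# Assumes the ipfshttpclient package and a running IPFS daemon.
import ipfshttpclient

with ipfshttpclient.connect() as client:
    # "example.sig" is a placeholder for a signature file from the archive.
    res = client.add("example.sig")
    client.pin.add(res["Hash"])  # keep this block available from this node
    print("pinned", res["Hash"])
```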

Table of Contents

References