High-throughput gene to knowledge mapping through massive integration of public sequencing data.

Click here for a quick start: go from data slicing to publication figures in under 2 minutes

Table of Contents

(Links are clickable if you open the README.ipynb in Jupyter Notebook)

Feel free to contact me at: btsui@eng.ucsd.edu (I will try to reply within 3 days)

Summary

Skymap is a standalone database that aims to offer:

  1. a single data matrix per omic layer per species, spanning a total of >400k sequencing runs from all public studies, produced by reprocessing petabytes' worth of sequencing data. (Figure: amount of data reprocessed from the SRA.)
  2. a biological metadata file that describes the relationships between the sequencing runs, along with keywords extracted from over 3 million free-text annotations using NLP.
  3. a technical metadata file that describes the relationships between the sequencing runs.

Solution: three tables relate >100k experiments. For example, all the variant data and its data columns can be interpreted like this: (Figure: example of relating the three tables.)

All of it fits on your personal computer.

Click here to check out the quick start page and start playing around with the data

Quick installation (10 mins)

  1. Install Miniconda/Anaconda with Python version >=3.4 (it won't work with Python 2)

  2. Copy and paste the following line into a Unix terminal and run it:

    • conda create --yes -n skymap jupyter python=3.6 pandas=0.23.4 && source activate skymap && jupyter-notebook
  3. Click me to download the example notebooks

  4. Choose one of the following notebooks to run. The code will automatically update your pandas version and create a new conda environment if necessary.

    • loadVariantDataBySRRID.ipynb: requires 1GB of disk space and 5GB of RAM.
    • loadingRNAseqByGene.ipynb: requires 20GB of disk space and 1GB of RAM.
  5. Click "Run All" to execute all the cells. The notebook will download the example data, install the dependencies and run the data query example.

Check here for more info on executing Jupyter notebooks

Troubleshooting:

  • If you run into errors from packages, try the versions I used: python v3.6.5, pandas v0.23.4, synapse client v1.8.1.
  • If the Sage Synapse download fails, download the corresponding python pandas pickle using the web interface instead (https://www.synapse.org/#!Synapse:syn11415602/files/) and read in the pickle using pandas.read_pickle.
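If you do fall back to a manual download, reading the file is a one-liner. A minimal sketch, using a locally created stand-in file so it runs end to end; the filename and columns here are hypothetical, not the actual Skymap tables:

```python
import pandas as pd

# Hypothetical stand-in for a table downloaded manually from the
# Synapse web interface; the real files are *.pickle.gz as well.
path = "example_table.pickle.gz"

# Create a tiny table so this sketch is self-contained; with a real
# download you would skip straight to read_pickle.
pd.DataFrame({"Run": ["SRR000001", "SRR000002"],
              "ReadDepth": [10, 25]}).to_pickle(path)

# read_pickle infers gzip compression from the .gz extension.
df = pd.read_pickle(path)
print(df.shape)  # (2, 2)
```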

Data directory and loading examples

I tried to keep data loading as simple as possible. The Jupyter notebooks each have <10 lines of Python code and depend only on pandas. The memory requirements are all less than 5GB.

-omic data

| Title | Data URL | Jupyter-notebook loading examples | Format | Uses |
| --- | --- | --- | --- | --- |
| Loading allelic read counts by SRR (SRA sequencing run) ID | https://www.synapse.org/#!Synapse:syn15624400 | click me to view | python pandas pickle dataframe | Variant, CNV detection |
| Expression matrices | https://www.synapse.org/#!Synapse:syn11415787 | click me to view | numpy array | Expression level quantification |
| Read coverage | availability depending upon demand | - | - | ChIP Peak detection |
| Microbe quantification | availability depending upon demand | - | - | Microbiome community detection |
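To give a feel for the expression-matrix format, here is a hedged sketch of slicing one gene out of a numpy expression array; the array, gene names and run IDs below are invented for illustration, not the real Synapse download:

```python
import numpy as np
import pandas as pd

# Made-up stand-ins: rows are genes, columns are sequencing runs.
genes = pd.Index(["Trp53", "Gapdh", "Actb"])
runs = pd.Index(["SRR100001", "SRR100002", "SRR100003", "SRR100004"])
expression = np.arange(12, dtype=float).reshape(3, 4)

# Wrap the raw numpy array in a DataFrame so slicing by gene name is easy.
df = pd.DataFrame(expression, index=genes, columns=runs)

# Pull one gene's expression across all runs.
tp53 = df.loc["Trp53"]
print(tp53.idxmax())  # SRR100004, the run with the highest Trp53 value
```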

Metadata

All the metadata files are located in the Sage Synapse folder: https://www.synapse.org/#!Synapse:syn15661258

| Title | File name | Jupyter-notebook loading examples | Format |
| --- | --- | --- | --- |
| biospecimen annotations | allSRS.pickle.gz | click me to view | python pandas pickle dataframe |
| experimental annotations | allSRX.pickle.gz | click me to view | python pandas pickle dataframe |
| biospecimen, experimental and sequencing run mappings; sequencing and QC stats | sra_dump.fastqc.bowtie_algn.pickle | click me to view | python pandas pickle dataframe |
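The three metadata tables are meant to be joined. A minimal sketch of linking biospecimen annotations to sequencing runs with a pandas merge; the column names (SRS, SRR) and values are assumptions for illustration, not necessarily the exact columns in allSRS.pickle.gz or sra_dump.fastqc.bowtie_algn.pickle:

```python
import pandas as pd

# Toy biospecimen annotations, keyed by sample (SRS) ID.
srs = pd.DataFrame({"SRS": ["SRS001", "SRS002"],
                    "attribute": ["tissue: liver", "tissue: brain"]})

# Toy run-level table mapping sequencing runs (SRR) to samples.
runs = pd.DataFrame({"SRR": ["SRR01", "SRR02", "SRR03"],
                     "SRS": ["SRS001", "SRS001", "SRS002"]})

# One merge gives per-run biological annotations.
annotated = runs.merge(srs, on="SRS", how="left")
print(annotated[["SRR", "attribute"]])
```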

Auxiliary

| Title | File name |
| --- | --- |
| Distribution of data processed over time | checkProgress.ipynb |
| Generate RNAseq references | generateReferences.ipynb |

Example jupyter notebook analysis using reprocessed data

Locating variant and correlating with RNAseq and metadata

This is probably the best example to give you an idea of how to go from data slicing in Skymap to basic data analysis.

jupyter notebook link

High resolution mouse developmental hierarchy map

jupyter notebook link

Aggregating many studies (nodes) forms a smooth mouse developmental hierarchy map. By integrating the vast amount of public data, we can cover many developmental time points and sometimes capture more transient expression dynamics, both across tissues and within tissues, over the developmental time course.

Each component represents a tissue. Each node represents a particular study at a particular time unit. The color is based on the developmental time extracted from the experimental annotations using regex. The node size represents the number of sequencing runs at that particular time point and study. Each edge represents a differentiate-to or part-of relationship. (Figure: mouse developmental hierarchy map.) And you can easily overlay gene expression levels on top of it. As an example, Tp53 expression is known to be tightly regulated in development. Let's look at the dynamics of Tp53 expression over time and spatial location in the following plot. (Figure: Tp53 expression overlay.)

Simple RNAseq data slicing and hypothesis testing

jupyter notebook link

Methods

Slides

Google Docs and Slides with links pointing to Jupyter notebooks: the numbers in the notebooks will differ from the manuscript, as more data are incorporated every day. The hope is that they help you understand each number and figure in the manuscript.

| Title | Manuscript URL | Figures URL |
| --- | --- | --- |
| Extracting allelic read counts from 250,000 human sequencing runs in Sequence Read Archive | https://docs.google.com/document/d/1BGGQOpWczOwan9STqs-J9zpa8A-aj4aJ1RND_qKzRFs | https://docs.google.com/presentation/d/1dERUDHh2ab8UdPaHa-ki-8RMae6yi2eYJQM4b7ArVog |
| Meta-analysis using NLP (Metamap) and reprocessed RNAseq data | - | https://docs.google.com/presentation/d/14vLJJQ6ziw-2aLDoQAJGyv1sYo5ENzljsqsbZr9jNLM |

Unpublished but ongoing manuscripts

| Title | Google Doc |
| --- | --- |
| Meta-analysis using NLP (Metamap) and reprocessed RNAseq data | https://docs.google.com/document/d/1_nES7vroX7lCwf5NSNBVZ1k2iubYm5wLeFqusq5aZuk |

Pipeline

I organized the code to be as simple as possible. Each pipeline has 6 scripts, each <500 lines, to ensure readability. Run each pipeline starting with calcuate_uprocessed.py, which calculates the number of files that still require processing.

If you happen to want to make a copy of the pipeline:

  • make a copy of the pipeline by cloning this github repo,

  • conda env create -n environment_conda_py26_btsui --force -f ./conda_envs/environment_conda_py26_btsui.yml

  • conda env create -n environment_conda_py36_btsui --force -f ./conda_envs/environment_conda_py36_btsui.yml

  • For Python 2 code, source activate environment_conda_py26_btsui before running

  • For Python 3 code, source activate environment_conda_py36_btsui before running

Replace my directory (/cellar/users/btsui/Project/METAMAP/code/metamap/) with your own directory if you want to run it.

Internal: log in to an nrnb-node to run the following notebooks.

Here are the scripts:

Coverage

Code
calculating reads coverage

Metadata layout (auxiliary)

| Column | Meaning |
| :---: | :--- |
| new_ScientificName | the string the pipeline uses to match the species with the reference genome |
| ScientificName | original scientific name extracted from NCBI SRS |
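As a hedged illustration of how those two columns are used, here is a toy filter that selects runs whose cleaned species string matches a supported reference genome; the frame and the genome list are invented for this sketch:

```python
import pandas as pd

# Toy metadata frame with the two columns described above.
meta = pd.DataFrame({
    "ScientificName": ["Homo sapiens 1", "Mus musculus x", "Homo sapiens"],
    "new_ScientificName": ["Homo sapiens", "Mus musculus", "Homo sapiens"],
})

# Reference genomes the (hypothetical) pipeline supports.
supported = {"Homo sapiens"}

# Match on the cleaned name, not the raw NCBI SRS string.
human_runs = meta[meta["new_ScientificName"].isin(supported)]
print(len(human_runs))  # 2
```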

Acknowledgement

Please consider citing if you are using Skymap. (https://www.biorxiv.org/content/early/2018/08/07/386441)

We want to thank Dr. Hannah Carter (my PI), Dr. Jill Mesirov, Dr. Trey Ideker and Shamin Mollah for their advice and resources. We also want to thank Dr. Ruben Arbagayen and Dr. Nate Lewis for their suggestions. The method will soon be posted on bioRxiv. Also, we want to thank Sage Bionetworks for hosting the data, and NCBI for holding all the published raw reads in the Sequence Read Archive.

There are also many people who helped test Skymap: Ben Kellman, Rachel Marty, Daniel Carlin, Spiko van Dam.

Grant money that make this work possible: NIH DP5OD017937,GM103504

Terms of use: Use Skymap however you want. Just don't sue me, I have no money.

For why I named it Skymap, I forgot.

Data format and coding style

The storage is in python pandas pickle format. Therefore, the only packages you need to load the data are numpy and pandas, the backbone of data analysis in python. We keep the process of data loading as lean as possible: less code means fewer bugs and fewer errors. For now, Skymap is geared towards ML/data science folks who are hungry for the vast amount of data and ain't afraid of coding. I will port the data to native HDF5 format to reduce platform dependency once I get a chance.

I tried to keep the code and parameters to be lean and self-explanatory for your reference.

References

ISMB 2018 poster: https://github.com/brianyiktaktsui/Skymap/blob/master/ISMB_poster_Skymap.pdf

Preprint on allelic read counts: https://www.synapse.org/#!Synapse:syn11415602/files/

Data: https://www.synapse.org/#!Synapse:syn11415602/files/

Manuscripts in biorxiv related to this project

| Title | URL to manuscript | GitHub |
| --- | --- | --- |
| Extracting allelic read counts from 250,000 human sequencing runs in Sequence Read Archive | https://www.biorxiv.org/content/biorxiv/early/2018/08/07/386441.full.pdf | - |
| Deep biomedical named entity recognition NLP engine | https://www.biorxiv.org/content/early/2018/09/12/414136 | https://github.com/brianyiktaktsui/DEEP_NLP |