An open-source toolkit for large-scale genomic analyses
Explore the docs »

Issues · Mailing list · Slack

Glow is an open-source toolkit to enable bioinformatics at biobank-scale and beyond.

Easy to get started

The toolkit includes the building blocks that you need to perform the most common analyses right away:

  • Load VCF, BGEN, and Plink files into distributed DataFrames
  • Perform quality control and data manipulation with built-in functions
  • Perform variant normalization and liftOver
  • Perform genome-wide association studies
  • Integrate with Spark ML libraries for population stratification
  • Parallelize command line tools to scale existing workflows

Built to scale

Glow makes genomic data work with Spark, the leading engine for working with large structured datasets. It fits natively into the ecosystem of tools that have enabled thousands of organizations to scale their workflows to petabytes of data. Glow bridges the gap between bioinformatics and the Spark ecosystem.

Flexible

Glow works with datasets in common file formats like VCF, BGEN, and Plink as well as high-performance big data standards. You can write queries using the native Spark SQL APIs in Python, SQL, R, Java, and Scala. The same APIs allow you to bring your genomic data together with other datasets such as electronic health records, real world evidence, and medical images. Glow makes it easy to parallelize existing tools and libraries implemented as command line tools or Pandas functions.


Building and Testing

This project is built using sbt: https://www.scala-sbt.org/1.0/docs/Setup.html

Start an sbt shell using the sbt command.

To compile the main code:

compile

To run all tests:

test

To test a specific suite:

testOnly *VCFDataSourceSuite

To run Python tests, you must install conda and activate the environment in python/environment.yml.

conda env create -f python/environment.yml
conda activate glow

You can then run tests from sbt:

python/test

These tests will run with the same Spark classpath as the Scala tests.

If you use IntelliJ, you'll want to set up scalafmt on save.

To run test or testOnly in remote debug mode with IntelliJ IDEA, set the remote debug configuration in IntelliJ to 'Attach to remote JVM' mode with a specific port number (the default port 5005 is used here), and then modify the definition of options in the groupByHash function in build.sbt to:

val options = ForkOptions().withRunJVMOptions(Vector("-Xmx1024m", "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"))