Provides a simple genome browser that pulls data from the API in Python.

This Python client demonstrates a simple web-based genome browser that fetches data from the Google Genomics API, the NCBI Genomics API, or the Local Readstore, and displays a pileup of reads with support for zooming and basic navigation and search.
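A pileup fetch of this kind boils down to a "reads search" request over one genomic region. The sketch below builds such a request body; the field names follow the Google Genomics reads.search method of this era and are illustrative, not taken from this repository's code:

```python
import json

# Hypothetical helper: build the JSON body for a reads search over one
# region, as a pileup view would issue when the user pans or zooms.
def build_reads_search(read_group_set_id, reference_name, start, end):
    """Return the JSON request body for one region of reads."""
    return json.dumps({
        'readGroupSetIds': [read_group_set_id],
        'referenceName': reference_name,
        'start': start,  # 0-based start of the region
        'end': end,      # exclusive end of the region
        'pageSize': 256, # page through large pileups
    })

body = build_reads_search('YOUR_READ_GROUP_SET_ID', 'chr17', 41196311, 41277500)
```

Zooming and navigation then reduce to re-issuing this request with new start/end coordinates.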

You can try out the sample genome browser, called GABrowse, which is hosted online.

The code in this repository can be run with Google App Engine, or locally on your own laptop or workstation under a development server before deploying to the internet.

It can also be run locally without App Engine using the Python paste web application framework.

Set up a Google Cloud project

If you will run this application under App Engine (local or remote) or you will access data in Google Genomics, you must set up a Google Cloud Platform project.

  1. Follow instructions here to create a new project
  2. Follow instructions here to enable the Genomics API
  3. Follow instructions here to find your Cloud "project ID"

You will need your project ID if you deploy to App Engine.

  1. Follow instructions here to install and authorize the Cloud SDK

The web application uses Application Default Credentials to authorize requests to the Google Genomics API.

  • When running the web application locally, it will use your Cloud SDK user credentials.
  • When running on App Engine, it will use the Cloud Project's App Engine Service Account.
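That lookup order can be sketched as follows (a simplified, hypothetical model of what the Application Default Credentials machinery does; the real oauth2client logic handles more cases, such as the GCE metadata server):

```python
def adc_source(environ):
    """Return which credential source Application Default Credentials
    would try first, given a process environment (simplified sketch)."""
    # 1. An explicit service-account key file always wins.
    if environ.get('GOOGLE_APPLICATION_CREDENTIALS'):
        return 'service account key file'
    # 2. On App Engine, the runtime exposes the app's service account.
    if environ.get('SERVER_SOFTWARE', '').startswith('Google App Engine'):
        return 'App Engine service account'
    # 3. Otherwise fall back to the Cloud SDK user credentials.
    return 'gcloud user credentials'
```

In other words, the same application code works in both environments; only the credential source changes.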

Running on App Engine

Google App Engine provides an application framework for internet-based web applications.

Running on the App Engine Development Server

To run the application on the development server, you will:

  1. Download the App Engine SDK
  2. Install Google's OAuth client libraries
  3. Launch the development server
  4. Open the application URL in your browser

1. Download the App Engine SDK

Read about and follow the instructions for downloading and installing the Google App Engine SDK for Python

2. Install Google's OAuth client libraries

The App Engine environment allows pure Python libraries to be used at runtime. Documentation can be found here.

For this application execute the following in the root of your local copy:

mkdir lib
pip install -t lib --upgrade oauth2client

This will install the oauth2client and all of its dependencies (including httplib2).
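On the first-generation App Engine Python runtime, a vendored lib directory like this is typically wired up with an appengine_config.py at the project root. This is the standard App Engine convention, shown here as a sketch in case your checkout does not already include it:

```python
# appengine_config.py -- runs at instance startup; adds ./lib to
# sys.path so vendored packages such as oauth2client import normally.
from google.appengine.ext import vendor

vendor.add('lib')
```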

3. Launch the development server

On Mac OS X you can set up and run the application through the GoogleAppEngineLauncher UI. To use the command line, or to run on Linux:

dev_appserver.py .

To run on Windows:

python c:\path\to\dev_appserver.py .

4. Open the application URL in your browser

Once running, visit http://localhost:8080 in your browser to browse data from the API.

Running on the App Engine Production Server

To deploy this application to App Engine, execute the following command:

appcfg.py -A YOUR_PROJECT_ID -V v1 update .

Replace YOUR_PROJECT_ID with the project ID of your Google Cloud project.

Once running, visit http://YOUR_PROJECT_ID.appspot.com in your browser to browse data from the API.

Running with paste and webapp2

You can also run the server locally using the Python paste web server framework.

It is highly recommended that you install Python libraries in a virtualenv. This allows you to contain your installation and dependent libraries in one place.

The instructions here explicitly use a Python virtualenv and have only been tested in this environment.

1. Install pip

If you do not already have pip installed, you can find instructions here.

2. Install virtualenv

If you have not installed virtualenv, then do so with:

[sudo] pip install virtualenv

3. Create a virtualenv

Create a virtualenv called localserver_libs:

virtualenv localserver_libs

4. Activate the virtualenv

source localserver_libs/bin/activate

5. Install dependent libraries

Install the required dependencies:

pip install WebOb Paste webapp2 jinja2
pip install urllib3[secure] httplib2shim
pip install --upgrade oauth2client

6. Run the file


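The file being run boils down to a WSGI application handed to Paste's HTTP server. A minimal sketch of that shape (the handler name and responses are hypothetical, and the Paste call is commented out so the snippet stands alone):

```python
def browser_app(environ, start_response):
    """Tiny WSGI app of the shape webapp2 builds for the browser."""
    if environ.get('PATH_INFO', '/') == '/':
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'genome browser placeholder page']
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'not found']

# With the dependencies above installed, Paste serves it like so:
# from paste import httpserver
# httpserver.serve(browser_app, host='127.0.0.1', port='8080')
```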

  • The Unable to bind message means that one of the default App Engine ports is unavailable. The default ports are 8080 and 8000. You can try different ports with these flags:
python dev_appserver.py --port 12080 --admin_port=12000 .

Your server will then be available at localhost:12080.

  • Problem with a non-Chrome browser?

Please file an issue. jQuery and d3 get us a lot of browser portability for free, but testing on all configurations is tricky, so just let us know if there are issues!

Code layout

  • The Python server code queries the Genomics API. It also serves up the HTML pages.
  • main.html is the main HTML page. It provides the basic page layout, but most of the display logic is handled in JavaScript.
  • A JavaScript utility file provides some JS utility functions, and calls into readgraph.js.
  • readgraph.js handles the visualization of reads. It contains the most complex code and uses d3.js to display actual Read data.

The Python client also depends on several external libraries:

  • d3.js is a javascript library used to make rich visualizations.
  • An additional javascript library provides a variety of utilities.
  • Bootstrap supplies a great set of default css, icons, and js helpers.

In main.html, jQuery is also loaded from an external site.

Project status

The goals of this project are to:

  • Provide an easily deployable demo that demonstrates what Genomics API interop can achieve for the community.
  • Provide an example of how to use the Genomics APIs to build a non-trivial Python application.

Current status

This code is intended to be under active development, but has few contributions coming in at the moment.

Currently, it provides a basic genome browser that can consume genomic data from any API provider. It deploys on App Engine (to meet the 'easily deployable' goal), and has a layman-friendly UI.

Awesome possible features include:

  • Add more information to the read display (show inserts, highlight mismatches against the reference, etc)
  • Possibly cleaning up the js code to be more plugin friendly - so that pieces could be shared and reused (d3 library? jquery plugin?)
  • Staying up to date on API changes (readset searching now has pagination, etc)
  • Better searching of Snpedia (or another provider - EBI?)
  • Other enhancement ideas are very welcome
  • (for smaller/additional tasks see the GitHub issues)