Benchmarking trajectory inference methods

This repo contains the scripts to reproduce the manuscript

A comparison of single-cell trajectory inference methods
Wouter Saelens*, Robrecht Cannoodt*, Helena Todorov, Yvan Saeys
doi:10.1038/s41587-019-0071-9

Dynverse

Under the hood, dynbenchmark makes use of most dynverse packages for running the methods, comparing them to a gold standard, and plotting the output. Check out dynverse.org for an overview!
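
As a rough, hedged illustration of how these packages fit together (this snippet is not part of the repository; the dyntoy dataset generator, the ti_comp1() wrapper and the plotting call are assumptions based on the wider dynverse stack), a typical run looks like this:

library(dyno)

# generate a small toy dataset with dyntoy (illustrative only)
dataset <- dyntoy::generate_dataset(model = "bifurcating")

# run one of the wrapped TI methods through its container
model <- infer_trajectory(dataset, ti_comp1())

# visualise the inferred trajectory
dynplot::plot_dimred(model)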

Experiments

From start to finish, the repository is divided into several experiments, each with their own scripts and results. These are accompanied by documentation in GitHub READMEs and can thus easily be explored by going to the appropriate folders:

| #  | id                       | scripts | results |
|----|--------------------------|---------|---------|
| 1  | Datasets                 | πŸ“„      | πŸ“Š      |
| 2  | Metrics                  | πŸ“„      | πŸ“Š      |
| 3  | Methods                  | πŸ“„      | πŸ“Š      |
| 4  | Method testing           | πŸ“„      | πŸ“Š      |
| 5  | Scaling                  | πŸ“„      | πŸ“Š      |
| 6  | Benchmark                | πŸ“„      | πŸ“Š      |
| 7  | Stability                | πŸ“„      | πŸ“Š      |
| 8  | Summary                  | πŸ“„      | πŸ“Š      |
| 9  | Guidelines               | πŸ“„      | πŸ“Š      |
| 10 | Benchmark interpretation | πŸ“„      | πŸ“Š      |
| 11 | Example predictions      | πŸ“„      | πŸ“Š      |
| 12 | Manuscript               | πŸ“„      | πŸ“Š      |
|    | Varia                    | πŸ“„      |         |

We also have several additional subfolders:

  • Manuscript: Source files for producing the manuscript.
  • Package: An R package with several helper functions for organizing the benchmark and rendering the manuscript.
  • Raw: Files generated by hand, such as figures and spreadsheets.
  • Derived: Intermediate data files produced by the scripts. These files are not git committed.

Guidelines

Based on the results of the benchmark, we provide context-dependent user guidelines, available as a Shiny app. This app is integrated within the dyno pipeline, which also includes the wrappers used in the benchmark and other packages for visualising and interpreting the results.
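
As a minimal sketch (assuming dynguidelines is installed as part of dyno), the app can be launched from R:

# launch the interactive guidelines app in your browser
dynguidelines::guidelines_shiny()

# if you already have a dynwrap dataset object, it can prefill some answers
# dynguidelines::guidelines_shiny(dataset)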


Datasets

The benchmarking pipeline generates (and uses) the following datasets:

  • Gold standard single-cell datasets, both real and synthetic, used to evaluate the trajectory inference methods (DOI)


  • The performance of methods used for the results overview figure and the dynguidelines app.

  • General information about trajectory inference methods, available as a data frame in dynmethods::methods
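
As a quick sketch (assuming dynmethods is installed), this data frame can be inspected directly in R:

library(dynmethods)

# columns describing each wrapped TI method
names(dynmethods::methods)

# preview the first few methods
head(dynmethods::methods)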

Methods

All methods are wrapped as both Docker and Singularity containers. These can easily be run using dynmethods.
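
A hedged sketch of what this looks like in practice (the container id below is illustrative, and a working Docker or Singularity setup is assumed):

# build a TI method definition directly from a container on Docker Hub
ti_example <- dynwrap::create_ti_method_container("dynverse/ti_slingshot")

# run it on a dataset like any other wrapped method
model <- dynwrap::infer_trajectory(dataset, ti_example())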

Installation

dynbenchmark has been tested using R version 3.5.1 on Linux. While running the methods also works on Windows and Mac (see dyno), running the benchmark itself is currently not supported on these operating systems, given that many of the commands are Linux-specific.

In R, you can install the dependencies of dynbenchmark from GitHub using:

# install.packages("devtools")
devtools::install_github("dynverse/dynbenchmark/package")

This will install several other β€œdynverse” packages. Depending on the number of R packages already installed, this installation should take approximately 5 to 30 minutes.

On Linux, you will need to install udunits and ImageMagick:

  • Debian / Ubuntu / Linux Mint: sudo apt-get install libudunits2-dev imagemagick
  • Fedora / CentOS / RHEL: sudo dnf install udunits2-devel ImageMagick-c++-devel

Docker or Singularity (version β‰₯ 3.0) has to be installed to run TI methods. We suggest Docker on Windows and macOS, while both Docker and Singularity work fine on Linux. Singularity is strongly recommended when running the methods on shared computing clusters.

For Windows 10 you can install Docker CE; older Windows installations require the Docker Toolbox.
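
If you need to choose the backend explicitly, a minimal sketch using babelwhale (the dynverse package that talks to the container engine; the cache directory below is illustrative) looks like this:

library(babelwhale)

# use Docker (the typical choice on a desktop machine)
config <- create_docker_config()
set_default_config(config)

# or use Singularity, e.g. on a shared computing cluster
# config <- create_singularity_config(cache_dir = "~/.singularity_images")
# set_default_config(config)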

You can test whether Docker is correctly installed by running:

dynwrap::test_docker_installation(detailed = TRUE)
## βœ” Docker is installed
## βœ” Docker daemon is running
## βœ” Docker is at correct version (>1.0): 1.39
## βœ” Docker is in linux mode
## βœ” Docker can pull images
## βœ” Docker can run image
## βœ” Docker can mount temporary volumes
## βœ” Docker test successful -----------------------------------------------------------------
## [1] TRUE

Same for Singularity:

dynwrap::test_singularity_installation(detailed = TRUE)
## βœ” Singularity is installed
## βœ” Singularity is at correct version (>=3.0): v3.0.0-13-g0273e90f is installed
## βœ” Singularity can pull and run a container from Dockerhub
## βœ” Singularity can mount temporary volumes
## βœ” Singularity test successful ------------------------------------------------------------
## [1] TRUE

These commands will give helpful tips if some parts of the installation are missing.
