
Merge pull request #101 from nf-core/dev
v1.1 release merge
ewels committed Oct 5, 2018
2 parents 44f1525 + 6206c9d commit 1cd5ab7
Showing 18 changed files with 94 additions and 148 deletions.
4 changes: 2 additions & 2 deletions .travis.yml
@@ -13,7 +13,7 @@ before_install:
# Pull the docker image first so the test doesn't wait for this
- docker pull nfcore/rnaseq
# Fake the tag locally so that the pipeline runs properly
-  - docker tag nfcore/rnaseq nfcore/rnaseq:1.0
+  - docker tag nfcore/rnaseq nfcore/rnaseq:1.1

install:
# Install Nextflow
@@ -26,7 +26,7 @@ install:
- mkdir ${TRAVIS_BUILD_DIR}/tests && cd ${TRAVIS_BUILD_DIR}/tests

env:
-  - NXF_VER='0.31.1' # Specify a minimum NF version that should be tested and work
+  - NXF_VER='0.32.0' # Specify a minimum NF version that should be tested and work
- NXF_VER='' # Plus: get the latest NF version and check that it works

script:
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,20 @@
# nf-core/rnaseq

## [Version 1.1](https://github.com/nf-core/rnaseq/releases/tag/1.1) - 2018-10-05

#### Pipeline updates
* Wrote docs and made minor tweaks to the `--skip_qc` and associated options
* Removed the deprecated `uppmax-modules` config profile
* Updated the `hebbe` config profile to use the new `withName` syntax too
* Use new `workflow.manifest` variables in the pipeline script
* Updated minimum nextflow version to `0.32.0`

#### Bug Fixes
* [#77](https://github.com/nf-core/rnaseq/issues/77): Added back `executor = 'local'` for the `workflow_summary_mqc`
* [#95](https://github.com/nf-core/rnaseq/issues/95): Check if task.memory is false instead of null
* [#97](https://github.com/nf-core/rnaseq/issues/97): Resolved an edge case where numeric sample IDs were parsed as numbers, causing some samples to be incorrectly overwritten


## [Version 1.0](https://github.com/nf-core/rnaseq/releases/tag/1.0) - 2018-08-20

This release marks the point where the pipeline was moved from [SciLifeLab/NGI-RNAseq](https://github.com/SciLifeLab/NGI-RNAseq)
2 changes: 1 addition & 1 deletion Dockerfile
@@ -5,4 +5,4 @@ LABEL authors="phil.ewels@scilifelab.se" \

COPY environment.yml /
RUN conda env create -f /environment.yml && conda clean -a
-ENV PATH /opt/conda/envs/nf-core-rnaseq-1.0/bin:$PATH
+ENV PATH /opt/conda/envs/nf-core-rnaseq-1.1/bin:$PATH
3 changes: 2 additions & 1 deletion README.md
@@ -1,7 +1,8 @@
# ![nfcore/rnaseq](docs/images/nfcore-rnaseq_logo.png)

[![Build Status](https://travis-ci.org/nf-core/rnaseq.svg?branch=master)](https://travis-ci.org/nf-core/rnaseq)
-[![Nextflow](https://img.shields.io/badge/nextflow-%E2%89%A50.31.1-brightgreen.svg)](https://www.nextflow.io/)
+[![Nextflow](https://img.shields.io/badge/nextflow-%E2%89%A50.32.0-brightgreen.svg)](https://www.nextflow.io/)
[![DOI](https://zenodo.org/badge/127293091.svg)](https://zenodo.org/badge/latestdoi/127293091)
[![Gitter](https://img.shields.io/badge/gitter-%20join%20chat%20%E2%86%92-4fb99a.svg)](https://gitter.im/nf-core/Lobby)

[![install with bioconda](https://img.shields.io/badge/install%20with-bioconda-brightgreen.svg)](http://bioconda.github.io/)
4 changes: 2 additions & 2 deletions Singularity
@@ -4,10 +4,10 @@ Bootstrap:docker
%labels
MAINTAINER Phil Ewels <phil.ewels@scilifelab.se>
DESCRIPTION Singularity image containing all requirements for the nf-core/rnaseq pipeline
-VERSION 1.0
+VERSION 1.1

%environment
-PATH=/opt/conda/envs/nf-core-rnaseq-1.0/bin:$PATH
+PATH=/opt/conda/envs/nf-core-rnaseq-1.1/bin:$PATH
export PATH

%files
3 changes: 1 addition & 2 deletions bin/mqc_features_stat.py
@@ -53,7 +53,7 @@ def mqc_feature_stat(bfile, features, outfile, sname=None):
return

# Prepare the output strings
-    out_head, out_value, out_mqc = ("Sample", sname, mqc_main)
+    out_head, out_value, out_mqc = ("Sample", "'{}'".format(sname), mqc_main)
for ft, pt in fpercent.items():
out_head = "{}\tpercent_{}".format(out_head, ft)
out_value = "{}\t{}".format(out_value, pt)
@@ -72,4 +72,3 @@ def mqc_feature_stat(bfile, features, outfile, sname=None):
parser.add_argument("-o", "--output", dest='output', default='biocount_percent.tsv', type=str, help="Sample Name")
args = parser.parse_args()
mqc_feature_stat(args.biocount, args.features, args.output, args.sample)
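The quoting change above is the fix for issue #97: a sample name like `0123` written unquoted into the stats table can be coerced to a number downstream, colliding with (and overwriting) other samples. A minimal standalone sketch of the idea — `format_sample_row` is a hypothetical helper, not the pipeline's actual function:

```python
# Hypothetical illustration of the nf-core/rnaseq#97 fix, not the script itself:
# quote the sample name so purely numeric IDs stay strings downstream.

def format_sample_row(sname, fpercent):
    """Build tab-separated header and value rows, quoting the sample name
    so an ID like "0123" keeps its leading zero and string identity."""
    out_head, out_value = "Sample", "'{}'".format(sname)
    for ft, pt in fpercent.items():
        out_head = "{}\tpercent_{}".format(out_head, ft)
        out_value = "{}\t{}".format(out_value, pt)
    return out_head, out_value

head, value = format_sample_row("0123", {"rRNA": 1.5})
# value begins with '0123' in quotes; int("0123") would have collapsed it to 123
```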

1 change: 1 addition & 0 deletions conf/base.config
@@ -92,6 +92,7 @@ process {
withName:workflow_summary_mqc {
memory = { check_max( 2.GB, 'memory' ) }
cache = false
executor = 'local'
errorStrategy = 'ignore'
}
}
8 changes: 4 additions & 4 deletions conf/binac.config
@@ -6,13 +6,13 @@
*/

singularity {
enabled = true
enabled = true
}

process {
beforeScript = 'module load devel/singularity/2.4.1'
executor = 'pbs'
queue = 'short'
beforeScript = 'module load devel/singularity/2.4.1'
executor = 'pbs'
queue = 'short'
}

params {
6 changes: 3 additions & 3 deletions conf/cfc.config
@@ -6,16 +6,16 @@
*/

singularity {
enabled = true
enabled = true
}

/*
*To be improved by process-specific configuration as soon as possible, once our CFC cluster has the extra options removed; until then, task.attempt is not supported by Nextflow there.
*/

process {
beforeScript = 'module load qbic/singularity_slurm/2.5.2'
executor = 'slurm'
beforeScript = 'module load qbic/singularity_slurm/2.5.2'
executor = 'slurm'
}

params {
20 changes: 1 addition & 19 deletions conf/hebbe.config
@@ -15,25 +15,7 @@ process {
clusterOptions = { "-A $params.project ${params.clusterOptions ?: ''}" }

/* The Hebbe scheduler fails if you try to request an amount of memory for a job */
-memory = null
-$makeSTARindex.memory = null
-$makeHisatSplicesites.memory = null
-$makeHISATindex.memory = null
-$fastqc.memory = null
-$trim_galore.memory = null
-$star.memory = null
-$hisat2Align.memory = null
-$hisat2_sortOutput.memory = null
-$rseqc.memory = null
-$genebody_coverage.memory = null
-$preseq.memory = null
-$markDuplicates.memory = null
-$dupradar.memory = null
-$featureCounts.memory = null
-$merge_featureCounts.memory = null
-$stringtieFPKM.memory = null
-$sample_correlation.memory = null
-$multiqc.memory = null
+withName: '*' { memory = null }
}

params {
63 changes: 0 additions & 63 deletions conf/uppmax-modules.config

This file was deleted.

8 changes: 4 additions & 4 deletions docs/configuration/adding_your_own.md
@@ -51,7 +51,7 @@ nextflow run nf-core/rnaseq -profile docker --reads '<path to your reads>' --fas

Nextflow will recognise `nf-core/rnaseq` and download the pipeline from GitHub. The `-profile docker` configuration uses the [nfcore/rnaseq](https://hub.docker.com/r/nfcore/rnaseq/) image that we have created and host on Docker Hub, and this is downloaded automatically.

-The public docker images are tagged with the same version numbers as the code, which you can use to ensure reproducibility. When running the pipeline, specify the pipeline version with `-r`, for example `-r v1.4`. This uses pipeline code and docker image from this tagged version.
+The public docker images are tagged with the same version numbers as the code, which you can use to ensure reproducibility. When running the pipeline, specify the pipeline version with `-r`, for example `-r 1.0`. This uses pipeline code and docker image from this tagged version.

To add docker support to your own config file (instead of using the `docker` profile, which runs locally), add the following:

@@ -91,17 +91,17 @@ If you intend to run the pipeline offline, nextflow will not be able to automati
First, pull the image file where you have an internet connection:

> NB: The "tag" at the end of this command corresponds to the pipeline version.
-> Here, we're pulling the docker image for version 1.4 of the nfcore/rnaseq pipeline
+> Here, we're pulling the docker image for version 1.0 of the nfcore/rnaseq pipeline
> Make sure that this tag corresponds to the version of the pipeline that you're using
```bash
-singularity pull --name nfcore-rnaseq-1.4.img docker://nfcore/rnaseq:1.4
+singularity pull --name nfcore-rnaseq-1.0.img docker://nfcore/rnaseq:1.0
```

Then transfer this file and run the pipeline with this path:

```bash
-nextflow run /path/to/nfcore-rnaseq -with-singularity /path/to/nfcore-rnaseq-1.4.img
+nextflow run /path/to/nfcore-rnaseq -with-singularity /path/to/nfcore-rnaseq-1.0.img
```

### Bioconda
8 changes: 4 additions & 4 deletions docs/configuration/local.md
@@ -19,7 +19,7 @@ Nextflow will recognise `nf-core/rnaseq` and download the pipeline from GitHub.
For more information about how to work with reference genomes, see [`docs/configuration/reference_genomes.md`](docs/configuration/reference_genomes.md).

### Pipeline versions
-The public docker images are tagged with the same version numbers as the code, which you can use to ensure reproducibility. When running the pipeline, specify the pipeline version with `-r`, for example `-r v1.4`. This uses pipeline code and docker image from this tagged version.
+The public docker images are tagged with the same version numbers as the code, which you can use to ensure reproducibility. When running the pipeline, specify the pipeline version with `-r`, for example `-r 1.0`. This uses pipeline code and docker image from this tagged version.


## Singularity image
@@ -32,15 +32,15 @@ If you intend to run the pipeline offline, nextflow will not be able to automati
First, pull the image file where you have an internet connection:

> NB: The "tag" at the end of this command corresponds to the pipeline version.
-> Here, we're pulling the docker image for version 1.4 of the nfcore/rnaseq pipeline
+> Here, we're pulling the docker image for version 1.0 of the nfcore/rnaseq pipeline
> Make sure that this tag corresponds to the version of the pipeline that you're using
```bash
-singularity pull --name nfcore-rnaseq-1.4.img docker://nfcore/rnaseq:1.4
+singularity pull --name nfcore-rnaseq-1.0.img docker://nfcore/rnaseq:1.0
```

Then transfer this file and run the pipeline with this path:

```bash
-nextflow run /path/to/nfcore-rnaseq -with-singularity /path/to/nfcore-rnaseq-1.4.img
+nextflow run /path/to/nfcore-rnaseq -with-singularity /path/to/nfcore-rnaseq-1.0.img
```
14 changes: 7 additions & 7 deletions docs/configuration/uppmax.md
@@ -20,11 +20,11 @@ First, to generate the singularity image, run the following command. Note that y
First, pull the image file where you have an internet connection:

> NB: The "tag" at the end of this command corresponds to the pipeline version.
-> Here, we're pulling the docker image for version 1.4 of the nfcore/rnaseq pipeline
+> Here, we're pulling the docker image for version 1.0 of the nfcore/rnaseq pipeline
> Make sure that this tag corresponds to the version of the pipeline that you're using
```bash
-singularity pull --name nfcore-rnaseq-1.4.img docker://nfcore/rnaseq:1.4
+singularity pull --name nfcore-rnaseq-1.0.img docker://nfcore/rnaseq:1.0
pwd # Prints path to your singularity container
```

@@ -35,9 +35,9 @@ or `.tar.gz` file). Once transferred, extract the pipeline files.
For example, with a `.zip` file:

```bash
-unzip 1.4.zip
-mv rnaseq-1.4 nfcore-rnaseq # rename the folder
-cd nfcore-rnaseq
+unzip 1.0.zip
+mv nfcore-rnaseq-1.0 nfcore-rnaseq # rename the folder
+cd nfcore-rnaseq-1.0
pwd # Prints full path to your pipeline
```

@@ -46,12 +46,12 @@ and execute Nextflow with the path to the pipeline, as so:

```bash
cd /path/to/my/data/analysis
-nextflow run /path/to/nfcore-rnaseq -with-singularity /path/to/singularity/nfcore-rnaseq-1.4.img
+nextflow run /path/to/nfcore-rnaseq-1.0 -with-singularity /path/to/singularity/nfcore-rnaseq-1.0.img
```

(Note that you'll need the other common flags such as `--reads` and `--genome` in addition to this command).

-> NB: Note that you should _not_ use the `-r 1.4` flag recommended elsewhere. This tells Nextflow to download
+> NB: Note that you should _not_ use the `-r 1.0` flag recommended elsewhere. This tells Nextflow to download
> that version of the code when it runs. Here, you have already downloaded the code, so it generates an error.

23 changes: 17 additions & 6 deletions docs/usage.md
@@ -36,7 +36,7 @@ nextflow pull nf-core/rnaseq
### Reproducibility
It's a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you'll be running the same version of the pipeline, even if there have been changes to the code since.

-First, go to the [nfcore/rnaseq releases page](https://github.com/nf-core/rnaseq/releases) and find the latest version number - numeric only (eg. `1.4`). Then specify this when running the pipeline with `-r` (one hyphen) - eg. `-r 1.4`.
+First, go to the [nfcore/rnaseq releases page](https://github.com/nf-core/rnaseq/releases) and find the latest version number - numeric only (eg. `1.0`). Then specify this when running the pipeline with `-r` (one hyphen) - eg. `-r 1.0`.

This version number will be logged in reports when you run the pipeline, so that you'll know what you used when you look back in the future.

@@ -110,8 +110,6 @@ params {
}
```

-> **NB:** Before v1.4 of the pipeline, the UPPMAX profile ran in reverse stranded mode by default. This was removed in the v1.4 release, so all profiles now run in unstranded mode by default.
-If you have a default strandedness set in your personal config file you can use `--unstranded` to overwrite it for a given run.

These flags affect the commands used for several steps in the pipeline - namely HISAT2, featureCounts, RSeQC (`RPKM_saturation.py`) and StringTie:
@@ -237,6 +235,19 @@ Sets trimming and strandedness settings for the _SMARTer Stranded Total RNA-Seq K
Equivalent to: `--forward_stranded` `--clip_r1 3` `--three_prime_clip_r2 3`


## Skipping QC steps
The pipeline contains a large number of quality control steps. Sometimes, it may not be desirable to run all of them if time and compute resources are limited.
The following options make this easy:

* `--skip_qc` - Skip **all QC steps**, apart from MultiQC
* `--skip_fastqc` - Skip FastQC
* `--skip_rseqc` - Skip RSeQC
* `--skip_genebody_coverage` - Skip calculating the genebody coverage
* `--skip_preseq` - Skip Preseq
* `--skip_dupradar` - Skip dupRadar (and Picard MarkDups)
* `--skip_edger` - Skip edgeR MDS plot and heatmap
* `--skip_multiqc` - Skip MultiQC

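Nextflow conventionally maps a `--skip_x` command-line flag onto `params.skip_x`, so these options can also live in a personal config file. A hedged sketch, assuming this pipeline version exposes exactly the parameter names listed above:

```groovy
// Hypothetical config fragment: persist two of the skip options via params
// instead of passing them on every command line. Verify the parameter names
// against your pipeline version before relying on this.
params {
    skip_fastqc = true
    skip_preseq = true
}
```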
## Job Resources
### Automatic resubmission
Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with an error code of `143` (exceeded requested resources) it will automatically resubmit with higher requests (2 x original, then 3 x original). If it still fails after three times then the pipeline is stopped.
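The resubmission behaviour described above maps onto standard Nextflow process directives. A minimal sketch of the pattern, not a verbatim copy of this pipeline's `conf/base.config` (the `8.GB` and `4.h` base values are placeholders):

```groovy
// Illustrative retry pattern: exit status 143 triggers a retry with resource
// requests scaled by attempt number (1x, then 2x, then 3x), after which the
// run is stopped.
process {
    errorStrategy = { task.exitStatus == 143 ? 'retry' : 'finish' }
    maxRetries = 3
    memory = { 8.GB * task.attempt }
    time = { 4.h * task.attempt }
}
```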
@@ -249,11 +260,11 @@ Running the pipeline on AWS Batch requires a couple of specific parameters to be
### `--awsqueue`
The JobQueue that you intend to use on AWS Batch.
### `--awsregion`
The AWS region to run your job in. Default is set to `eu-west-1` but can be adjusted to your needs.
The AWS region to run your job in. Default is set to `eu-west-1` but can be adjusted to your needs.

Please make sure to also set the `-w/--work-dir` and `--outdir` parameters to an S3 storage bucket of your choice - you'll get an error message notifying you if you didn't.
Please make sure to also set the `-w/--work-dir` and `--outdir` parameters to an S3 storage bucket of your choice - you'll get an error message notifying you if you didn't.

###
###
## Other command line parameters
### `--outdir`
The output directory where the results will be saved.