# Snakemake Workflow for Chandra Data Reduction

This is an example Snakemake workflow for the data reduction of Chandra data. The workflow runs the standard CIAO tools for a given configuration and produces FITS files as output. It handles the reduction of counts, exposure, and the point spread function (PSF).

## Getting Started

### Use as Snakemake Module (recommended)

If you would like to use this workflow as a Snakemake module, add e.g. the following to your Snakefile:

```python
module chandra_data_workflow:
    snakefile:
        # plain paths, URLs, and the special markers for code hosting providers are possible here
        github("adonath/snakemake-workflow-chandra", path="workflow/Snakefile", branch="main")
    config: config["chandra-data"]

use rule * from chandra_data_workflow as chandra_data_*
```

See also the Snakemake documentation on modularization: https://snakemake.readthedocs.io/en/stable/snakefiles/modularization.html#modularization

### Setup Environment

Start by cloning this repository to your local machine:

```shell
git clone https://github.com/adonath/snakemake-workflow-chandra.git
```

If you haven't done so yet, please install conda or mamba.

Now change into the repository directory:

```shell
cd snakemake-workflow-chandra/
```

And create the conda environment using:

```shell
mamba env create -f environment.yaml
```

Once the environment is created, you can activate it:

```shell
conda activate snakemake-workflow-chandra
```

### Configure and Run the Workflow

Now adapt the configuration in `config/config.yaml` to match your data.
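The actual schema is defined by the workflow itself, so please refer to the `config/config.yaml` shipped with the repository. Purely as an illustration, a configuration for a Chandra reduction might look like the following sketch (all keys shown here — `name`, `obs_ids`, `roi`, `energy_range` — are hypothetical placeholders, not necessarily the workflow's real schema):

```yaml
# Hypothetical example — check config/config.yaml in the repository
# for the keys the workflow actually expects.
name: my-source                 # identifier used for output file names
obs_ids: [4396, 5468]           # Chandra observation IDs to reduce
roi:
  center:
    ra: 83.63                   # deg, ICRS
    dec: 22.01                  # deg, ICRS
  width: 5.0                    # arcmin
energy_range: [0.5, 7.0]        # keV
```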

Then you are ready to run the workflow. You can first check which rules will be executed with a dry run (`snakemake -n`), then run it for real, e.g.:

```shell
snakemake --cores 8
```

You can also create a report to see previews of the counts, exposure, and PSF images:

```shell
snakemake --report report.html
open report.html
```