Merged
3 changes: 3 additions & 0 deletions .gitignore
@@ -3,3 +3,6 @@
*/*/.ipynb_checkpoints
outputs/*
.DS_Store
*/_build/*
*/*.ipynb
jupyterbook/outputs
2 changes: 1 addition & 1 deletion README.md
@@ -2,7 +2,7 @@

This tutorial will walk you through the main concepts of Pydra!

You can run the notebooks locally or run using [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/nipype/pydra-tutorial/master?filepath=notebooks)
You can run the notebooks locally or run using [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/nipype/pydra-tutorial/master)

If you are running locally, be sure to install the necessary [requirements](https://github.com/nipype/pydra-tutorial/blob/master/requirements.txt).

38 changes: 38 additions & 0 deletions jupyterbook/_config.yml
@@ -0,0 +1,38 @@
# Book settings
# Learn more at https://jupyterbook.org/customize/config.html

title: Pydra Tutorial
author: Pydra Developers
logo: logo.jpg

# Force re-execution of notebooks on each build.
# See https://jupyterbook.org/content/execute.html
execute:
  execute_notebooks: cache
  run_in_temp: true
  allow_errors: true
  timeout: -1

# Define the name of the latex output file for PDF builds
latex:
  latex_documents:
    targetname: book.tex

# Add a bibtex file so that we can create citations
bibtex_bibfiles:
  - references.bib

# Information about where the book exists on the web
repository:
  url: https://github.com/nipype/pydra-tutorial  # Online location of your book
  path_to_book: docs  # Optional path to your book, relative to the repository root
  branch: master  # Which branch of the repository should be used when creating links (optional)

# Add GitHub buttons to your book
# See https://jupyterbook.org/customize/config.html#add-a-link-to-your-repository
html:
  use_issues_button: true
  use_repository_button: true

launch_buttons:
  binderhub_url: "https://mybinder.org/v2/gh/nipype/pydra-tutorial/master"  # The URL for your BinderHub (e.g., https://mybinder.org)
18 changes: 18 additions & 0 deletions jupyterbook/_toc.yml
@@ -0,0 +1,18 @@
# Table of contents
# Learn more at https://jupyterbook.org/customize/toc.html

format: jb-book
root: welcome
parts:
  - caption: Tutorials
    chapters:
      - file: notebooks/1_intro_pydra
      - file: notebooks/2_intro_functiontask
      - file: notebooks/3_intro_functiontask_state
      - file: notebooks/4_intro_workflow
      - file: notebooks/5_intro_shelltask
      - file: notebooks/6_glm_from_nilearn
  # - caption: About Pydra
  #   chapters:
  #     - file: about/team
  #     - file: about/cite_pydra
3 changes: 3 additions & 0 deletions jupyterbook/about/cite_pydra.md
@@ -0,0 +1,3 @@
# Cite Pydra

TODO
3 changes: 3 additions & 0 deletions jupyterbook/about/team.md
@@ -0,0 +1,3 @@
# Team

TODO
1 change: 1 addition & 0 deletions jupyterbook/figures
Binary file added jupyterbook/logo.jpg
1 change: 1 addition & 0 deletions jupyterbook/notebooks
56 changes: 56 additions & 0 deletions jupyterbook/references.bib
@@ -0,0 +1,56 @@
---
---

@inproceedings{holdgraf_evidence_2014,
address = {Brisbane, Australia},
title = {Evidence for {Predictive} {Coding} in {Human} {Auditory} {Cortex}},
booktitle = {International {Conference} on {Cognitive} {Neuroscience}},
publisher = {Frontiers in Neuroscience},
author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Knight, Robert T.},
year = {2014}
}

@article{holdgraf_rapid_2016,
title = {Rapid tuning shifts in human auditory cortex enhance speech intelligibility},
volume = {7},
issn = {2041-1723},
url = {http://www.nature.com/doifinder/10.1038/ncomms13654},
doi = {10.1038/ncomms13654},
number = {May},
journal = {Nature Communications},
author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Rieger, Jochem W. and Crone, Nathan and Lin, Jack J. and Knight, Robert T. and Theunissen, Frédéric E.},
year = {2016},
pages = {13654},
file = {Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:C\:\\Users\\chold\\Zotero\\storage\\MDQP3JWE\\Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:application/pdf}
}

@inproceedings{holdgraf_portable_2017,
title = {Portable learning environments for hands-on computational instruction using container-and cloud-based technology to teach data science},
volume = {Part F1287},
isbn = {978-1-4503-5272-7},
doi = {10.1145/3093338.3093370},
abstract = {© 2017 ACM. There is an increasing interest in learning outside of the traditional classroom setting. This is especially true for topics covering computational tools and data science, as both are challenging to incorporate in the standard curriculum. These atypical learning environments offer new opportunities for teaching, particularly when it comes to combining conceptual knowledge with hands-on experience/expertise with methods and skills. Advances in cloud computing and containerized environments provide an attractive opportunity to improve the efficiency and ease with which students can learn. This manuscript details recent advances towards using commonly-available cloud computing services and advanced cyberinfrastructure support for improving the learning experience in bootcamp-style events. We cover the benefits (and challenges) of using a server hosted remotely instead of relying on student laptops, discuss the technology that was used in order to make this possible, and give suggestions for how others could implement and improve upon this model for pedagogy and reproducibility.},
booktitle = {{ACM} {International} {Conference} {Proceeding} {Series}},
author = {Holdgraf, Christopher Ramsay and Culich, A. and Rokem, A. and Deniz, F. and Alegro, M. and Ushizima, D.},
year = {2017},
keywords = {Teaching, Bootcamps, Cloud computing, Data science, Docker, Pedagogy}
}

@article{holdgraf_encoding_2017,
title = {Encoding and decoding models in cognitive electrophysiology},
volume = {11},
issn = {16625137},
doi = {10.3389/fnsys.2017.00061},
abstract = {© 2017 Holdgraf, Rieger, Micheli, Martin, Knight and Theunissen. Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses.},
journal = {Frontiers in Systems Neuroscience},
author = {Holdgraf, Christopher Ramsay and Rieger, J.W. and Micheli, C. and Martin, S. and Knight, R.T. and Theunissen, F.E.},
year = {2017},
keywords = {Decoding models, Encoding models, Electrocorticography (ECoG), Electrophysiology/evoked potentials, Machine learning applied to neuroscience, Natural stimuli, Predictive modeling, Tutorials}
}

@book{ruby,
title = {The Ruby Programming Language},
author = {Flanagan, David and Matsumoto, Yukihiro},
year = {2008},
publisher = {O'Reilly Media}
}
15 changes: 15 additions & 0 deletions jupyterbook/welcome.md
@@ -0,0 +1,15 @@
# Welcome

This book will walk you through the main concepts of Pydra and provide hands-on experience!

It covers six topics: Pydra philosophy, FunctionTask, task states, Workflow, ShellCommandTask, and first-level analysis of BIDS data.

You can go through each topic by following this book, then play with it using [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/nipype/pydra-tutorial/master) or locally.

If you are running locally, be sure to install the necessary [requirements](https://github.com/nipype/pydra-tutorial/blob/master/requirements.txt).


Check out each tutorial to see more.

```{tableofcontents}
```
95 changes: 0 additions & 95 deletions notebooks/1_intro_pydra.ipynb

This file was deleted.

61 changes: 61 additions & 0 deletions notebooks/1_intro_pydra.md
@@ -0,0 +1,61 @@
---
jupytext:
  formats: ipynb,md:myst
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.13.8
kernelspec:
  display_name: Python 3
  language: python
  name: python3
---

# 1. Pydra

+++

Pydra is a lightweight, Python 3.7+ dataflow engine for computational graph construction, manipulation, and distributed execution.
It is designed as a general-purpose engine to support analytics in any scientific domain. Created for [Nipype](https://github.com/nipy/nipype), it helps build reproducible, scalable, reusable, and fully automated, provenance-tracked scientific workflows.
The power of Pydra lies in the ease of creating and executing workflows with complex multiparameter map-reduce operations, and in its use of a global cache.

Pydra's key features are:
- Consistent API for Task and Workflow
- Splitting & combining semantics on Task/Workflow level
- Global cache support to reduce recomputation
- Support for execution of Tasks in containerized environments

+++

## Pydra computational objects - Tasks
There are two main types of objects in *pydra*: `Task` and `Workflow`. A `Workflow` is itself a type of `Task`, so workflows can be nested inside other workflows.
![nested_workflow.png](../figures/nested_workflow.png)



**These are the `Task` classes currently implemented in Pydra:**
- `Workflow`: connects multiple `Task`s within a graph
- `FunctionTask`: wrapper for Python functions
- `ShellCommandTask`: wrapper for shell commands
- `ContainerTask`: wrapper for shell commands run within containers
- `DockerTask`: `ContainerTask` that uses Docker
- `SingularityTask`: `ContainerTask` that uses Singularity


+++

## Pydra Workers
Pydra supports multiple workers to execute `Tasks` and `Workflows`:
- `ConcurrentFutures`
- `SLURM`
- `Dask` (experimental)

+++

**Before moving on to the next notebooks, let's check that pydra is properly installed.**

```{code-cell} ipython3
import pydra
```