reworking iteration doc (#186)
denisri committed Jun 9, 2021
1 parent 8a4bd2f commit c4446c8
Showing 7 changed files with 176 additions and 0 deletions.
176 changes: 176 additions & 0 deletions docs/source/documentation/pipeline_manager.rst
@@ -19,6 +19,7 @@ The Pipeline Manager tab is composed of four main elements:
* :ref:`process-library-label`
* :ref:`pipeline-editor-label`
* :ref:`node-controller-label`
* :ref:`pipeline-iteration-label`
* :ref:`iteration-table-label`

Here is an overview of the Pipeline Manager tab:
@@ -221,6 +222,181 @@ This part is based on the Double_smooth.py file created in :ref:`pipeline-editor
* At the end of the run, 'Pipeline "Double_smooth.py" has been correctly run' should be displayed in the bottom-left status bar of the software.


.. _pipeline-iteration-label:

Pipeline iteration
------------------

A pipeline is generally designed to perform a series of tasks on a single piece of data, or on a set of related data (an anatomical MRI image and a series of fMRI data, for instance). To process whole databases we need to iterate pipelines over many input data. For this we use "iterations", which virtually duplicate the given pipeline as many times as there are data to process. Populse is able to process such iterations in parallel as long as they are independent. This is done by transforming a pipeline (or a process) into an iterative pipeline, using an iteration node.
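
The following minimal Python sketch is only an illustration of this idea, not Populse or Capsul code (the file names and functions are hypothetical): the same single-subject step is mapped over a list of inputs, and since the iterations are independent they may run in parallel.

.. code-block:: python

    # Conceptual sketch only: what an iteration node does with independent inputs.
    # Names and data below are hypothetical examples.
    from concurrent.futures import ProcessPoolExecutor

    def process_one_subject(anat_file):
        # stand-in for the real pipeline applied to a single input
        return "processed(%s)" % anat_file

    def iterate_pipeline(anat_files):
        # the iteration "virtually duplicates" the pipeline: one independent
        # run per input, so the runs may be executed in parallel
        with ProcessPoolExecutor() as executor:
            return list(executor.map(process_one_subject, anat_files))

    if __name__ == "__main__":
        print(iterate_pipeline(["sub-01_T1w.nii", "sub-02_T1w.nii"]))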

The pipeline manager simplifies this operation using the "Iterate pipeline" button and :ref:`iteration-table-label`.

There are two ways to iterate a pipeline (or a process): one is by creating a regular iterative pipeline (with "direct" inputs), the other uses input filters linked to the database.

Direct iteration
^^^^^^^^^^^^^^^^

Starting with a new, empty pipeline tab in the Pipeline Manager:

* Add the pipeline mia_processes > pipelines > preprocess > spatial Preprocessing 1 to the pipeline editor

* The ``func_files`` parameter is a list, and we will use only one item per iteration. In order to disambiguate the "list of lists" situation in the iteration, we will use a node which transforms a single file (the input for this iteration) into a list containing just that one element (a plain-Python illustration of this is given after this procedure).

  * Use the "capsul > pipeline > custom_nodes > reduce_node > ReduceNode" brick.
  * Validate the default node parameters. The reduce node appears in the pipeline editor.

* Connect the ``outputs`` plug of the reduce node to the ``func_files`` plug of the spatial_preprocessing node.

* Export the ``input_0`` plug of the reduce node, renamed as ``func_files`` (for clarity).

* Export all unconnected plugs of the spatial preprocessing node (right-click on the node, then select "Export all unconnected plugs").

.. image:: ../images/pipeline_iter_1.jpg
:align: center
:name: Pipeline iteration 1

* Click on the "Iterate pipeline" button.

A dialog pops up and displays all the pipeline parameters. The user can choose which ones will be iterated (by default, all of them). If it's OK, just click "OK".

.. image:: ../images/pipeline_iter_2.jpg
:align: center
:name: Pipeline iteration 2

The pipeline (or process) will now be changed into an iterative pipeline, with an iterative node. The former pipeline is now inside the iterative node.

.. image:: ../images/pipeline_iter_3.jpg
:align: center
:name: Pipeline iteration 3

* Select the ``inputs`` node.

* Click the "Filter" button for the ``anat_file`` parameter, and select the anatomical MRI files you wish to process.

.. image:: ../images/pipeline_iter_4.jpg
:align: center
:name: Pipeline iteration 4

* Similarly, click on the "Filter" button for the ``func_files`` parameter and select the same number of functional files.

* TODO: check that anatomical files and their corresponding functional files are in the same order. The database filters do not guarantee this and do not allow specifying an order.

* click on "Initialize pipeline". The Run button becomes enabled.

* click on "Run pipeline".


Via input filters
^^^^^^^^^^^^^^^^^

Quickly
#######

Starting with a new, empty pipeline tab in the Pipeline Manager:

* Add the pipeline mia_processes > pipelines > preprocess > spatial Preprocessing 1 to the pipeline editor

* Click on the "Iterate pipeline" button.

A dialog pops up and displays all the pipeline parameters. In addition to what was done in the previous example, also check the second button (for "database") on each input parameter.

.. image:: ../images/pipeline_iter_5.jpg
:align: center
:name: Pipeline iteration 5

The pipeline (or process) will now be changed into an iterative pipeline, with an iterative node, and two "input filter" nodes. The former pipeline is now inside the iterative node.

.. image:: ../images/pipeline_iter_6.jpg
:align: center
:name: Pipeline iteration 6

* Right-click on the ``anat_file_filter`` node and select "Open filter". In the filter pop-up, modify the filter so that it selects the anatomical files.

* Similarly, right-click on the ``func_files_filter`` node and select "Open filter". In the filter pop-up, modify the filter so that it selects the functional files.

* TODO: check that anatomical files and their corresponding functional files are in the same order. The database filters do not guarantee this and do not allow specifying an order (a possible manual check is sketched after this procedure).

* click on "Initialize pipeline". The Run button becomes enabled.

* click on "Run pipeline".


Manually
########

To iterate the pipeline correctly, you will need a filter process. One is ready to use in the mia_processes package since the 1.1.1 release (the Input_filter brick). The mia_processes package is available from the `Cheese Shop`_.


* Add the previously saved pipeline to a new editor by dragging it from the process library (under User_processes) and dropping it into the pipeline editor.

.. image:: ../images/pipeline_example_4.png
:align: center
:name: Pipeline example 4

|
* Add an Input_filter process in front of each input that comes from the database (Anat_file, FLASH_file and MDEFT_file).
* Note: a file_to_list process is added before the Anat_file plug because the Input_filter's output is of type List, while the Anat_file plug is of type File.

.. image:: ../images/pipeline_example_5.png
:align: center
:name: Pipeline example 5

|
* For each Input_filter process, right-click on it and select "Open filter". In the filter pop-up, modify the filter to apply.
* For example, for the Anat_file plug, set "Exp Type == Anat" in the Advanced Search (an illustrative sketch of such a filter is given at the end of this procedure).

.. image:: ../images/pipeline_example_6.png
:align: center
:name: Pipeline example 6

|
* Right-click on one of the Input_filter processes and select "Export to database_scans".

* Add links between "database_scans" and the input of the other Input_filter processes.

.. image:: ../images/pipeline_example_7.png
:align: center
:name: Pipeline example 7

|
* Export the other node plugs by right-clicking on the "realign_coreg_smooth1" node and selecting "Export all unconnected plugs".
* The pipeline becomes enabled.

.. image:: ../images/pipeline_example_8.png
:align: center
:name: Pipeline example 8

|
* Save the pipeline in the proposed folder using the bottom-left "Pipeline" menu.

* Click on the "inputs" or "outputs" node and modify the parameters in the node controller

.. image:: ../images/pipeline_example_9.png
:align: center
:name: Pipeline example 9

|
* Save the set of parameters that you have just modified by clicking on "Save pipeline parameters" in the bottom-left "Pipeline" menu.
* This step is not mandatory, but by saving these parameters, the next time you open the pipeline you will be able to load them directly by clicking on "Load pipeline parameters" in the bottom-left "Pipeline" menu.

* Set up the iteration table.
* Check the "Iterate pipeline" check box and choose to iterate over the "Patient" tag.
* By changing the value of the selected Patient, you change the list of documents in "database_scans".

.. image:: ../images/pipeline_example_10.png
:align: center
:name: Pipeline example 10
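
For reference, the effect of a filter such as "Exp Type == Anat" can be pictured as a simple tag-based selection over the database documents. The sketch below is purely illustrative (hypothetical documents, tag names and values; it is not the actual Populse database query engine).

.. code-block:: python

    # Purely illustrative: a database filter seen as a tag-based selection.
    # The documents, tags and values below are hypothetical examples.
    documents = [
        {"FileName": "sub-01_T1w.nii", "Exp Type": "Anat", "Patient": "P01"},
        {"FileName": "sub-01_bold.nii", "Exp Type": "Func", "Patient": "P01"},
        {"FileName": "sub-02_T1w.nii", "Exp Type": "Anat", "Patient": "P02"},
    ]

    def filter_documents(documents, tag, value):
        # keep only the documents whose tag equals the requested value,
        # as an "Exp Type == Anat" filter would
        return [doc["FileName"] for doc in documents if doc.get(tag) == value]

    print(filter_documents(documents, "Exp Type", "Anat"))
    # ['sub-01_T1w.nii', 'sub-02_T1w.nii']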





.. _iteration-table-label:

The iteration table
Binary file added docs/source/images/pipeline_iter_1.jpg
Binary file added docs/source/images/pipeline_iter_2.jpg
Binary file added docs/source/images/pipeline_iter_3.jpg
Binary file added docs/source/images/pipeline_iter_4.jpg
Binary file added docs/source/images/pipeline_iter_5.jpg
Binary file added docs/source/images/pipeline_iter_6.jpg
