
Cortical Thickness & Surface Analysis with CIVET


CIVET is software for computing cortical surfaces and thickness maps from T1 images.

Follow the instructions below for either the CIC or SciNet (Niagara), depending on which system you are using.

Niagara

For Niagara, log in:

ssh username@niagara.computecanada.ca

Loading modules from the quarantine

CIC

CIVET requires certain software packages in order to run. These packages are provided as environment modules. When you first access the compute node:

> module list
No modules loaded

In order to run CIVET, you will need to load its module using the following command:

> module load CIVET/2.1.1
> module list
Currently Loaded Modules:
  1) CIVET/2.1.1

Niagara

module load cobralab/2019b

Setting up your folders and files (same for CIC & Scinet)

CIVET has a required folder structure and naming convention for your input files. Make a folder in which you will run CIVET. cd into your project folder and make input and output folders.

> mkdir civet_example
> cd civet_example
> mkdir input
> mkdir output

Your input folder is where you will place all your images. Populate your input folder with the images you wish to process. Your input images should be non-extracted, so if you are using bpipe outputs, you should use *cutneckapplyautocrop.mnc scans and not *cutneckapplyautocrop.beastextract.mnc.

It is also strongly recommended that you add a mask file for each subject to the same input folder, which disables the internal masking stage of CIVET. Experimentation has determined that masks generated by BEaST (say, from inside minc-bpipe-library) are superior to those produced internally by CIVET. In this case, use the *.cutneckapplyautocrop.beastmask.mnc outputs as the input masks and name the files ID_mask.mnc.
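A minimal sketch for populating the input folder from minc-bpipe-library outputs (the bpipe output path, directory layout, and the way the subject ID is derived from the file name are assumptions; adjust them to your own project):

> BPIPE_OUT=/path/to/bpipe/output   # hypothetical location of your minc-bpipe-library outputs
> for img in $BPIPE_OUT/*/*.cutneckapplyautocrop.mnc; do
  id=$(basename $img .cutneckapplyautocrop.mnc)          # assumes the subject ID is the leading part of the file name
  cp -v $img input/${id}_t1.mnc                          # non-extracted T1, named with the _t1 convention described below
  cp -v ${img%.mnc}.beastmask.mnc input/${id}_mask.mnc   # matching BEaST mask
done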

The following command applied only to individuals taking the CIC CIVET course; those files are no longer available, so you are responsible for populating your input folder yourself.

> cp -s /data/scratch/civet-course/complete/images/*mnc input/

For now, your output folder is empty.

CIVET requires that your files be named according to the convention ID_t1.mnc, where ID is an identifier that should be unique for each input image and should match the subject ID used elsewhere in your study. When performing your statistical analysis, you will find it very useful if the ID of each image is identical to the ID column of your demographic spreadsheet. The simplest approach is to name each file with its ID and then rename it with:

> ls input/
002_S_0559.mnc

> for file in input/*.mnc; do mv $file input/$(basename $file .mnc)_t1.mnc; done
> ls input/
002_S_0559_t1.mnc

Some helpful bash commands:

Truncate filename to a certain number of characters (before filename extension). Example: PING_P1693_389572_34712.mnc becomes PING_P1693.mnc, where {10} will preserve the first 10 characters.

rename 's/^(.{10}).*(\..*)$/$1$2/' *

Add characters to the end of the filename before the extension. Example: PING_P1693.mnc becomes PING_P1693_t1.mnc.
In the CIC, this can be done using the Perl version of the rename function:

rename 's/\.mnc$/_t1.mnc/' *mnc

In Niagara:

rename .mnc _t1.mnc *mnc

Setting Options

If you wish to run CIVET with options different from the default settings, you must specify them. Please note that even if you are using the default settings, you should know what options you are putting into the pipeline, as well as have some idea of how they affect output.

The recommended default CIVET command for inputs which have been preprocessed by minc-bpipe-library is:

> CIVET_Processing_Pipeline -sourcedir input -targetdir output -lsq12 -mean-curvature \
  -mask-hippocampus -resample-surfaces -N3-distance 0 -correct-pve -surface-atlas AAL -interp sinc -template 0.50 -thickness tlaplace 30:0 -area-fwhm 40:0 -volume-fwhm 40:0 -granular -run <list of IDS to run>

A quick overview of the options:

  • -lsq12 perform a full affine alignment to the template model
  • -mean-curvature produce mean curvature maps
  • -mask-hippocampus exclude the hippocampus so it is not treated as part of the cortex
  • -resample-surfaces produce surface resampled outputs for final statistical analysis
  • -N3-distance 0 disable N3 correction since input files are already preprocessed
  • -correct-pve do iterative correction of partial volume estimates for wm/gm/csf
  • -surface-atlas AAL provide aggregate measures in AAL parcellation space (alternatively, use DKT)
  • -interp sinc -template 0.50 upsample brains in 0.5 mm MNI space during registration using high-quality sinc interpolation (reduces partial voluming)
  • -thickness tlaplace 30:0 use laplace method for thickness estimates, provide 30 mm (recommended default blur) and 0 mm (for aggregate measures) for surface blurring outputs
  • -granular -run break up jobs into the smallest pieces and run the pipeline
  • <list of IDS to run> here we provide a space-separated list of IDs; an easy way to generate it is $(for file in input/*t1.mnc; do echo -n "$(basename $file _t1.mnc) "; done) (see the example just after this list)
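Putting it together, a sketch of the full recommended command with the ID list generated inline (same options as above; adjust paths and options to your own study):

> CIVET_Processing_Pipeline -sourcedir input -targetdir output -lsq12 -mean-curvature \
  -mask-hippocampus -resample-surfaces -N3-distance 0 -correct-pve -surface-atlas AAL -interp sinc -template 0.50 -thickness tlaplace 30:0 -area-fwhm 40:0 -volume-fwhm 40:0 -granular \
  -run $(for file in input/*t1.mnc; do echo -n "$(basename $file _t1.mnc) "; done)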

An overview of all available options can be found under the Basic Usage tab of the CIVET website: http://www.bic.mni.mcgill.ca/ServicesSoftware/CIVET

Using more than one surface atlas

CIVET is smart enough to not redo work, which means you can re-run the CIVET command with different -surface-atlas options to generate the other surface parcellations.

Example:

CIVET_Processing_Pipeline -sourcedir input -targetdir output -lsq12 -mean-curvature \
  -mask-hippocampus -resample-surfaces -N3-distance 0 -correct-pve -surface-atlas AAL -interp sinc -template 0.50 -thickness tlaplace 30:0 -area-fwhm 40:0 -volume-fwhm 40:0 -granular -run subject1 &&
CIVET_Processing_Pipeline -sourcedir input -targetdir output -lsq12 -mean-curvature \
  -mask-hippocampus -resample-surfaces -N3-distance 0 -correct-pve -surface-atlas DKT -interp sinc -template 0.50 -thickness tlaplace 30:0 -area-fwhm 40:0 -volume-fwhm 40:0 -granular -run subject1 &&
CIVET_Processing_Pipeline -sourcedir input -targetdir output -lsq12 -mean-curvature \
  -mask-hippocampus -resample-surfaces -N3-distance 0 -correct-pve -surface-atlas lobes -interp sinc -template 0.50 -thickness tlaplace 30:0 -area-fwhm 40:0 -volume-fwhm 40:0 -granular -run subject1

The first command will run the full CIVET pipeline; the second and third rerun it with a different surface atlas to generate the extra parcellations. The reruns take essentially zero time, so if you submit a job like this there is no need to adjust the job submission walltime.

Submitting jobs to the compute nodes

CIC

In the case of the CIC, CIVET integrates directly into the scheduler and so all one needs to do is run the CIVET command above.

Niagara

For Niagara, CIVET does not integrate into the scheduler, so we need to generate a joblist suitable for qbatch to submit to the cluster. If we take the run command example from above, we make one small modification to generate one CIVET command per ID instead of running one CIVET command over all ids:

> for file in input/*t1.mnc; do
echo CIVET_Processing_Pipeline -sourcedir input -targetdir output -lsq12 -mean-curvature \
  -mask-hippocampus -resample-surfaces -N3-distance 0 -correct-pve -surface-atlas AAL -interp sinc -template 0.50 -thickness tlaplace 30:0 -area-fwhm 40:0 -volume-fwhm 40:0 -granular -run $(basename $file _t1.mnc)
done > joblist
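Before submitting, it is worth a quick sanity check of the generated joblist:

> wc -l joblist      # should match the number of *_t1.mnc files in input/
> head -n 1 joblist  # inspect one generated CIVET command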

Now we submit the joblist to Niagara for processing:

> qbatch -N CIVET -c 8 -w 12:00:00 joblist

Monitoring Jobs (CIC & Scinet)

After submitting your jobs, you can use the qstat command to monitor their status. If qstat produces no output, your jobs are no longer in the queue (either they ran to completion, or they failed quickly without running). cd into your output folder and use the ls command to see what is there. You should expect to find one folder for each of your inputs, named according to the subject ID.
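A quick way to check that every input produced an output folder (counts may differ slightly if CIVET writes extra bookkeeping files into the output directory):

> ls -d input/*_t1.mnc | wc -l   # number of inputs
> ls -d output/*/ | wc -l        # number of per-subject output folders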

Use the following command to search through the log files for any failures:

> grep -i fail logs/*

If this returns nothing, then the pipeline ran to completion for each of your inputs.

Troubleshooting poorer quality data

It is inevitable for errors to arise when preprocessing poorer quality scans, especially with sensitive pipelines such as CIVET. This section outlines some troubleshooting strategies that may help increase the proportion of scans that pass through successfully. Please note that many of the suggestions provided here are drawn from the McConnell Brain Imaging Centre site.

After the run is complete, it is useful to review the “logs/” directory of each participant, which contains an execution log and status file for every stage of the pipeline. Subjects that did not complete a particular stage will have a “failed” status file (as opposed to “finished”). Reading the related execution log will provide more information about the root cause of the failure. It may also be useful to visualize the input and processed brain scans in Display to identify the location and nature of the error. For example, if white matter surface extraction failed, the t1_tal.mnc file in the “final/” folder and the .obj surface files in the “surfaces/” folder can be viewed. Note that the errors may not always be visually apparent.
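For example, a hedged sketch of inspecting one subject (the ADNI_ prefix and exact file names here are assumptions based on the examples elsewhere on this page; check your own output tree for the actual names):

> grep -il fail output/002_S_0559/logs/*
> Display output/002_S_0559/final/ADNI_002_S_0559_t1_tal.mnc \
    output/002_S_0559/surfaces/ADNI_002_S_0559_gray_surface_rsl_left_81920.obj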

  1. A common error relates to failed interpolation of the marching-cubes surface, causing self-intersections and errors in surface extraction. In this case, it is worth reviewing the scans to check for the presence of blood vessels, which can appear as white spots in the T1w scans, abnormal colouring, or surface-surface intersections. For visual examples, please check BloodVesselFailure. CIVET can then be rerun with the “-mask-blood-vessels” option.

  2. The “-interp sinc -template 0.50” options upsample brains during registration using sinc interpolation to reduce partial voluming effects. However, if the original scan quality is poorer (e.g., a lower-quality or older dataset, which often results in white matter surface extraction failures), the default “-interp linear -template 1.00” setting will increase partial voluming and lightly blur everything upon resampling, which tends to hide some defects. In these cases, CIVET can be rerun with the “-interp sinc -template 0.50” options removed from the command to fall back to the default interpolation:

> for file in input/*t1.mnc; do
echo CIVET_Processing_Pipeline -sourcedir input -targetdir output -lsq12 -mean-curvature \
  -mask-hippocampus -resample-surfaces -N3-distance 0 -correct-pve -surface-atlas AAL -thickness tlaplace 30:0 -area-fwhm 40:0 -volume-fwhm 40:0 -granular -run $(basename $file _t1.mnc)
done > joblist

Please see below for more information about other CIVET errors:

Failure due to motion artifacts, Failure due to inversion error, Failure due to gray surface under-expansion

Quality Control (CIC & Scinet)

In the output folders for each subject, you will find a "verify" image. This image, a .png file, displays the calculated masks, as well as grey and white matter classification. Use this image to examine the quality of the CIVET output.

You can copy them into a new folder for visual inspection. Our in-house manual for CIVET output quality control can be found here.

$ mkdir QC
$ cp -v output/*/verify/*verify.png QC/

Example of the verify:

PNC_verify_example

Make sure the last row, which shows the borders of the cortex, does not contain any large errors.

Surface templates

After you have your output, you can compute the average surface of each hemisphere using the average_surfaces command.

> average_surfaces lh_average.obj none none 2  /data/scratch/civet-course/complete/output/*/surfaces/ADNI_*_gray_surface_rsl_left_81920.obj
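If you also need a right-hemisphere template, a sketch mirroring the command above (assuming your outputs follow the same naming, with right in place of left):

> average_surfaces rh_average.obj none none 2 output/*/surfaces/*_gray_surface_rsl_right_81920.obj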

Alternatively, you can access standard CIVET 2 surfaces at: /opt/quarantine/resources/CIVET/CIVET_2.0_icbm_avg_mid_sym_mc_left.obj

Cortical variables available

With CIVET you have the following morphological variables, which are shown in the following file examples:

- Cortical Volume

File example:

surfaces/prefix_id_surface_rsl_left_native_volume_40mm.txt

- Cortical Thickness

File example:

thickness/prefix_id_native_rms_rsl_tlaplace_30mm_left.txt

- Cortical Surface Area

File example:

surfaces/prefix_id_mid_surface_rsl_left_native_area_40mm.txt

- Mean Curvature

File example:

thickness/prefix_id_native_mc_rsl_30mm_mid_left.txt
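To confirm the exact file names produced by your own run (prefixes and blur kernels depend on your options), list them for one subject, for example:

> ls output/002_S_0559/surfaces/*.txt output/002_S_0559/thickness/*.txt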

Statistical Analysis in RMINC

We now move on to using RMINC to investigate our output. If you are using SciNet, you need to have R, minc-toolkit, and RMINC installed on your local computer and to download the output from CIVET. From here on, the tutorial applies to the use of R in any environment.

This data is no longer accessible, so you cannot follow along directly with the example. See the example file names above to decide which specific output files to use in your analysis.

Open up a new terminal in which to use RMINC, and log in to the compute node. Once logged in, load the required modules. Use module avail to identify the RMINC module in the list of available modules.

> module load anaconda minc-toolkit-v2 minc-stuffs R RMINC rstudio

Start R in this new terminal, and load the appropriate libraries:

> R
R> library(RMINC)

In order to investigate the data, you will need a table listing the IDs and associated variables. For this example, we have a table /data/scratch/civet-course/complete/civet_ADNI.csv listing the age, gender, and diagnosis of each of our subjects.

Read this file into R:

R> data <- read.csv('/data/scratch/civet-course/complete/civet_ADNI.csv')

Check to make sure the file was read in correctly:

R> names(data)
[1] "ID"     "DX_bl"  "Age"    "Gender" "CIVETQC"

As you can see, the names command lists the header of each column in your data table (with DX_bl as a group label). You can see all the data in a single column using the $ operator. For example, to view ages:

R> data$Age
[1] 79.3 70.7 84.6 73.4 63.6 75.8 90.9 71.8 83.4 76.0 83.3 59.9 77.0 79.7 71.4
[16] 76.5 87.8 84.5 69.5 69.5 72.0 73.5 70.8 78.6 57.1 78.3 72.0 71.5 80.8 76.9
[31] 75.5 76.4 68.5 78.6 85.3 71.3 70.0 84.6 72.9 89.1

This example shows Cortical Thickness.

Now we make a column that holds the paths of the cortical thickness files from the CIVET output. Create a new column called data$left_thickness that contains the file path for each subject:

R> data$left_thickness <- paste("/data/scratch/civet-course/complete/output/",data$ID, "/thickness/ADNI_",data$ID,"_native_rms_rsl_tlink_28.28mm_left.txt", sep="")

Note that if you are investigating surface area rather than cortical thickness, you would instead create a column called data$left_surfaces that contains references to the surface area files, using the following command:

R> data$left_surfaces <- paste("/data/scratch/civet-course/complete/output/",data$ID, "/surfaces/ADNI_",data$ID,"_mid_surface_rsl_left_native_area_56.57mm.txt", sep="")

For all subsequent instructions, substitute left_surfaces for left_thickness to perform a surface area analysis rather than a cortical thickness analysis.

Check your table for the existence of data$left_thickness

R> names(data)
[1] "ID"             "DX_bl"          "Age"            "Gender"        
[5] "QC"             "left_thickness"

Check to see that the files were coded correctly by outputting the contents of data$left_thickness

R> data$left_thickness
 [1] "/data/scratch/civet-course/complete/output/002_S_0559/thickness/ADNI_002_S_0559_native_rms_rsl_tlink_28.28mm_left.txt"
 [2] "/data/scratch/civet-course/complete/output/002_S_1018/thickness/ADNI_002_S_1018_native_rms_rsl_tlink_28.28mm_left.txt"
 [3] "/data/scratch/civet-course/complete/output/005_S_0553/thickness/ADNI_005_S_0553_native_rms_rsl_tlink_28.28mm_left.txt"
 [4] "/data/scratch/civet-course/complete/output/007_S_1222/thickness/ADNI_007_S_1222_native_rms_rsl_tlink_28.28mm_left.txt"
...
...

In another window, use ls to ensure the file paths you loaded exist

> ls /data/scratch/civet-course/complete/output/002_S_0559/thickness/ADNI_002_S_0559_native_rms_rsl_tlink_28.28mm_left.txt
/data/scratch/civet-course/complete/output/002_S_0559/thickness/ADNI_002_S_0559_native_rms_rsl_tlink_28.28mm_left.txt

Decide on your statistical model and implement it. In this case we are going to run a vertex-wise cortical thickness diagnosis contrast while accounting for age and sex in a linear model. First, make sure that your reference group is the normal control group (if you have multiple groups in your sample, each will be compared to the reference separately).

Note that groups, and any other categorical variables in your data (e.g., sex), should be marked as factors so R interprets them as categorical.

R> data$DX_bl <- as.factor(data$DX_bl)
R> data$DX_bl <- relevel(data$DX_bl, ref="CN")
R> vs <- vertexLm(left_thickness ~ DX_bl + Age + Gender, data)

This will fail because some of the CIVET runs did not complete. We can use the subset command to exclude the failed subjects:

R> vs <- vertexLm(left_thickness ~ DX_bl + Age + Gender, subset(data, data$CIVETQC==1))

Now we need to correct for multiple comparisons using FDR. The mask prevents non-cortical midline vertices, where CIVET thickness estimates are unreliable, from influencing our FDR threshold calculations (note that CIVET 2.0 and 2.1 masks are identical).

R> left_mask <- read.table('/opt/quarantine/resources/CIVET/CIVET_2.0_mask_left_short.txt')
R> vs_FDR <- vertexFDR(vs, mask=left_mask)

This yields the following output table

FDR Thresholds:
     F-statistic tvalue-(Intercept) tvalue-DX_blAD tvalue-Age tvalue-GenderMale
0.01    4.873942           2.732729       2.827867         NA                NA
0.05    3.011978           2.033562       2.086898   3.806157                NA
0.1     2.325597           2.016547       1.727432   3.297308                NA
0.15    1.935089           2.016547       1.500839   3.028188                NA
0.2     1.665635           2.016547       1.329860   2.828436                NA

The first column indicates the level of FDR correction (i.e., 1% FDR, 5% FDR, etc.).

Now output the vertex-wise t-values to a CSV-formatted file. First, look at the column headers of the linear model output:

R> vs
Multidimensional Vertstats file
Columns:       F-statistic R-squared beta-(Intercept) beta-DX_blAD beta-Age beta-GenderMale tvalue-(Intercept) tvalue-DX_blAD tvalue-Age tvalue-GenderMale 

To output the vertex-wise t-values for your predictors:

R> write.csv(x=vs, file="civet_ADNI_contrast_tvals_Feb_2_2016.txt")

Visualizing results using create_civet_image.sh

Now that a linear model has been run with the steps outlined above, you are left with an output table of the FDR thresholds, average surface files for each hemisphere, and .txt files containing vertex-wise cortical thickness t-values for each hemisphere. These data will now be projected onto a brain map to visualize the spatial distribution of your statistics.

Here, we'll generate a map where highlighted vertices scale in colour from those passing 5% FDR correction to those passing 1% FDR correction (with two separate colour bars to mark t-values above or below zero). But, the script is highly flexible and can be used to generate images for statistical maps and thresholds of your choice.

Modules to load on CIC:

> module load minc-toolkit && module load minc-toolkit-extras

Module to load on Niagara:

> module load cobralab/2019b

If you run the script without any inputs like so:

> create_civet_image.sh

you will see a description of its usage and a template of inputs to fill in:

> create_civet_image.sh [--colourmap <arg>] [--left-statmap <arg>] [--right-statmap <arg>] [--left-thresholds <arg>] [--right-thresholds <arg>] [--left-mask <arg>] [--right-mask <arg>] [--colourbar-labels <arg>] [--image-label <arg>] [--(no-)annotate-directions] [--(no-)draw-colourbar] [-h|--help] [-d|--(no-)debug] <left-obj> <right-obj> <output>
        <left-obj>: The left object file.
        <right-obj>: The right object file.
        <output>: The filename for the output image.
        --colourmap: Colour map name, or FILE to colourize objects, provide comma-separated pair for negative,positive map. See 'colour_object' for built in options. (default: 'spectral')
        --left-statmap: Statistical map file to use for left object, use ":columnname" or ":columnnumber" to select column. (no default)
        --right-statmap: Statistical map file to use for right object, use ":columnname" or ":columnnumber" to select column. (no default)
        --left-thresholds: Comma separated thresholds which map statmap to colourmap, absolute values will be used on negative and positive side for diverging colourmaps, MIN and MAX to use detected data range. (default: 'MIN,MAX')
        --right-thresholds: Comma separated thresholds which map statmap to colourmap, absolute values will be used on negative and positive side for diverging colourmaps, MIN and MAX to use detected data range. (default: 'MIN,MAX')
        --left-mask: Binary mask file of same length as statmaps, which will be used to zero-out regions which should be excluded. (no default)
        --right-mask: Binary mask file of same length as statmaps, which will be used to zero-out regions which should be excluded. (no default)
        --colourbar-labels: Comma separated set of text labels which will be printed next to colourbar. (default: 'FDR 5%,FDR 1%')
        --image-label: Label placed at the top of the image, defaults to column name of left statmap. (default: 'LEFT_COLUMN')
        --annotate-directions, --no-annotate-directions: Label anatomical directions in image. (on by default)
        --draw-colourbar, --no-draw-colourbar: Draw colourbar on image. (on by default)
        -h, --help: Prints help
        -d, --debug, --no-debug: Show all internal commands and logic for debug. (off by default)

To build the image, you will need to customize the template with information from your data by specifying:

  1. Statmap files: where “left-statmap” and “right-statmap” are names of files with vertex-wise cortical thickness statistics for the left and right hemisphere, respectively. These can be generated in R using write.csv, as shown above. If statmap files are in .csv form with multiple columns, you must specify your target column name or number after the file name, e.g., civet_ADNI_contrast_tvals_Feb_2_2016.txt:'tvalue-DX_blAD'

  2. Thresholds: where “left-thresholds” and “right-thresholds” are the minimum and maximum of the range of values you'd like to highlight for the left and right hemisphere, respectively. These values are absolute and will mark vertices passing the thresholds above or below zero. In our case, for each hemisphere, our minimum and maximum will be t-value thresholds for 5% and 1% FDR correction, respectively. See below for an example. If you run create_civet_image without any thresholds, it will automatically plot the entire range of your data.

  3. Colour maps: where “colourmap” is a default colourmap name or the name of a colourmap file (any .lut available to you) which will be used to colour thresholded values. If you have negative and positive values which pass the threshold, you can input one colour map for the negative values and one for the positive by separating their names with a comma, with no space after the comma (e.g., --colourmap /opt/quarantine/resources/colourmaps/RMINC_blue.lut,/opt/quarantine/resources/colourmaps/RMINC_red.lut will apply RMINC's blue colourmap to negative values and RMINC's red colourmap to positive values)

  4. Masks: where “left-mask” and “right-mask” are file names of your left and right hemisphere masks, respectively.

  5. Surface files: where “left-obj” and “right-obj” are positional arguments replaced by names of your left and right surface template files, respectively. These are either standard CIVET surfaces or were generated using the average_surface command, both described above.

  6. Output file: where “output” is a positional argument replaced by an output file name.

This FDR table represents cortical thickness measures as a function of Age and Sex in the left hemisphere. If you would like to map the Age variable, the corresponding t-values for the 5% and 1% FDR thresholds are: 2.06,2.68

FDR_example

Combined with the FDR values for the right hemisphere, this script will output a .png file with lateral, medial, superior, and inferior views of our results, only showing those that pass the thresholds specified.

Taken together, a customized create_civet_image.sh command like:

> create_civet_image.sh --left-statmap civet_ADNI_contrast_left.csv:"tvalue-age" --right-statmap civet_ADNI_contrast_right.csv:"tvalue-age" --left-thresholds 2.06,2.68 --right-thresholds 2.07,2.60 --colourmap /opt/quarantine/resources/colourmaps/RMINC_blue.lut,/opt/quarantine/resources/colourmaps/RMINC_red.lut --left-mask /opt/quarantine/resources/CIVET/CIVET_2.0_mask_left_short.txt --right-mask /opt/quarantine/resources/CIVET/CIVET_2.0_mask_right_short.txt /opt/quarantine/resources/CIVET/CIVET_2.0_icbm_avg_mid_sym_mc_left.obj /opt/quarantine/resources/CIVET/CIVET_2.0_icbm_avg_mid_sym_mc_right.obj CT_Age.png

would create a .png file like this (this is an arbitrary image which does not correspond to the data described above):

(sample output image)

Using CIVET-1.1.12

CIVET-1.1.12 has a known bug where the FWHM of blurring kernels is off by sqrt(2). Thankfully, this can be compensated for by adjusting the command-line parameters so that after the error the values are correct. From a post to minc-users mailing list:

Hi,

You have to multiply by sqrt(2) to obtain the correct fwhm in
CIVET-1.1.12 for surface blurring.

-thickness tlink fwhm_desired*sqrt(2)
-area-fwhm fwhm_desired*sqrt(2)
-volume-fwhm fwhm_desired*sqrt(2)

For example, if you want fwhm=20mm, you have to specify
20*sqrt(2)=28.28. However, since fwhm is used in the
filename for the thickness files, I suggest to round
fwhm if you don't want filenames with a dot in them.

If you used a value of 30mm before, then this corresponds
to an effective fwhm of 30/sqrt(2)=21.21mm.

Sorry for the inconvenience.

Claude LePage
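For reference, a quick sketch of the arithmetic (bc is just one way to compute it):

> printf '%.2f\n' $(echo '20 * sqrt(2)' | bc -l)
28.28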

DEPRECATED: Visualizing results using brain-view2

Our stats for this comparison are done, and it's time to look at the results. To do this we will need to unload CIVET, and load brain-view2. (NOTE: brain-view2 is not very customizable and is also no longer actively maintained, so is probably most useful if you just want to get a quick glance at your results. For more detailed visualization, skip directly to the next section.)

> module unload CIVET/1.1.12
> module load minc-toolkit/1.0.01
> module load brain-view2/1.0^minc-toolkit-1.0.01

Use brain-view2, the .obj file you created using the surfaces, and your output statistics table to view your results.

> brain-view2 lh_average.obj civet_ADNI_contrast_Feb_2_2016.txt 

Threshold in brain-view2 to see the most significant stats that survive FDR correction at the 1% and 5% levels, using the thresholds output by vertexFDR.
