From 1be40c487dd70f88619e4b9f5322ac8c3eafc3b4 Mon Sep 17 00:00:00 2001
From: Alon Daks
Date: Fri, 11 Dec 2015 00:35:09 -0800
Subject: [PATCH] Update README.md

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index 20a6f1c..87a4d28 100644
--- a/README.md
+++ b/README.md
@@ -9,3 +9,6 @@
 All code is packaged in a python module called `stat159lambda`. Ensure this
 module is on your python path with `export PYTHONPATH='/code'`. For example,
 `export PYTHONPATH='/Users/alondaks/project-lambda/code'`.
+## Environment Variables
+Data for a single subject is ~7.5 GB, owing to a 2-hour scan at 7 Tesla resolution, so the preprocessing scripts are memory intensive; reproduce the preprocessing on a machine with 70+ GB of memory. In addition to downloading the raw data, running `make data` from the project root will download all preprocessed data. The unix environment variable `USE_CACHED_DATA` tells the scripts whether to re-execute preprocessing. `USE_CACHED_DATA='True'` bypasses preprocessing whenever the resulting preprocessed file already exists in `data/processed/`, while `USE_CACHED_DATA='False'` recalculates preprocessed files even if they exist in `data/processed/`. `USE_CACHED_DATA` defaults to `'True'`.
+
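The caching rule the patch describes can be sketched as a small shell check. This is an illustrative sketch only, not the project's actual script logic: the `use_cached` helper and the `data/processed/subject1.npy` path are hypothetical, while the `USE_CACHED_DATA` variable, its `'True'` default, and the `data/processed/` directory come from the patch.

```shell
# Hypothetical helper: reuse a cached preprocessed file only when
# USE_CACHED_DATA (default 'True') allows it AND the file exists.
use_cached() {
    # $1 is the preprocessed file a script would otherwise regenerate
    [ "${USE_CACHED_DATA:-True}" = "True" ] && [ -f "$1" ]
}

# Example check against a hypothetical preprocessed file
if use_cached "data/processed/subject1.npy"; then
    echo "skipping preprocessing"
else
    echo "recomputing preprocessed file"
fi
```

Forcing a full recompute would then be a matter of `export USE_CACHED_DATA='False'` before invoking the preprocessing scripts.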