RF: tutorial_lib.py -> mvpa2.tutorial_suite (+boosted tarball version to 0.3)
commit 7ca38d10e52bf4e76841fa4ff209d1176247e44e (1 parent: 17ccae8)
Authored by @yarikoptic
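In user-facing terms, the rename means tutorial scripts no longer import the helper module from the tutorial data directory but from the PyMVPA package itself. A minimal before/after sketch, assuming a PyMVPA build that includes this commit and an installed tutorial dataset (get_haxby2001_data is one of the helpers the suite provides, as visible in the doc changes below):

# Before this commit: tutorial_lib.py shipped inside the tutorial data
# tarball, so its directory had to be on PYTHONPATH:
#     from tutorial_lib import *
# After this commit: the helpers live inside the PyMVPA package itself:
from mvpa2.tutorial_suite import *
ds = get_haxby2001_data()  # helper provided by the suite; still needs the tutorial data installed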
Makefile (6 changed lines)
@@ -373,7 +373,7 @@ dt-%: build
| grep -v filter.py | grep -v channel.py | grep "$*")
tm-%: build
- @PYTHONPATH=.:$(TUT_DIR):$(CURDIR)/doc/examples:$(PYTHONPATH) \
+ @PYTHONPATH=.:$(CURDIR)/doc/examples:$(PYTHONPATH) \
MVPA_MATPLOTLIB_BACKEND=agg \
MVPA_LOCATION_TUTORIAL_DATA=$(TUT_DIR) \
MVPA_DATADB_ROOT=datadb \
@@ -382,7 +382,7 @@ tm-%: build
testmanual: build testdocstrings
@echo "I: Testing code samples found in documentation"
- @PYTHONPATH=.:$(TUT_DIR):$(PYTHONPATH) \
+ @PYTHONPATH=.:$(PYTHONPATH) \
MVPA_MATPLOTLIB_BACKEND=agg \
MVPA_LOCATION_TUTORIAL_DATA=$(TUT_DIR) \
MVPA_DATADB_ROOT=datadb \
@@ -391,7 +391,7 @@ testmanual: build testdocstrings
testtutorial-%: build
@echo "I: Testing code samples found in tutorial part $*"
- @PYTHONPATH=.:$(TUT_DIR):$(PYTHONPATH) \
+ @PYTHONPATH=.:$(PYTHONPATH) \
MVPA_MATPLOTLIB_BACKEND=agg \
MVPA_LOCATION_TUTORIAL_DATA=$(TUT_DIR) \
$(NOSETESTS) --with-doctest --doctest-extension .rst \
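The dropped $(TUT_DIR) entries reflect that the helper module no longer lives in the tutorial data directory; only MVPA_LOCATION_TUTORIAL_DATA still has to point there. A rough, illustrative check of that assumption (not part of the Makefile), runnable wherever this PyMVPA build is importable:

# Illustrative check that the tutorial helpers now resolve from the mvpa2
# package itself, which is why $(TUT_DIR) could be dropped from PYTHONPATH
# while MVPA_LOCATION_TUTORIAL_DATA still points at the data:
import os.path
import mvpa2
import mvpa2.tutorial_suite
assert mvpa2.tutorial_suite.__file__.startswith(os.path.dirname(mvpa2.__file__))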
doc/source/datadb/tutorial_data.rst (9 changed lines)
@@ -88,10 +88,6 @@ start_tutorial_session.sh
Helper shell script to start an interactive session within IPython
to proceed with the tutorial code.
-tutorial_lib.py
- Helper Python module used through out the tutorial to avoid
- presenting sequences of common operations (e.g. loading data)
- multiple times.
Instructions
============
@@ -130,6 +126,11 @@ objects in ventral temporal cortex. Science 293, 2425–2430.
Changelog
=========
+0.3
+
+ * Removed tutorial_lib.py which is superseded by using
+   mvpa2.tutorial_suite
+
0.2
* Updated tutorial code to work with PyMVPA 0.6
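For scripts that must keep working both with an older PyMVPA plus the 0.2 tutorial data tarball (which still ships tutorial_lib.py) and with a PyMVPA that already provides mvpa2.tutorial_suite, a hypothetical compatibility shim could fall back to the old module:

# Hypothetical compatibility shim; prefer the packaged suite, fall back to the
# tarball-shipped helper module when the package does not provide it:
try:
    from mvpa2.tutorial_suite import *
except ImportError:
    from tutorial_lib import *  # requires the old tarball's directory on sys.path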
doc/source/tutorial_classifiers.rst (2 changed lines)
@@ -18,7 +18,7 @@ This is already the second time that we will engage in a classification
analysis, so let's first recap what we did before in the :ref:`first tutorial
part <chap_tutorial_start>`:
->>> from tutorial_lib import *
+>>> from mvpa2.tutorial_suite import *
>>> ds = get_haxby2001_data()
>>> clf = kNN(k=1, dfx=one_minus_correlation, voting='majority')
>>> cvte = CrossValidation(clf, HalfPartitioner(attr='runtype'))
doc/source/tutorial_datasets.rst (2 changed lines)
@@ -22,7 +22,7 @@ take a look at what a dataset consists of, and how it works.
In the simplest case, a dataset only contains *data* that is a matrix of
numerical values.
->>> from tutorial_lib import *
+>>> from mvpa2.tutorial_suite import *
>>> data = [[ 1, 1, -1],
... [ 2, 0, 0],
... [ 3, 1, 1],
doc/source/tutorial_eventrelated.rst (2 changed lines)
@@ -28,7 +28,7 @@ This is a common thing to do, for example, in ERP-analyses of EEG data.
Here we are going to employ a similar approach in our well-known example
data -- this time selecting a subset of ventral temporal regions.
->>> from tutorial_lib import *
+>>> from mvpa2.tutorial_suite import *
>>> ds = get_raw_haxby2001_data(roi=(36,38,39,40))
>>> print ds.shape
(1452, 941)
doc/source/tutorial_mappers.rst (2 changed lines)
@@ -39,7 +39,7 @@ new method to create the dataset, the ``dataset_wizard``. Here it is, fully
equivalent to a regular constructor call (i.e. `~mvpa2.datasets.base.Dataset`),
but we will shortly see some nice convenience aspects.
->>> from tutorial_lib import *
+>>> from mvpa2.tutorial_suite import *
>>> ds = dataset_wizard(np.ones((5, 12)))
>>> ds.shape
(5, 12)
doc/source/tutorial_searchlight.rst (2 changed lines)
@@ -36,7 +36,7 @@ recreate our preprocessed demo dataset. The code is taken verbatim from the
:ref:`previous tutorial part <chap_tutorial_classifiers>` and should raise
no questions. We get a dataset with one sample per category per run.
->>> from tutorial_lib import *
+>>> from mvpa2.tutorial_suite import *
>>> # alt: `ds = load_tutorial_results('ds_haxby2001')`
>>> ds = get_haxby2001_data(roi='vt')
>>> ds.shape
doc/source/tutorial_sensitivity.rst (2 changed lines)
@@ -22,7 +22,7 @@ to look at another approach to localization. To get started, we pre-process
the data as we have done before and perform volume averaging to get a
single sample per stimulus category and original experiment session.
->>> from tutorial_lib import *
+>>> from mvpa2.tutorial_suite import *
>>> # alt: `ds = load_tutorial_results('ds_haxby2001_blkavg_brain')`
>>> ds = get_raw_haxby2001_data(roi='brain')
>>> print ds.shape
doc/source/tutorial_significance.rst (2 changed lines)
@@ -218,7 +218,7 @@ To allow for easy inspection of dataset to prevent such obvious confounds,
`Dataset`) was constructed. Lets have yet another look at our 8-categories
dataset:
->>> from tutorial_lib import *
+>>> from mvpa2.tutorial_suite import *
>>> # alt: `ds = load_tutorial_results('ds_haxby2001')`
>>> ds = get_haxby2001_data(roi='vt')
>>> print ds.summary()
doc/source/tutorial_start.rst (2 changed lines)
@@ -23,7 +23,7 @@ functionality provided elsewhere. We start this tutorial by importing some
little helpers (including all of PyMVPA) we are going to use in the tutorial,
and whose purpose we are going to see shortly.
->>> from tutorial_lib import *
+>>> from mvpa2.tutorial_suite import *
Getting the data
================
tools/tutorial_lib.py → mvpa2/tutorial_suite.py (file renamed without changes)