doc updates

qmac committed Apr 26, 2018
1 parent 3ddf2a0 commit d05b1e3
Showing 11 changed files with 124 additions and 121 deletions.
2 changes: 1 addition & 1 deletion docs/_includes/_replacements.rst
@@ -5,7 +5,7 @@
.. |LengthExtractor| replace:: :py:class:`LengthExtractor`
.. |GoogleVisionAPILabelExtractor| replace:: :py:class:`GoogleVisionAPILabelExtractor`
.. |GoogleVisionAPIFaceExtractor| replace:: :py:class:`GoogleVisionAPIFaceExtractor`
.. |ClarifaiAPIExtractor| replace:: :py:class:`ClarifaiAPIExtractor`
.. |ClarifaiAPIImageExtractor| replace:: :py:class:`ClarifaiAPIImageExtractor`
.. |PredefinedDictionaryExtractor| replace:: :py:class:`PredefinedDictionaryExtractor`
.. |IndicoAPIExtractor| replace:: :py:class:`IndicoAPIExtractor`
.. |IndicoAPITextExtractor| replace:: :py:class:`IndicoAPITextExtractor`
2 changes: 1 addition & 1 deletion docs/config.rst
@@ -71,7 +71,7 @@ cache_transformers (bool)
~~~~~~~~~~~~~~~~~~~~~~~~~
When set to ``True``, the output produced by all ``.transform()`` calls will be cached in memory (filesystem caching is not currently available). This is the default, and can be very useful in cases where (a) many calls to commercial feature extraction services (e.g., the Google or IBM families of Extractors) are being made, or (b) there are intermediate |Stim| representations generated by |Converter| classes that are computationally expensive to produce. Setting ``cache_transformers`` to ``False`` will result in every ``transform()`` call being recomputed, with no intermediates stored in memory.
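A minimal sketch of disabling the cache at runtime (assuming a pandas-style ``config.set_option`` helper is available; treat the exact call as illustrative)::

    from pliers import config

    # Force every transform() call to be recomputed from scratch,
    # with no results held in memory (illustrative usage).
    config.set_option('cache_transformers', False)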

Note that caching in pliers (really, memoization) is based on the combination of the |Transformer| class, its initialization parameters, and the id of the input |Stim|. If any of these changes, results will be computed anew. So, for example, creating two separate instances of the |ClarifaiAPIExtractor|, each with different ``model`` arguments, will result in two separate calls being made to the Clarifai API even if the exact same |Stim| inputs are passed. (However, different instances of the same |ClarifaiAPIExtractor| initialized using the same arguments will still point to the same entry in the cache.)
Note that caching in pliers (really, memoization) is based on the combination of the |Transformer| class, its initialization parameters, and the id of the input |Stim|. If any of these changes, results will be computed anew. So, for example, creating two separate instances of the |ClarifaiAPIImageExtractor|, each with different ``model`` arguments, will result in two separate calls being made to the Clarifai API even if the exact same |Stim| inputs are passed. (However, different instances of the same |ClarifaiAPIImageExtractor| initialized using the same arguments will still point to the same entry in the cache.)
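A hedged illustration of this keying (the ``model`` names and image file below are purely illustrative, not values taken from this commit)::

    from pliers.stimuli import ImageStim
    from pliers.extractors import ClarifaiAPIImageExtractor

    img = ImageStim('scene.jpg')  # hypothetical local image file

    # Different initialization parameters -> different cache keys, so each
    # of these transform() calls hits the Clarifai API (model names are
    # illustrative).
    ext_general = ClarifaiAPIImageExtractor(model='general-v1.3')
    ext_food = ClarifaiAPIImageExtractor(model='food-items-v1.0')
    res_general = ext_general.transform(img)
    res_food = ext_food.transform(img)

    # Same class, same parameters, same Stim -> same cache entry, so this
    # call should be served from memory rather than the API.
    ext_general_again = ClarifaiAPIImageExtractor(model='general-v1.3')
    res_cached = ext_general_again.transform(img)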

default_converters (dict)
~~~~~~~~~~~~~~~~~~~~~~~~~
2 changes: 1 addition & 1 deletion docs/graphs.rst
@@ -87,7 +87,7 @@ The second element in each tuple contains any children nodes--i.e., nodes to w
nodes = [
(FrameSamplingFilter(hertz=2),
['GoogleVisionAPIFaceExtractor',
'ClarifaiAPIExtractor',
'ClarifaiAPIImageExtractor',
'GoogleVisionAPILabelExtractor'])
]

10 changes: 6 additions & 4 deletions docs/installation.rst
@@ -60,7 +60,9 @@ While installing pliers itself is usually straightforward, setting up some of th
+---------------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------+--------------------------------+---------------------------------------+
| IndicoAPIExtractor | `Indico.io API <https://indico.io>`__ | INDICO\_APP\_KEY | API key | 45f9f8a56e4194d3dce858db1e5c3ae4 |
+---------------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------+--------------------------------+---------------------------------------+
| ClarifaiAPIExtractor | `Clarifai image recognition API <https://clarifai.com>`__ | CLARIFAI\_API\_KEY | API key | 168ed02e137459ead66c3a661be7b784 |
| ClarifaiAPIImageExtractor | `Clarifai image recognition API <https://clarifai.com>`__ | CLARIFAI\_API\_KEY | API key | 168ed02e137459ead66c3a661be7b784 |
+---------------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------+--------------------------------+---------------------------------------+
| ClarifaiAPIVideoExtractor | `Clarifai video tagging API <https://clarifai.com>`__ | CLARIFAI\_API\_KEY | API key | 168ed02e137459ead66c3a661be7b784 |
+---------------------------------------+-----------------------------------------------------------------------------------------------------+--------------------------------------+--------------------------------+---------------------------------------+

\* Note that this is not the plaintext e-mail or username for your IBM services account
@@ -69,9 +71,9 @@ Once you've obtained API keys for the services you intend to use, there are two

::

from pliers.extractors import ClarifaiAPIExtractor
ext = ClarifaiAPIExtractor(app_id='my_clarifai_app_id',
app_secret='my_clarifai_app_secret')
from pliers.extractors import ClarifaiAPIImageExtractor
ext = ClarifaiAPIImageExtractor(app_id='my_clarifai_app_id',
app_secret='my_clarifai_app_secret')

Alternatively, you can store the appropriate values as environment variables, in which case you can initialize a Transformer without any arguments. This latter approach is generally preferred, as it doesn't require you to hardcode potentially sensitive values into your code. The mandatory environment variable names for each service are listed in the table above.
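A short sketch of the environment-variable route (the key value is the example-format string from the table above; setting it inside Python is only for illustration, since you would normally export it in your shell)::

    import os

    # Normally set in the shell, e.g. `export CLARIFAI_API_KEY=...`;
    # assigned here only so the example is self-contained.
    os.environ['CLARIFAI_API_KEY'] = '168ed02e137459ead66c3a661be7b784'

    from pliers.extractors import ClarifaiAPIImageExtractor

    # No credentials passed; the extractor picks up CLARIFAI_API_KEY itself
    ext = ClarifaiAPIImageExtractor()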

76 changes: 38 additions & 38 deletions docs/quickstart.rst
@@ -1343,25 +1343,25 @@ also demonstrates the concept of *chaining* multiple Transformer
objects. We first convert a video to a series of images, and then apply
an object-detection ``Extractor`` to each image.

Note, as with other examples above, that the ``ClarifaiAPIExtractor``
Note, as with other examples above, that the ``ClarifaiAPIImageExtractor``
wraps the Clarifai object recognition API, so you’ll need to have an API
key set up appropriately (if you don’t have an API key, and don’t want
to set one up, you can replace ``ClarifaiAPIExtractor`` with
to set one up, you can replace ``ClarifaiAPIImageExtractor`` with
``TensorFlowInceptionV3Extractor`` to get similar, though not quite as
accurate, results).

::

from pliers.filters import FrameSamplingFilter
from pliers.extractors import ClarifaiAPIExtractor, merge_results
from pliers.extractors import ClarifaiAPIImageExtractor, merge_results
video = join(get_test_data_path(), 'video', 'small.mp4')
# Sample 2 frames per second
sampler = FrameSamplingFilter(hertz=2)
frames = sampler.transform(video)
ext = ClarifaiAPIExtractor()
ext = ClarifaiAPIImageExtractor()
results = ext.transform(frames)
df = merge_results(results, )
df
@@ -1395,18 +1395,18 @@ accurate, results).
<th>duration</th>
<th>order</th>
<th>object_id</th>
<th>ClarifaiAPIExtractor#Lego</th>
<th>ClarifaiAPIImageExtractor#Lego</th>
<th>...</th>
<th>ClarifaiAPIExtractor#power</th>
<th>ClarifaiAPIExtractor#precision</th>
<th>ClarifaiAPIExtractor#production</th>
<th>ClarifaiAPIExtractor#research</th>
<th>ClarifaiAPIExtractor#robot</th>
<th>ClarifaiAPIExtractor#science</th>
<th>ClarifaiAPIExtractor#still life</th>
<th>ClarifaiAPIExtractor#studio</th>
<th>ClarifaiAPIExtractor#technology</th>
<th>ClarifaiAPIExtractor#toy</th>
<th>ClarifaiAPIImageExtractor#power</th>
<th>ClarifaiAPIImageExtractor#precision</th>
<th>ClarifaiAPIImageExtractor#production</th>
<th>ClarifaiAPIImageExtractor#research</th>
<th>ClarifaiAPIImageExtractor#robot</th>
<th>ClarifaiAPIImageExtractor#science</th>
<th>ClarifaiAPIImageExtractor#still life</th>
<th>ClarifaiAPIImageExtractor#studio</th>
<th>ClarifaiAPIImageExtractor#technology</th>
<th>ClarifaiAPIImageExtractor#toy</th>
</tr>
</thead>
<tbody>
@@ -1739,7 +1739,7 @@ examples above.
from os.path import join
from pliers.filters import FrameSamplingFilter
from pliers.converters import GoogleSpeechAPIConverter
from pliers.extractors import (ClarifaiAPIExtractor, GoogleVisionAPIFaceExtractor,
from pliers.extractors import (ClarifaiAPIImageExtractor, GoogleVisionAPIFaceExtractor,
ComplexTextExtractor, PredefinedDictionaryExtractor,
STFTAudioExtractor, VADERSentimentExtractor,
merge_results)
@@ -1754,7 +1754,7 @@
sampler = FrameSamplingFilter(every=10)
frames = sampler.transform(video)
obj_ext = ClarifaiAPIExtractor()
obj_ext = ClarifaiAPIImageExtractor()
obj_features = obj_ext.transform(frames)
features.append(obj_features)
@@ -1839,7 +1839,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.970786</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[0]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -1854,7 +1854,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.976996</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[10]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -1869,7 +1869,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.972223</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[20]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -1884,7 +1884,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.98288</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[30]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -1899,7 +1899,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.94764</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[40]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -1914,7 +1914,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.952409</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[50]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -1929,7 +1929,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.951445</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[60]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -1944,7 +1944,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.954552</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[70]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -1959,7 +1959,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.953084</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[80]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -1974,7 +1974,7 @@ examples above.
<td>0.833333</td>
<td>administration</td>
<td>0.947371</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[90]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2025,7 +2025,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
# Define nodes
nodes = [
(FrameSamplingFilter(every=10),
['ClarifaiAPIExtractor', 'GoogleVisionAPIFaceExtractor']),
['ClarifaiAPIImageExtractor', 'GoogleVisionAPIFaceExtractor']),
(STFTAudioExtractor(freq_bins=[(100, 300)])),
('GoogleSpeechAPIConverter',
['ComplexTextExtractor',
@@ -2087,7 +2087,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.970786</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[0]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2102,7 +2102,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.976996</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[10]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2117,7 +2117,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.972223</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[20]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2132,7 +2132,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.98288</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[30]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2147,7 +2147,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.94764</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[40]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2162,7 +2162,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.952409</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[50]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2177,7 +2177,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.951445</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[60]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2192,7 +2192,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.954552</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[70]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2207,7 +2207,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.953084</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[80]</td>
<td>VideoFrameStim</td>
<td>None</td>
@@ -2222,7 +2222,7 @@ image, audio or text inputs, using state-of-the-art tools and services!
<td>0.833333</td>
<td>administration</td>
<td>0.947371</td>
<td>ClarifaiAPIExtractor</td>
<td>ClarifaiAPIImageExtractor</td>
<td>frame[90]</td>
<td>VideoFrameStim</td>
<td>None</td>
3 changes: 2 additions & 1 deletion docs/reference.rst
@@ -146,7 +146,8 @@ Extractors (:mod:`pliers.extractors`)
:template: _class.rst

BrightnessExtractor
ClarifaiAPIExtractor
ClarifaiAPIImageExtractor
ClarifaiAPIVideoExtractor
FaceRecognitionFaceEncodingsExtractor
FaceRecognitionFaceLandmarksExtractor
FaceRecognitionFaceLocationsExtractor
4 changes: 2 additions & 2 deletions docs/transformers.rst
@@ -70,7 +70,7 @@ At present, pliers implements several dozen |Extractor| classes that span a wide
:template: _class.rst

BrightnessExtractor
ClarifaiAPIExtractor
ClarifaiAPIImageExtractor
FaceRecognitionFaceEncodingsExtractor
FaceRecognitionFaceLandmarksExtractor
FaceRecognitionFaceLocationsExtractor
@@ -120,7 +120,7 @@ At present, pliers implements several dozen |Extractor| classes that span a wide

FarnebackOpticalFlowExtractor

Note that, in practice, the number of features one can extract using the above classes is extremely large, because many of these Extractors return open-ended feature sets that are determined by the contents of the input |Stim| and/or the specified initialization arguments. For example, most of the image-labeling Extractors that rely on deep learning-based services (e.g., |GoogleVisionAPILabelExtractor| and |ClarifaiAPIExtractor|) will return feature information for any of the top N objects detected in the image. And the |PredefinedDictionaryExtractor| provides a standardized interface to a large number of online word lookup dictionaries (e.g., word norms for written frequency, age-of-acquisition, emotionality ratings, etc.).
Note that, in practice, the number of features one can extract using the above classes is extremely large, because many of these Extractors return open-ended feature sets that are determined by the contents of the input |Stim| and/or the specified initialization arguments. For example, most of the image-labeling Extractors that rely on deep learning-based services (e.g., |GoogleVisionAPILabelExtractor| and |ClarifaiAPIImageExtractor|) will return feature information for any of the top N objects detected in the image. And the |PredefinedDictionaryExtractor| provides a standardized interface to a large number of online word lookup dictionaries (e.g., word norms for written frequency, age-of-acquisition, emotionality ratings, etc.).
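As a hedged sketch of this open-endedness for |PredefinedDictionaryExtractor| (the ``variables`` names below follow the ``dictionary/column`` pattern but are illustrative and may not match the bundled dictionaries exactly)::

    from pliers.stimuli import TextStim
    from pliers.extractors import PredefinedDictionaryExtractor

    # The variables requested at initialization determine which feature
    # columns appear in the output, so the feature set is open-ended
    # (variable names here are illustrative).
    ext = PredefinedDictionaryExtractor(
        variables=['subtlexusfrequency/Lg10WF', 'aoa/AoA_Kup_lem'])
    result = ext.transform(TextStim(text='apple'))
    result.to_df()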

Working with Extractor results
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~