Commit

DOC: quick pass over docs -- trailing spaces/spelling
yarikoptic committed Apr 12, 2011
1 parent fdb9990 commit 5ca7e6e
Showing 4 changed files with 36 additions and 36 deletions.
6 changes: 3 additions & 3 deletions doc/index.rst
@@ -20,7 +20,7 @@
div.bodywrapper blockquote {
margin: 0 ;
}
</style>


@@ -51,9 +51,9 @@ User manual
Module reference
-----------------

-.. autosummary::
+.. autosummary::
:toctree: generated

Memory
Parallel
dump
2 changes: 1 addition & 1 deletion doc/installing.rst
@@ -54,7 +54,7 @@ the changes are local to your account and easy to clean up.

#. In the directory created by expanding the `joblib` tarball, run the
following command::

python setup.py install --prefix ~/usr

You should not be required to become administrator, if you have
58 changes: 29 additions & 29 deletions doc/memory.rst
@@ -1,6 +1,6 @@
-..
+..
For doctests:
>>> from joblib.testing import warnings_to_stdout
>>> warnings_to_stdout()

@@ -21,18 +21,18 @@ the same arguments.

..
Commented out in favor of briefness
You can use it as a context, with its `eval` method:

.. automethod:: Memory.eval

or decorate functions with the `cache` method:

.. automethod:: Memory.cache

-It works by explicitely saving the output to a file and it is designed to
+It works by explicitly saving the output to a file and it is designed to
work with non-hashable and potentially large input and output data types
-such as numpy arrays.
+such as numpy arrays.

A simple example:
~~~~~~~~~~~~~~~~~
@@ -42,7 +42,7 @@ A simple example:
>>> from tempfile import mkdtemp
>>> cachedir = mkdtemp()

-We can instanciate a memory context, using this cache directory::
+We can instantiate a memory context, using this cache directory::

>>> from joblib import Memory
>>> memory = Memory(cachedir=cachedir, verbose=0)
@@ -55,7 +55,7 @@ A simple example:
... return x

When we call this function twice with the same argument, it does not
-get executed the second time, an the output is loaded from the pickle
+get executed the second time, and the output gets loaded from the pickle
file::

>>> print f(1)
@@ -87,7 +87,7 @@ usage (:func:`joblib.dump`).

In short, `memoize` is best suited for functions with "small" input and
output objects, whereas `Memory` is best suited for functions with complex
-input and output objects, and agressive persistence to the disk.
+input and output objects, and aggressive persistence to the disk.


Using with `numpy`
@@ -174,9 +174,9 @@ return value is loaded from the disk using memmapping::
We need to close the memmap file to avoid file locking on Windows; closing
numpy.memmap objects is done with del, which flushes changes to the disk

>>> del res

.. note::

If the memory mapping mode used was 'r', as in the above example, the
@@ -198,9 +198,9 @@ return value is loaded from the disk using memmapping::
Gotchas
--------

-* **Function cache is identified by the function's name**. Thus if you have
-  the same name to different functions, their cache will override
-  each-others (you have 'name collisions'), and you will get unwanted
+* **Function cache is identified by the function's name**. Thus if you have
+  the same name to different functions, their cache will override
+  each-others (you have 'name collisions'), and you will get unwanted
re-run::

>>> @memory.cache
@@ -230,7 +230,7 @@ Gotchas

>>> f = memory.cache(lambda : my_print(1))
>>> g = memory.cache(lambda : my_print(2))

>>> f()
1
>>> f()
@@ -241,32 +241,32 @@ Gotchas
>>> f()
1
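The collision shown in the doctest above happens because every lambda is named `<lambda>`, so a cache keyed on the function's name cannot tell them apart. A minimal dict-based sketch (a toy key scheme for illustration, not joblib's real hashing) reproduces the effect:

```python
_cache = {}

def cache_by_name(func):
    """Toy cache keyed only on the function's name plus its arguments."""
    def wrapper(*args):
        key = (func.__name__, args)
        if key not in _cache:
            _cache[key] = func(*args)
        return _cache[key]
    return wrapper

f = cache_by_name(lambda: 1)
g = cache_by_name(lambda: 2)

print(f())  # 1 -- computed and stored under ('<lambda>', ())
print(g())  # 1 -- name collision: g hits f's cached entry
```

Both lambdas share the key `('<lambda>', ())`, so `g()` silently returns `f`'s cached result instead of 2.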

-..
+..
Thus to use lambda functions reliably, you have to specify the name
used for caching::
FIXME

# >>> f = make(func=lambda : my_print(1), cachedir=cachedir, name='f')
# >>> g = make(func=lambda : my_print(2), cachedir=cachedir, name='g')
-#
+#
# >>> f()
# 1
# >>> g()
# 2
# >>> f()

-* **memory cannot be used on some complex objects**, eg a callable
+* **memory cannot be used on some complex objects**, e.g. a callable
object with a `__call__` method.

-Howevers, it works on numpy ufuncs::
+However, it works on numpy ufuncs::

>>> sin = memory.cache(np.sin)
>>> print sin(0)
0.0

* **caching methods**: you cannot decorate a method at class definition,
-because when the class is instanciated, the first argument (self) is
+because when the class is instantiated, the first argument (self) is
*bound*, and no longer accessible to the `Memory` object. The following
code won't work::

@@ -276,7 +276,7 @@ Gotchas
def method(self, args):
pass

-The right way to do this is to decorate at instanciation time::
+The right way to do this is to decorate at instantiation time::

class Foo(object):

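The instantiation-time pattern this hunk introduces can be sketched end to end with a stdlib memoizer standing in for `memory.cache` (an assumption for illustration; real joblib persists results to disk):

```python
import functools

class Foo(object):

    def __init__(self):
        self.calls = 0
        # Decorate at instantiation time: `self` is already bound here,
        # so the cache only keys on the remaining arguments.
        self.method = functools.lru_cache(maxsize=None)(self._method)

    def _method(self, x):
        self.calls += 1   # counts actual executions, not cache hits
        return x * 2
```

Each `Foo` instance gets its own cache, and repeated calls to `foo.method(x)` with the same `x` execute the body only once.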
@@ -312,7 +312,7 @@ Useful methods of decorated functions
--------------------------------------

Function decorated by :meth:`Memory.cache` are :class:`MemorizedFunc`
-objects that, in addtion of behaving like normal functions, expose
+objects that, in addition of behaving like normal functions, expose
methods useful for cache exploration and management.

.. autoclass:: MemorizedFunc
@@ -322,14 +322,14 @@ methods useful for cache exploration and management.

..
Let us not forget to clean our cache dir once we are finished::
>>> import shutil
>>> shutil.rmtree(cachedir)
>>> import shutil
>>> shutil.rmtree(cachedir2)

And we check that it has indeed been remove::

>>> import os ; os.path.exists(cachedir)
False
>>> os.path.exists(cachedir2)
6 changes: 3 additions & 3 deletions doc/why.rst
@@ -31,7 +31,7 @@ Provenance tracking for understanding the code
.. topic:: But pipeline frameworks can get in the way
:class: warning

-We want our code to look like the underlying algorithm,
+We want our code to look like the underlying algorithm,
not like a software framework.

Joblib's approach
@@ -49,12 +49,12 @@ Design choices

* No dependencies other than Python

-* Robust, well-tested code, at the cost of functionnality
+* Robust, well-tested code, at the cost of functionality

* Fast and suitable for scientific computing on big dataset without
changing the original code

-* Only local imports: **embed joblib in your code by copying it**
+* Only local imports: **embed joblib in your code by copying it**


