updated sphinx documentation and readme. Included now also png files that are necessary to build the documentation.
commit ff2baeeea57ac43770739dba26b63ffa5a3cbc48 1 parent c01e484
Sebastian Walter authored
1  .gitignore
@@ -37,3 +37,4 @@ report
# exceptions
!.gitignore
+!.gitkeep
64 README.rst
@@ -8,7 +8,7 @@ Description:
The forward mode propagates univariate Taylor polynomials of arbitrary order.
Hence it is also possible to use AlgoPy to evaluate higher-order derivative tensors.
-
+
A speciality of AlgoPy is the ability to differentiate functions that contain
matrix functions such as +,-,*,/, dot, solve, qr, eigh, cholesky.
@@ -27,48 +27,55 @@ Documentation:
Available at http://packages.python.org/algopy/
For more documentation have a look at:
- 1) the talks in the documentation folder
- 2) the examples in the documentation/examples folder
-
-
+ 1) the talks in the ./documentation folder
+ 2) the examples in the ./documentation/examples folder
+ 3) the sphinx documentation in ./documentation/sphinx (run `make` there)
+
+
Example:
Compute directional derivatives of the function f(x)::
-
+
import numpy
- from algopy.utp import UTPM, qr, solve, dot, eigh
-
- def f(J):
- Q,R = qr(J)
- Id = numpy.eye(Np)
+ from algopy import UTPM, qr, solve, dot, eigh
+
+ def f(x):
+ N,M = x.shape
+ Q,R = qr(x)
+ Id = numpy.eye(M)
Rinv = solve(R,Id)
C = dot(Rinv,Rinv.T)
l,U = eigh(C)
return l[0]
+ x = UTPM.init_jacobian(numpy.random.random((50,10)))
+ y = f(x)
+ J = UTPM.extract_jacobian(y)
+
+ print 'Jacobian dy/dx =', J
+
-
Features:
Univariate Taylor Propagation:
-
+
* Univariate Taylor Propagation on Matrices (UTPM)
Implementation in: `algopy.utpm`
* Exact Interpolation of Higher Order Derivative Tensors:
(Hessians, etc.)
-
+
Reverse Mode:
-
+
ALGOPY also features functionality for convenient differentiation of a given
- algorithm. For that, the sequence of operation is recorded by tracing the
+ algorithm. For that, the sequence of operations is recorded by tracing the
evaluation of the algorithm. Implementation in: `./algopy/tracer.py`
-
+
Testing:
Uses numpy testing facilities. Simply run::
-
+
$ python -c "import algopy; algopy.test()"
-
+
Dependencies:
@@ -81,17 +88,18 @@ Dependencies:
Run tests:
* Nose
-
+
Documentation:
* sphinx
+ * matplotlib, mayavi2, yapgvb
Alternatives:
If you are looking for a robust tool for AD in Python you should try:
-
+
* `PYADOLC`_ a Python wrapper for ADOL-C (C++)
* `PYCPPAD`_ a Python wrapper for CppAD (C++)
-
+
However, their support for differentiation of Numerical Linear Algebra (NLA)
functions is only very limited.
@@ -108,19 +116,19 @@ Email:
Licence:
BSD style using http://www.opensource.org/licenses/bsd-license.php template
as it was on 2009-01-24 with the following substitutions:
-
+
* <YEAR> = 2008-2009
* <OWNER> = Sebastian F. Walter, sebastian.walter@gmail.com
* <ORGANIZATION> = contributors' organizations
* In addition, "Neither the name of the contributors' organizations" was changed to "Neither the names of the contributors' organizations"
-
-
+
+
Copyright (c) 2008-2009, Sebastian F. Walter
All rights reserved.
-
+
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
-
+
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
@@ -129,7 +137,7 @@ are permitted provided that the following conditions are met:
* Neither the names of the contributors' organizations nor the names of
its contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
-
+
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
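The README example above relies on AlgoPy's `UTPM` class. As a plain-NumPy cross-check that does not require AlgoPy (a sketch only; `rng`, `v` and `h` are illustrative choices, not part of the original), the same function and a finite-difference approximation of one directional derivative can be written as:

```python
import numpy

def f(x):
    # smallest eigenvalue of C = Rinv * Rinv.T, where x = Q R is the QR factorization
    Q, R = numpy.linalg.qr(x)
    Rinv = numpy.linalg.solve(R, numpy.eye(R.shape[0]))
    C = numpy.dot(Rinv, Rinv.T)
    l, U = numpy.linalg.eigh(C)   # eigenvalues in ascending order
    return l[0]

rng = numpy.random.RandomState(0)
x = rng.rand(50, 10)
v = rng.rand(50, 10)              # direction for the derivative
h = 1e-6
# central finite difference of the directional derivative df(x)[v]
dfdv = (f(x + h * v) - f(x - h * v)) / (2 * h)
print('f(x) =', f(x), 'directional derivative =', dfdv)
```

Finite differences only approximate what `UTPM.init_jacobian` / `UTPM.extract_jacobian` compute exactly, but they are a handy sanity check.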
9 documentation/sphinx/datastructure_and_algorithms.rst
@@ -1,27 +1,28 @@
Datastructure and Algorithms
+----------------------------
The mathematical object is a Univariate Taylor Polynomial over Matrices (UTPM)
.. math::
[x]_D = [x_0, \dots, x_{D-1}] = \sum_{d=0}^{D-1} x_d T^d \;,
-
+
where each :math:`x_d` is some array, e.g. a (5,7) array.
This mathematical object is represented by a numpy.ndarray with shape (D, P, 5, 7).
P>1 triggers a vectorized function evaluation.
All algorithms are implemented
in the following fashion::
-
+
def add(x_data, y_data, z_data):
z_data[...] = x_data[...] + y_data[...]
-
+
where the inputs `x_data,y_data` are numpy.ndarray's and `add` changes the elements
of the numpy.ndarray z. That is, the algorithms follow the same in-place calling convention as
LAPACK or Fortran routines. One can find the UTPM algorithms in `algopy/utpm/algorithms.py` where they are class functions
of a mixin class.
In practice, working with such algorithms is cumbersome. ALGOPY therefore also
-offers the class `algopy.UTPM` which is a thin wrapper around the algorithms
+offers the class `algopy.UTPM` which is a thin wrapper around the algorithms
and provides overloaded functions and operators. The data is saved in the attribute UTPM.data.
The following code shows how to use the algorithms
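The in-place, LAPACK-style convention described above can be illustrated with a self-contained NumPy sketch (a hedged illustration only; the actual algorithms live in `algopy/utpm/algorithms.py`):

```python
import numpy

D, P = 3, 1              # degree-D polynomial, P simultaneous evaluations
shape = (5, 7)           # shape of each coefficient array x_d

def add(x_data, y_data, z_data):
    # in-place, LAPACK style: the caller provides the output buffer
    z_data[...] = x_data[...] + y_data[...]

def mul(x_data, y_data, z_data):
    # truncated (Cauchy) product: z_d = sum_{k=0}^{d} x_k * y_{d-k}
    z_data[...] = 0
    for d in range(x_data.shape[0]):
        for k in range(d + 1):
            z_data[d] += x_data[k] * y_data[d - k]

x = numpy.random.rand(D, P, *shape)
y = numpy.random.rand(D, P, *shape)
z = numpy.empty_like(x)
add(x, y, z)

# degree-0-only polynomials behave like plain arrays under mul
a = numpy.zeros((D, P) + shape); a[0] = 2.0
b = numpy.zeros((D, P) + shape); b[0] = 3.0
c = numpy.empty_like(a)
mul(a, b, c)
```

The `algopy.UTPM` wrapper mentioned below hides exactly this buffer management behind overloaded operators.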
BIN  documentation/sphinx/example_tracer_cgraph.png
13 documentation/sphinx/examples/ampl_minimization_problem.rst
@@ -1,7 +1,14 @@
-Minimization Problem
---------------------
+Derivatives for Optimization
+----------------------------
-Computation of optimization relevant derivatives:
+Optimization algorithms often require:
* Hessian of the Lagrangian
 * Diagonal elements of the Hessian of the Lagrangian
+ * etc.
+
+
+The following code shows how these derivatives can be evaluated in AlgoPy:
+
+.. literalinclude:: ampl_minimization_problem.py
+
BIN  documentation/sphinx/examples/explicit_euler.png
BIN  documentation/sphinx/examples/implicit_euler.png
BIN  documentation/sphinx/examples/mayavi_3D_plot.png
10 documentation/sphinx/examples/minimal_surface.py
@@ -30,7 +30,7 @@ def O_tilde(u):
def dO_tilde(u):
# use ALGOPY to compute the gradient
g = cg.gradient([u])[0]
-
+
# on the edge the analytical solution is fixed
# so search direction must be zero on the boundary
@@ -61,7 +61,7 @@ def P(x, s, alpha):
b = U - x_alpha
return x_alpha - 1.*(a<0) * a + b * 1. * (b<0)
-
+
s = - dffcn(x)
k = 0
while pgn(s)>epsilon and k<= max_iter:
@@ -100,7 +100,11 @@ def P(x, s, alpha):
# # Plot with MAYAVI
x = y = range(numpy.shape(Z)[0])
-import enthought.mayavi.mlab as mlab
+
+try:
+ import enthought.mayavi.mlab as mlab
+except ImportError:
+ import mayavi.mlab as mlab
mlab.figure()
mlab.view(azimuth=130)
s = mlab.surf(x, y, Z, representation='wireframe', warp_scale='auto', line_width=1.)
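The `P(x, s, alpha)` fragments above come from a projected-gradient loop: the expression `x_alpha - 1.*(a<0)*a + b*1.*(b<0)` clamps the trial point into box constraints. A self-contained sketch of that projection step (illustrative names; `L` and `U` stand for the bound arrays, which are defined elsewhere in the full script):

```python
import numpy

def project_box(x, L, U):
    # clamp each component of x into the interval [L, U]
    return numpy.minimum(numpy.maximum(x, L), U)

L = numpy.zeros(5)
U = numpy.ones(5)
x = numpy.array([-0.5, 0.2, 0.8, 1.3, 0.5])
p = project_box(x, L, U)
```

Components below `L` are lifted to `L`, components above `U` are cut to `U`, and everything in between passes through unchanged.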
BIN  documentation/sphinx/examples/posterior_log_probability_cgraph.png
BIN  documentation/sphinx/examples/taylor_approximation.png
165 documentation/sphinx/index.rst
@@ -31,17 +31,13 @@ What can AlgoPy do for you?
* Hessian vector product
* vector Hessian vector product
* higher-order tensors
-
+
* Taylor series evaluation
* for modeling higher-order processes
- * can in principle be used to compute Taylor series expansions useful for ODE/DAE integration.
- Note that for efficient evaluation one would require to successively
- increase the degree of the Taylor polynomial arithmetic. This is not
- directly supported and thus AlgoPy requires :math:`d^3` instead of
- :math:`d^2` operations.
-
-
-
+ * could be used to compute Taylor series expansions for ODE/DAE integration.
+
+
+
Getting Started:
----------------
For the impatient, we show a minimalistic example how AlgoPy can be used to compute
@@ -51,7 +47,7 @@ derivatives.
:lines: 1-
If one executes that code one obtains as output::
-
+
$ python getting_started.py
jacobian = [ 135.42768462 41.08553692 15. ]
gradient = [array([ 135.42768462, 41.08553692, 15. ])]
@@ -63,7 +59,7 @@ If one executes that code one obtains as output::
Help improve AlgoPy
-------------------
-If you have any questions, suggestions or bug reports please use the mailing list
+If you have any questions, suggestions or bug reports please use the mailing list
http://groups.google.com/group/algopy?msg=new&lnk=gcis
or alternatively write me an email(sebastian.walter@gmail.com).
This will make it much easier for me to provide code/documentation that
@@ -71,11 +67,77 @@ is easier to understand. Of course, you are also welcome to contribute code,
bugfixes, examples, success stories ;), ...
-Current Issues
---------------
-The class `algopy.UTPM` is a replacement for `numpy.ndarray`.
-However, it is possible to have `numpy.ndarray`s with `algopy.UTPM` instances as
-elements. However, it is currently not possible to mix these operations.
+Issues and common pitfalls
+---------------------------
+
+.. warning::
+
+ Do not use `numpy.ndarray` with `algopy.UTPM` or `algopy.Function` as
+ elements!
+
+ Example 1::
+
+ import numpy, algopy
+
+ x = algopy.Function(1)
+ y = algopy.Function(2)
+
+ # wrong
+ z = numpy.array([x,y])
+
+ # right
+ z = algopy.zeros(2, dtype=x)
+ z[0] = x
+ z[1] = y
+
+ Example 2::
+
+ x = algopy.UTPM(numpy.ones((2,1)))
+ y = algopy.UTPM(2*numpy.ones((2,1)))
+
+ # wrong
+ z = numpy.array([x,y])
+
+ # right
+ z = algopy.zeros(2, dtype=x)
+ z[0] = x
+ z[1] = y
+
+
+.. warning::
+
+ Broadcasting does not work in the reverse mode with NumPy 1.6!
+ (it used to work for NumPy 1.4 ...)
+
+ This is due to unexpected behavior of numpy, see discussion on
+ http://thread.gmane.org/gmane.comp.python.numeric.general/51379
+
+
+ Example::
+
+ import numpy, algopy
+
+ def f(x):
+ N, = x.shape
+ A = algopy.zeros((N,N), dtype=x)
+ A[0,0] = x[2]
+ A[1,2] = x[1]
+ return algopy.sum(A*x)
+
+
+ cg = algopy.CGraph()
+
+ x = algopy.Function(numpy.ones(3))
+ y = f(x)
+
+ cg.independentFunctionList = [x]
+ cg.dependentFunctionList = [y]
+
+ print cg.gradient(numpy.array([1.,2.,3.]))
+
+ Such code may give wrong results on your machine if you happen to have the
+ wrong NumPy version.
+
Potential Improvements
@@ -86,26 +148,24 @@ Potential Improvements
* support for sparse Jacobian and sparse Hessian computations using graph
coloring as explained in http://portal.acm.org/citation.cfm?id=1096222
-
+
Related Work
-------------
+------------
AlgoPy has been influenced by the following publications:
* "ADOL-C: A Package for the Automatic Differentiation of Algorithms Written
in C/C++", Andreas Griewank, David Juedes, H. Mitev, Jean Utke, Olaf Vogel,
Andrea Walther
-
* "Evaluating Higher Derivative Tensors by Forward Propagation of Univariate
- Taylor Series", Andreas Griewank, Jean Utke and Andrea Walther
-
- * "Taylor series integration of differential-algebraic equations: automatic differentiation as a tool for
+ Taylor Series", Andreas Griewank, Jean Utke and Andrea Walther
+ * "Taylor series integration of differential-algebraic equations:
+ automatic differentiation as a tool for
   simulating rigid body mechanical systems", Eric Phipps, PhD thesis
-
* "Collected Matrix Derivative Results for Forward and Reverse Mode
- Algorithmic Differentiation", Mike Giles,
+ Algorithmic Differentiation", Mike Giles,
http://www.springerlink.com/content/h1750t57160w2782/
-
-
+
+
Installation and Upgrade:
-------------------------
@@ -119,9 +179,10 @@ Official releases:
- `$ easy_install --upgrade algopy` to upgrade to the newest version
Bleeding edge:
- * the most recent version is available at https://github.com/b45ch1/algopy .
- * includes additional documentation, e.g. talks and additional examples and the sphinx *.rst documents
-
+ * the most recent version is available at https://github.com/b45ch1/algopy.
+ * includes additional documentation, e.g. talks and additional examples and
+ the sphinx `*.rst` documents
+
Dependencies:
* numpy
* (optional/recommended) scipy (needed to support some linear algebra functions)
@@ -138,10 +199,10 @@ Forward Mode of AD:
with with matrix coefficients. More precisely, AlgoPy supports univariate Taylor
polynomial (UTP) arithmetic where the coefficients of the polynomial are numpy.ndarrays.
To distinguish Taylor polynomials from real vectors and matrices, they are written with enclosing brackets:
-
+
.. math::
[x]_D = [x_0, \dots, x_{D-1}] = \sum_{d=0}^{D-1} x_d T^d \;,
-
+
where each :math:`x_0, x_1, \dots` are arrays, e.g. a (5,7) array.
This mathematical object is represented by a numpy.ndarray with shape (D, P, 5, 7).
The :math:`T` is an indeterminate, i.e. a formal/dummy variable. Roughly speaking, this is the UTP equivalent to the imaginary number :math:`i` in complex arithmetic. The `P` can be used to compute several Taylor expansions at once. I.e., a vectorization to avoid the recomputation of the same functions with different inputs.
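To make the bracket notation concrete, here is a minimal scalar sketch of UTP arithmetic (a hedged illustration with scalar coefficients rather than ndarrays, i.e. effectively P=1 and empty coefficient shape; names are illustrative): propagating the seed :math:`[x]_3 = [x_0, 1, 0]` through :math:`f(x) = x^3` yields :math:`[f(x_0), f'(x_0), f''(x_0)/2]`.

```python
import numpy

D = 3   # carry coefficients x_0 .. x_{D-1}

def utp_mul(x, y):
    # truncated product of two UTPs given as length-D coefficient arrays
    z = numpy.zeros(D)
    for d in range(D):
        for k in range(d + 1):
            z[d] += x[k] * y[d - k]
    return z

# seed [x]_3 = [x0, 1, 0]; T plays a role analogous to the imaginary
# unit i in complex arithmetic, as the text notes
x0 = 1.5
x = numpy.array([x0, 1.0, 0.0])

# propagate f(x) = x * x * x through UTP arithmetic
y = utp_mul(utp_mul(x, x), x)
# y now holds [f(x0), f'(x0), f''(x0)/2] = [3.375, 6.75, 4.5]
```

In AlgoPy the coefficients are ndarrays and the P axis batches several such expansions at once, but the truncated-convolution rule is the same.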
@@ -173,56 +234,60 @@ Talks:
* :download:`Informal talk at the IWR Heidelberg, April 29th, 2010<./talks/informal_talk_iwr_heidelberg_theory_and_tools_for_algorithmic_differentiation.pdf>`.
* :download:`Univariate Taylor polynomial arithmetic applied to matrix factorizations in the forward and reverse mode of algorithmic differentiation, June 3rd, 2010, EuroAD in Paderborn<./talks/walter_euroad2010_paderborn_univariate_taylor_polynomial_arithmetic_applied_to_matrix_factorizations_in_the_forward_and_reverse_mode_of_algorithmic_differentiation.pdf>`.
-
+
Simple Examples:
.. toctree::
:maxdepth: 1
-
+
+ getting_started.rst
examples/series_expansion.rst
examples/first_order_forward.rst
-
+
Advanced Examples:
.. toctree::
:maxdepth: 1
-
+
examples/covariance_matrix_computation.rst
examples/error_propagation.rst
examples/moore_penrose_pseudoinverse.rst
examples/ode_solvers.rst
examples/comparison_forward_reverse_mode.rst
-
+ examples/ampl_minimization_problem.rst
+
Application Examples:
.. toctree::
:maxdepth: 1
-
+
examples/posterior_log_probability.rst
examples/leastsquaresfitting.rst
examples/hessian_of_potential_function.rst
examples/minimal_surface.rst
-
+
Additional Information:
.. toctree::
:maxdepth: 1
-
+
datastructure_and_algorithms.rst
examples_tracer.rst
+ examples/polarization.rst
+ symbolic_differentiation.rst
Current Issues:
---------------
-
+
* some algorithms require vectors to be columns of a matrix.
I.e. if x is a vector it should be initialized as
   x = UTPM(numpy.random.rand(D,P,N,1)) and not as
   UTPM(numpy.random.rand(D,P,N)) as one would typically do using numpy.
-
- * there is no vectorized reverse mode yet. That means that one can compute
- columns of a Jacobian of dimension (M,N) by propagating N directional
+
+ * there is no vectorized reverse mode yet. That means that one can compute
+ columns of a Jacobian of dimension (M,N) by propagating N directional
derivatives at once. In the reverse mode one would like to propagate M
adjoint directions at once. However, this is not implemented yet, i.e. one
has to repeat the procedure M times.
@@ -254,7 +319,7 @@ Version Changelog
* added comparison operators <,>,<=,>=,== to UTPM
* added UTPM.init_jac_vec and UTPM.extract_jac_vec
* added CGraph.function, CGraph.gradient, CGraph.hessian, CGraph.hess_vec
-
+
* Version 0.3.1
* replaced algopy.sum by a faster implementation
* fixed a bug in getitem of the UTPM instance: now works also with numpy.int64
@@ -264,14 +329,20 @@ Version Changelog
* fixed bug in tracing operations involving neg(x)
* added algopy.outer
* changed API of CGraph.hessian, CGraph.jac_vec etc. One has now to write
- CGraph.jacobian(x) instead of CGraph.jacobian([x]).
-
+ CGraph.jacobian(x) instead of CGraph.jacobian([x]).
+
+* Version 0.3.2
+ * improved error reporting in the reverse mode: when "something goes wrong"
+ in cg.gradient([1.,2.,3.]) one now gets a much more detailed traceback
+ * added A.reshape(...) support to the reverse mode
+ * improved support for broadcasting for UTPM instances
+
Unit Test
---------
AlgoPy uses the same testing facilities as NumPy. I.e., one can run the complete
unit test with::
-
+
$ python -c "import algopy; algopy.test()"
2  setup.py
@@ -55,7 +55,7 @@
PLATFORMS = ["all"]
MAJOR = 0
MINOR = 3
-MICRO = 1
+MICRO = 2
ISRELEASED = False
VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)