sparsegrad - automatic computation of sparse Jacobian

sparsegrad automatically and efficiently computes the analytical Jacobian of numpy vector-valued functions. It is designed to be useful for solving large systems of non-linear equations. sparsegrad is memory efficient because it does not store the graph of the computation. Arbitrary computations are supported, including indexing, matrix multiplication, branching, and custom functions.

Taking the Jacobian with respect to a variable x is done by replacing the numerical value of x with a sparsegrad seed:

>>> import numpy as np
>>> import sparsegrad.forward as ad
>>> def f(x):
...     return x - x[::-1]
>>> x = np.linspace(0, 1, 3)
>>> print(f(ad.seed(x)).dvalue)
(0, 0)      1.0
(0, 2)      -1.0
(2, 0)      -1.0
(2, 2)      1.0
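
Only the structurally non-zero entries appear: the middle row of the Jacobian is identically zero, since x[1] cancels against itself in x - x[::-1]. Because the Jacobian comes back as a scipy sparse matrix, it plugs directly into sparse linear solvers, which is the intended use in Newton-type iterations. The following is a minimal sketch of such a loop, not code from this repository: the residual system is invented for illustration, and it assumes the seeded result exposes the numerical value as .value alongside the Jacobian in .dvalue.

# Hedged sketch: Newton iteration driven by sparsegrad Jacobians.
# The residual is an invented example system; .value is assumed to
# hold the numerical result next to the .dvalue Jacobian.
import numpy as np
import scipy.sparse.linalg
import sparsegrad.forward as ad

def residual(x):
    # invented nonlinear system: x_i**2 - x_{n-1-i} - 1 = 0
    return x * x - x[::-1] - 1.0

x = np.zeros(5)
for _ in range(20):
    y = residual(ad.seed(x))
    F, J = y.value, y.dvalue   # residual vector and sparse Jacobian
    if np.linalg.norm(F) < 1e-12:
        break
    # solve J * dx = F with a sparse direct solver and update x
    x -= scipy.sparse.linalg.spsolve(J.tocsr(), F)
print(x)

Because the Jacobian is rebuilt at every iteration by simply re-running the function on a fresh seed, no taping or graph bookkeeping survives between iterations; this is the usage pattern the memory-efficiency claim above refers to.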

sparsegrad is written in pure Python. For easy installation and best portability, it contains no extension modules. On realistic problems, it can provide performance similar to or better than ADOL-C's best case of repeated calculation. This is possible thanks to algorithmic optimizations and to avoiding the slow code paths of scipy.sparse.

sparsegrad relies on numpy and scipy for computations. It is compatible with both Python 2.7 and 3.x.

Installation

pip install sparsegrad

It is recommended to run the test suite after installing:

python -c "import sparsegrad; sparsegrad.test()"