
Chaospy is a numerical tool for performing uncertainty quantification using polynomial chaos expansions and advanced Monte Carlo methods implemented in Python.

Installation

Installation should be straightforward using pip:

$ pip install chaospy

For more installation details, see the installation guide.

Example Usage

chaospy is designed to work well within the numerical Python ecosystem. You will therefore typically want to import NumPy alongside chaospy:

>>> import numpy
>>> import chaospy

chaospy is problem agnostic, so you can construct your model using any means you see fit. The only requirement is that the output is compatible with the numpy.ndarray format:

>>> coordinates = numpy.linspace(0, 10, 100)

>>> def forward_solver(coordinates, parameters):
...     """Function to do uncertainty quantification on."""
...     param_init, param_rate = parameters
...     return param_init*numpy.e**(-param_rate*coordinates)
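As a quick sanity check, the model above can be evaluated with fixed parameters before any uncertainty is introduced. This is a minimal sketch; the parameter values (1.5, 0.5) are arbitrary:

```python
import numpy

coordinates = numpy.linspace(0, 10, 100)

def forward_solver(coordinates, parameters):
    """Exponential decay model to do uncertainty quantification on."""
    param_init, param_rate = parameters
    return param_init*numpy.e**(-param_rate*coordinates)

# With fixed (non-random) parameters, the output is a deterministic
# decay curve starting at param_init.
curve = forward_solver(coordinates, (1.5, 0.5))
print(curve[0])
```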

We here assume that parameters contains aleatory variability with a known probability. In chaospy, this probability is formalized as a joint probability distribution. For example:

>>> distribution = chaospy.J(chaospy.Uniform(1, 2), chaospy.Normal(0, 2))

>>> print(distribution)
J(Uniform(lower=1, upper=2), Normal(mu=0, sigma=2))

Most probability distributions have an associated expansion of orthogonal polynomials. These can be automatically constructed:

>>> expansion = chaospy.generate_expansion(8, distribution)

>>> print(expansion[:5].round(8))
[1.0 q1 q0-1.5 q0*q1-1.5*q1 q0**2-3.0*q0+2.16666667]

Here the polynomials are defined positionally, such that q0 and q1 refer to the uniform and normal distributions respectively.

The distribution can also be used to create (pseudo-)random samples and low-discrepancy sequences. For example, to create Sobol sequence samples:

>>> samples = distribution.sample(1000, rule="sobol")

>>> print(samples[:, :4].round(8))
[[ 1.5         1.75        1.25        1.375     ]
 [ 0.         -1.3489795   1.3489795  -0.63727873]]

We can evaluate the forward solver using these samples:

>>> evaluations = numpy.array([forward_solver(coordinates, sample)
...                            for sample in samples.T])

>>> print(evaluations[:3, :5].round(8))
[[1.5        1.5        1.5        1.5        1.5       ]
 [1.75       2.00546578 2.29822457 2.63372042 3.0181921 ]
 [1.25       1.09076905 0.95182169 0.83057411 0.72477163]]

With all these components in place, we have what we need to perform point collocation. In other words, we can create a polynomial approximation of forward_solver:

>>> approx_solver = chaospy.fit_regression(expansion, samples, evaluations)

>>> print(approx_solver[:2].round(4))
[q0 -0.0002*q0*q1**3+0.0051*q0*q1**2-0.101*q0*q1+q0]

Since the model approximations are polynomials, we can do inference on them directly. For example:

>>> expected = chaospy.E(approx_solver, distribution)
>>> deviation = chaospy.Std(approx_solver, distribution)

>>> print(expected[:5].round(8))
[1.5        1.53092356 1.62757217 1.80240142 2.07915608]
>>> print(deviation[:5].round(8))
[0.28867513 0.43364958 0.76501802 1.27106355 2.07110879]

For more extensive guides on this approach and others, see the tutorial collection.
