Merge pull request #186 from QMCSoftware/develop
Develop
alegresor committed Sep 2, 2020
2 parents 33e5fc4 + be33fcb commit 6c70795
Showing 31 changed files with 1,047 additions and 154 deletions.
1 change: 1 addition & 0 deletions .travis.yml
@@ -62,6 +62,7 @@ install:
script:
- coverage run --source=./ -m unittest discover -s test/fasttests
- coverage run --append --source=./ -m unittest discover -s test/longtests
- cd qmcpy && pytest --doctest-modules --disable-pytest-warnings


after_success:
9 changes: 3 additions & 6 deletions README.md
@@ -1,15 +1,12 @@
[![Build Status](https://travis-ci.com/QMCSoftware/QMCSoftware.png?branch=master)](https://travis-ci.com/QMCSoftware/QMCSoftware)
[![codecov](https://codecov.io/gh/QMCSoftware/QMCSoftware/branch/master/graph/badge.svg)](https://codecov.io/gh/QMCSoftware/QMCSoftware)
[![Documentation Status](https://readthedocs.org/projects/qmcpy/badge/?version=latest)](https://qmcpy.readthedocs.io/en/latest/?badge=latest)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3964490.svg)](https://doi.org/10.5281/zenodo.3964490)
[![Build Status](https://travis-ci.com/QMCSoftware/QMCSoftware.png?branch=master)](https://travis-ci.com/QMCSoftware/QMCSoftware) [![codecov](https://codecov.io/gh/QMCSoftware/QMCSoftware/branch/master/graph/badge.svg)](https://codecov.io/gh/QMCSoftware/QMCSoftware) [![Documentation Status](https://readthedocs.org/projects/qmcpy/badge/?version=latest)](https://qmcpy.readthedocs.io/en/latest/?badge=latest) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3964490.svg)](https://doi.org/10.5281/zenodo.3964490)

# Quasi-Monte Carlo Community Software

Quasi-Monte Carlo (QMC) methods are used to approximate multivariate integrals. They have four main components: an integrand, a discrete distribution, summary output data, and a stopping criterion. Information about the integrand is obtained as a sequence of values of the function sampled at the data-sites of the discrete distribution. The stopping criterion tells the algorithm when the user-specified error tolerance has been satisfied. We are developing a framework that allows collaborators in the QMC community to develop plug-and-play modules in an effort to produce more efficient and portable QMC software. Each of the above four components is an abstract class. Abstract classes specify the common properties and methods of all subclasses. The ways in which the four kinds of classes interact with each other are also specified. Subclasses then flesh out different integrands, sampling schemes, and stopping criteria. Besides providing developers with a way to link their new ideas with those implemented by the rest of the QMC community, we also aim to provide practitioners with state-of-the-art QMC software for their applications.
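As a sketch of how these components compose (mirroring the Keister example notebook in `demos/integration_examples.ipynb` from this commit; the wildcard import and the `linear=True` lattice option are taken from that notebook and are assumptions, not a prescribed API):

```python
from qmcpy import *  # assumed import style, as in the demos

# discrete distribution: randomized lattice points mimicking U(0,1)^3
distribution = Lattice(dimension=3, backend='GAIL', linear=True)
# true measure: Gaussian with mean 0 and covariance 1/2, fed by the lattice samples
measure = Gaussian(distribution, covariance=1./2)
# integrand: the Keister function under that true measure
integrand = Keister(measure)
# stopping criterion: Bayesian cubature for lattices, run to an absolute tolerance
solution, data = CubBayesLatticeG(integrand, abs_tol=.001).integrate()
print(data)  # the accumulated data summarizes solution, n_total, error_bound, time_integrate
```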

<center><img src="https://github.com/QMCSoftware/QMCSoftware/blob/master/sphinx/logo/qmcpy_logo.png?raw=true" alt="QMCPy logo" height=200px width=200px/>
<img src="https://github.com/QMCSoftware/QMCSoftware/blob/master/sphinx/logo/qmcpy_logo.png?raw=true" alt="QMCPy logo" height=200px width=200px/>

[Homepage](https://qmcsoftware.github.io/QMCSoftware/) | [GitHub](https://github.com/QMCSoftware/QMCSoftware) | [Read the Docs](https://qmcpy.readthedocs.io/en/latest/) | [PyPI](https://pypi.org/project/qmcpy/) | [Blogs](http://qmcpy.wordpress.com/) | [Contributing](https://github.com/QMCSoftware/QMCSoftware/blob/master/CONTRIBUTING.md) | [Issues](https://github.com/QMCSoftware/QMCSoftware/issues)</center>
[Homepage](https://qmcsoftware.github.io/QMCSoftware/) ~ [GitHub](https://github.com/QMCSoftware/QMCSoftware) ~ [Read the Docs](https://qmcpy.readthedocs.io/en/latest/) ~ [PyPI](https://pypi.org/project/qmcpy/) ~ [Blogs](http://qmcpy.wordpress.com/) ~ [Contributing](https://github.com/QMCSoftware/QMCSoftware/blob/master/CONTRIBUTING.md) ~ [Issues](https://github.com/QMCSoftware/QMCSoftware/issues)

----

58 changes: 30 additions & 28 deletions demos/importance_sampling.ipynb

Large diffs are not rendered by default.

71 changes: 65 additions & 6 deletions demos/integration_examples.ipynb
@@ -70,7 +70,7 @@
" n_total 2^(10)\n",
" solution 2.172\n",
" error_bound 0.011\n",
" time_integrate 0.003\n"
" time_integrate 0.002\n"
]
}
],
@@ -141,7 +141,7 @@
" n_total 885832\n",
" error_bound 0.025\n",
" confid_int [6.245 6.296]\n",
" time_integrate 1.936\n"
" time_integrate 2.002\n"
]
}
],
@@ -187,7 +187,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Solution: 6.2781 \n",
"Solution: 6.2778 \n",
"AsianOption (Integrand Object)\n",
" volatility 2^(-1)\n",
" call_put call\n",
@@ -215,11 +215,11 @@
"MeanVarData (AccumulateData Object)\n",
" levels 3\n",
" solution 6.278\n",
" n [1180803. 139711. 15491.]\n",
" n_total 1339077\n",
" n [1181501. 138890. 15497.]\n",
" n_total 1338960\n",
" error_bound 0.026\n",
" confid_int [6.252 6.304]\n",
" time_integrate 0.445\n"
" time_integrate 0.465\n"
]
}
],
@@ -237,6 +237,65 @@
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Keister Example using Bayesian Cubature\n",
"\n",
"This examples repeats the Keister using cubBayesLatticeG stopping criterion:\n",
"\n",
"* Keister integrand: $y_j = \\pi^{d/2} \\cos(||x_j||_2)$\n",
" \n",
"* Gaussian true measure: $\\mathcal{N}(0,\\frac{1}{2})$\n",
" \n",
"* Lattice discrete distribution: $x_j \\overset{lLD}{\\sim} \\mathcal{U}(0,1)$"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Solution: 2.1683 \n",
"Keister (Integrand Object)\n",
"Lattice (DiscreteDistribution Object)\n",
" dimension 3\n",
" randomize 1\n",
" seed 4048647197\n",
" mimics StdUniform\n",
" backend GAIL\n",
"Gaussian (TrueMeasure Object)\n",
" mean 0\n",
" covariance 2^(-1)\n",
" decomp_type pca\n",
"CubBayesLatticeG (StoppingCriterion Object)\n",
" abs_tol 0.001\n",
" rel_tol 0\n",
" n_init 2^(8)\n",
" n_max 2^(22)\n",
"LDTransformBayesData (AccumulateData Object)\n",
" n_total 8192\n",
" solution 2.168\n",
" error_bound 4.90e-04\n",
" time_integrate 0.128\n"
]
}
],
"source": [
"dimension=3\n",
"abs_tol=.001\n",
"distribution = Lattice(dimension=dimension, backend='GAIL', linear=True)\n",
"measure = Gaussian(distribution, covariance=1./2)\n",
"integrand = Keister(measure)\n",
"solution,data = CubBayesLatticeG(integrand,abs_tol=abs_tol).integrate()\n",
"print(data)"
]
},
{
"cell_type": "code",
"execution_count": null,
10 changes: 5 additions & 5 deletions demos/nei_demo.ipynb

Large diffs are not rendered by default.

314 changes: 235 additions & 79 deletions demos/qei-demo-for-blog.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion demos/qmcpy_intro.ipynb
@@ -448,7 +448,7 @@
" n_total 6479\n",
" error_bound 0.010\n",
" confid_int [0.816 0.836]\n",
" time_integrate 0.003\n"
" time_integrate 0.002\n"
]
}
],
8 changes: 4 additions & 4 deletions demos/quickstart.ipynb
@@ -113,12 +113,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Solution: 1.8081 \n",
"Solution: 1.8082 \n",
"CustomFun (Integrand Object)\n",
"Lattice (DiscreteDistribution Object)\n",
" dimension 2^(1)\n",
" randomize 1\n",
" seed 2287727773\n",
" seed 3898403814\n",
" mimics StdUniform\n",
" backend GAIL\n",
"Gaussian (TrueMeasure Object)\n",
@@ -133,8 +133,8 @@
"LDTransformData (AccumulateData Object)\n",
" n_total 2^(13)\n",
" solution 1.808\n",
" error_bound 5.04e-04\n",
" time_integrate 0.014\n"
" error_bound 5.03e-04\n",
" time_integrate 0.013\n"
]
}
],
2 changes: 1 addition & 1 deletion demos/sample_scatter_plots.ipynb
@@ -356,7 +356,7 @@
" n_total 125\n",
" error_bound 0.306\n",
" confid_int [1.577 2.189]\n",
" time_integrate 6.66e-04\n"
" time_integrate 7.57e-04\n"
]
}
],
2 changes: 1 addition & 1 deletion demos/sampling_measures.ipynb
@@ -416,7 +416,7 @@
" n_total 2^(14)\n",
" solution 0.849\n",
" error_bound 5.14e-04\n",
" time_integrate 0.239\n",
" time_integrate 0.240\n",
"Within tolerance of true value 0.849: True\n"
]
}
1 change: 1 addition & 0 deletions qmcpy/accumulate_data/__init__.py
@@ -2,5 +2,6 @@
from .mean_var_data import MeanVarData
from .mean_var_data_rep import MeanVarDataRep
from .ld_transform_data import LDTransformData
from .ld_transform_bayes_data import LDTransformBayesData
from .mlmc_data import MLMCData
from .mlqmc_data import MLQMCData
169 changes: 169 additions & 0 deletions qmcpy/accumulate_data/ld_transform_bayes_data.py
@@ -0,0 +1,169 @@
from ._accumulate_data import AccumulateData
from ..integrand import do_period_transform
from ..util import MaxSamplesWarning

from numpy import array, nan
import warnings
import numpy as np


class LDTransformBayesData(AccumulateData):
    """
    Update and store transformation data based on low-discrepancy sequences.
    See the stopping criteria that utilize this object for references.
    """

    parameters = ['n_total', 'solution', 'error_bound']

    def __init__(self, stopping_criterion, integrand, m_min: int, m_max: int):
        """
        Args:
            stopping_criterion (StoppingCriterion): a StoppingCriterion instance
            integrand (Integrand): an Integrand instance
            m_min (int): initial n == 2^m_min
            m_max (int): max n == 2^m_max
        """
        # Extract attributes from integrand
        self.stopping_criterion = stopping_criterion
        self.integrand = integrand
        self.measure = self.integrand.measure
        self.distribution = self.measure.distribution
        self.dim = stopping_criterion.dim

        # Set attributes
        self.m_min = m_min
        self.m_max = m_max
        self.debugEnable = True

        self.n_total = 0  # total number of samples generated
        self.solution = nan

        self.iter = 0
        self.m = self.m_min
        self.mvec = np.arange(self.m_min, self.m_max + 1, dtype=int)

        # Initialize various temporary storage between iterations
        self.xpts_ = array([])  # shifted lattice points
        self.xun_ = array([])  # un-shifted lattice points
        self.ftilde_ = array([])  # fourier transformed integrand values
        self.ff = do_period_transform(self.integrand.f,
                                      stopping_criterion.ptransform)  # integrand after the periodization transform

        super(LDTransformBayesData, self).__init__()

    def update_data(self):
        """ See abstract method. """
        # Generate sample values
        if self.iter < len(self.mvec):
            self.ftilde_, self.xun_, self.xpts_ = self.iter_fft(self.iter, self.xun_, self.xpts_, self.ftilde_)

            self.m = self.mvec[self.iter]
            self.iter += 1
            # update total samples
            self.n_total = 2 ** self.m  # total evaluations so far
        else:
            warnings.warn(f'Already used maximum allowed sample size {2 ** self.m_max}.'
                          f' Note that error tolerances may no longer be satisfied',
                          MaxSamplesWarning)

        return self.xun_, self.ftilde_, self.m

    # Efficient FFT computation algorithm, avoids recomputing the full FFT
    def iter_fft(self, iter, xun, xpts, ftilde_prev):
        m = self.mvec[iter]
        n = 2 ** m

        # In every iteration except the first, the number of points n is doubled,
        # but the FFT is only computed for the newly added points;
        # the previously computed FFT is reused.
        if iter == 0:
            # In the first iteration compute the full FFT
            # xun_ = mod(bsxfun( @ times, (0:1 / n:1-1 / n)',self.gen_vec),1)
            # xun_ = np.arange(0, 1, 1 / n).reshape((n, 1))
            # xun_ = np.mod((xun_ * self.gen_vec), 1)
            # xpts_ = np.mod(bsxfun( @ plus, xun_, shift), 1) # shifted

            xpts_, xun_ = self.distribution.gen_samples(n_min=0, n_max=n, warn=False, return_non_random=True)

            # Compute initial FFT
            ftilde_ = np.fft.fft(self.ff(xpts_))  # evaluate the integrand's FFT
            ftilde_ = ftilde_.reshape((n, 1))
        else:
            # xunnew = np.mod(bsxfun( @ times, (1/n : 2/n : 1-1/n)',self.gen_vec),1)
            # xunnew = np.arange(1 / n, 1, 2 / n).reshape((n // 2, 1))
            # xunnew = np.mod(xunnew * self.gen_vec, 1)
            # xnew = np.mod(bsxfun( @ plus, xunnew, shift), 1)

            xnew, xunnew = self.distribution.gen_samples(n_min=n // 2, n_max=n, return_non_random=True)

            [xun_, xpts_] = self.merge_pts(xun, xunnew, xpts, xnew, n, self.dim)
            mnext = m - 1

            # Compute FFT on the next set of new points
            ftilde_next_new = np.fft.fft(self.ff(xnew))
            ftilde_next_new = ftilde_next_new.reshape((n // 2, 1))
            if self.debugEnable:
                self.alert_msg(ftilde_next_new, 'Nan', 'Inf')

            # combine the previous batch and new batch to get the FFT on all points
            ftilde_ = self.merge_fft(ftilde_prev, ftilde_next_new, mnext)

        return ftilde_, xun_, xpts_

    # merges two halves of the FFT using the butterfly technique
    @staticmethod
    def merge_fft(ftilde_new, ftilde_next_new, mnext):
        ftilde_new = np.vstack([ftilde_new, ftilde_next_new])
        nl = 2 ** mnext
        ptind = np.ndarray(shape=(2 * nl, 1), buffer=np.array([True] * nl + [False] * nl), dtype=bool)
        coef = np.exp(-2 * np.pi * 1j * np.ndarray(shape=(nl, 1), buffer=np.arange(0, nl), dtype=int) / (2 * nl))
        coefv = np.tile(coef, (1, 1))
        evenval = ftilde_new[ptind].reshape((nl, 1))
        oddval = ftilde_new[~ptind].reshape((nl, 1))
        ftilde_new[ptind] = np.squeeze(evenval + coefv * oddval)
        ftilde_new[~ptind] = np.squeeze(evenval - coefv * oddval)
        return ftilde_new

    # interleaves the newly generated points with the old set
    # xun - unshifted points
    @staticmethod
    def merge_pts(xun, xunnew, x, xnew, n, d):
        temp = np.zeros((n, d))
        temp[0::2, :] = xun
        temp[1::2, :] = xunnew
        xun = temp
        temp = np.zeros((n, d))
        temp[0::2, :] = x
        temp[1::2, :] = xnew
        x = temp
        return xun, x

    # prints a debug message if the given variable is Inf, NaN, complex, etc.
    # Example: alert_msg(x, 'Inf', 'Imag')
    # prints if variable 'x' is either infinite or imaginary
    @staticmethod
    def alert_msg(*args):
        varargin = args
        nargin = len(varargin)
        if nargin > 1:
            i_start = 0
            var_tocheck = varargin[i_start]
            i_start = i_start + 1
            inpvarname = 'variable'

            while i_start < nargin:
                var_type = varargin[i_start]
                i_start = i_start + 1

                if var_type == 'Nan':
                    if np.any(np.isnan(var_tocheck)):
                        print(f'{inpvarname} has NaN values')
                elif var_type == 'Inf':
                    if np.any(np.isinf(var_tocheck)):
                        print(f'{inpvarname} has Inf values')
                elif var_type == 'Imag':
                    if not np.all(np.isreal(var_tocheck)):
                        print(f'{inpvarname} has complex values')
                else:
                    print('unknown type check requested!')
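The `merge_fft` butterfly above follows the standard decimation-in-time FFT identity, which is what lets `iter_fft` reuse the previously computed transform instead of recomputing the full FFT at every doubling. A minimal NumPy sketch with hypothetical data (independent of the class, assuming the interleaved ordering produced by `merge_pts`) checking the combine step against a full FFT:

```python
import numpy as np

rng = np.random.default_rng(7)
nl = 8                              # half length; the merged transform covers 2*nl points
f_even = rng.standard_normal(nl)    # integrand values at the existing (even-index) points
f_odd = rng.standard_normal(nl)     # integrand values at the newly added (odd-index) points

E = np.fft.fft(f_even)              # FFT of the old half, as kept between iterations
O = np.fft.fft(f_odd)               # FFT of the new half, as computed for the new points

# butterfly combine, mirroring merge_fft: first half E + w*O, second half E - w*O
w = np.exp(-2j * np.pi * np.arange(nl) / (2 * nl))
merged = np.concatenate([E + w * O, E - w * O])

# reference: full FFT over the interleaved sequence, ordered as in merge_pts
full = np.zeros(2 * nl)
full[0::2] = f_even
full[1::2] = f_odd
print(np.allclose(merged, np.fft.fft(full)))   # expect True
```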
4 changes: 3 additions & 1 deletion qmcpy/discrete_distribution/c_lib/sobol_qrng.c
@@ -18382,7 +18382,9 @@ static int minit[sobolMaxDim-1][sobolMaxDegree] =
*/
void bintogray(int n1, int size, int *b)
{
int a[size], i = 0;
int *a = (int *) calloc(size, sizeof(int));
int i = 0;

while(n1 != 0) /* converting number to its binary equivalent */
{
/* if(i > size - 1) { */
6 changes: 3 additions & 3 deletions qmcpy/discrete_distribution/c_lib/sobol_seq51.c
@@ -116,7 +116,7 @@ static int DIRECT[MAXDIM][QP] = {

void sobol_seq51(int n, int d, int skip, double *x){
int m, im, i, j, maskv[d], k;
for(i=skip;i<n;i++){
for(i=skip;i<(n+skip);i++){
m = 0;
for(k=0;k<d;k++){maskv[k] = 0;}
im = i^(i>>1);
@@ -127,11 +127,11 @@
im = (im>>1);
m = m+1;}
for(j=0;j<d;j++){
x[i*d+j] = maskv[j]*SCALE;}}}
x[(i-skip)*d+j] = maskv[j]*SCALE;}}}

/*
int main(){
int n=8, d=2, skip=0;
int n=8, d=2, skip=8;
double *x = (double *) calloc(n*d, sizeof(double));
sobol_seq51(n, d, skip, x);
for(int i=0; i<n; i++){
