Generation of initial data
The following code snippet simulates a 1D Ornstein-Uhlenbeck process:
\begin{equation} \mathrm{d}x_t = (\mu - x_t)\,\mathrm{d}t + \sqrt{2D}\,\mathrm{d}W_t \end{equation}
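The code below integrates this SDE with the Euler-Maruyama scheme: writing \(\Delta t\) for the time step and \(\xi_n\) for independent standard normal draws, the update used in the code is
\begin{equation} x_{n+1} = x_n + (\mu - x_n)\,\Delta t + \sqrt{2D\,\Delta t}\,\xi_n \end{equation}
The noise increment scales as \(\sqrt{\Delta t}\), which is why the Gaussian draw is multiplied by \(\sqrt{2D\,\Delta t}\) rather than \(\sqrt{2D}\).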
Parameters
The simulation uses a time step dt = 0.1, a mean mu = 0 and a diffusion coefficient D = 0.5. With mu = 0 the process relaxes towards zero on a unit time scale, and its stationary distribution is a Gaussian of variance D.
int i;
double t = 0;
double x = 0, dw;  // x must be initialised before the first update
double dt = 0.1;   // integration time step
double mu = 0.0;   // mean of the process
double D = 0.5;    // diffusion coefficient
Simulation code
std::default_random_engine generator;
std::normal_distribution<double> distribution(0.0,1.0);
<<parameters>>
for(i = 0; i < 100; i++){
    t = i*dt;
    dw = distribution(generator);
    // Euler-Maruyama step: the Gaussian increment scales as sqrt(dt)
    x = x + (mu - x)*dt + sqrt(2.*D*dt)*dw;
    std::cout << t << " " << x << std::endl;
}
#+RESULTS[626f07e7bcc73754f37a4bab79e85336ae00595b]: initial_data
| 0 | -0.121966 |
| 0.1 | -1.19659 |
| 0.2 | -0.392639 |
| 0.3 | -1.42856 |
| 0.4 | -1.25244 |
| 0.5 | -0.382359 |
| 0.6 | -0.310517 |
| 0.7 | -0.806102 |
| 0.8 | -0.26296 |
| 0.9 | -0.0359645 |
| 1 | 2.09231 |
| 1.1 | 2.2548 |
| 1.2 | 3.05028 |
| 1.3 | 3.52866 |
| 1.4 | 3.04795 |
| 1.5 | 2.40635 |
| 1.6 | 1.47856 |
| 1.7 | 2.63912 |
| 1.8 | 1.43937 |
| 1.9 | -1.37694 |
| 2 | -2.01198 |
| 2.1 | -2.33235 |
| 2.2 | -4.1144 |
| 2.3 | -3.65259 |
| 2.4 | -4.18196 |
| 2.5 | -3.77693 |
| 2.6 | -3.58392 |
| 2.7 | -3.90498 |
| 2.8 | -3.58822 |
| 2.9 | -2.37115 |
| 3 | -1.75713 |
| 3.1 | -2.33068 |
| 3.2 | -0.695309 |
| 3.3 | 0.180122 |
| 3.4 | -0.173122 |
| 3.5 | 0.516759 |
| 3.6 | 1.67661 |
| 3.7 | 1.55543 |
| 3.8 | 1.11477 |
| 3.9 | 2.15416 |
| 4 | 0.624469 |
| 4.1 | 0.43094 |
| 4.2 | -0.0831131 |
| 4.3 | -0.0025312 |
| 4.4 | 1.67099 |
| 4.5 | 1.49365 |
| 4.6 | 0.938674 |
| 4.7 | 0.52955 |
| 4.8 | -0.130365 |
| 4.9 | -1.71365 |
| 5 | -2.53041 |
| 5.1 | -1.48023 |
| 5.2 | -0.227506 |
| 5.3 | -1.53032 |
| 5.4 | -2.34904 |
| 5.5 | -0.544567 |
| 5.6 | -0.331371 |
| 5.7 | -0.460139 |
| 5.8 | 1.00405 |
| 5.9 | -1.09 |
| 6 | -0.413041 |
| 6.1 | -0.385884 |
| 6.2 | 0.168305 |
| 6.3 | -1.04607 |
| 6.4 | 0.175285 |
| 6.5 | 0.715089 |
| 6.6 | -0.792143 |
| 6.7 | -0.826955 |
| 6.8 | 0.196746 |
| 6.9 | -0.0417706 |
| 7 | -0.455498 |
| 7.1 | -1.45828 |
| 7.2 | -1.44729 |
| 7.3 | -2.40351 |
| 7.4 | -1.41705 |
| 7.5 | -1.35779 |
| 7.6 | -1.12616 |
| 7.7 | -1.08029 |
| 7.8 | -0.953417 |
| 7.9 | 0.283667 |
| 8 | 0.0974505 |
| 8.1 | 1.41539 |
| 8.2 | 2.37602 |
| 8.3 | 2.58899 |
| 8.4 | 2.4627 |
| 8.5 | 2.27544 |
| 8.6 | 2.17481 |
| 8.7 | 0.10489 |
| 8.8 | 1.085 |
| 8.9 | 0.750345 |
| 9 | 1.60538 |
| 9.1 | 2.43575 |
| 9.2 | 1.45027 |
| 9.3 | 1.34788 |
| 9.4 | 3.21787 |
| 9.5 | 2.77432 |
| 9.6 | 3.2727 |
| 9.7 | 0.0888596 |
| 9.8 | 1.42529 |
| 9.9 | 1.01833 |
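As a cross-check on the C++ block above, the same Euler-Maruyama scheme can be sketched in Python. The function name `simulate_ou` and its defaults are my own naming; the parameter values mirror the C++ block, but the random stream differs, so the numbers will not reproduce the table above.

```python
import numpy as np

def simulate_ou(mu=0.0, D=0.5, dt=0.1, n_steps=100, seed=0):
    """Euler-Maruyama integration of dx = -(x - mu) dt + sqrt(2 D) dW."""
    rng = np.random.default_rng(seed)
    x = 0.0
    out = np.empty((n_steps, 2))
    for i in range(n_steps):
        t = i * dt
        # One Euler-Maruyama step: drift plus a sqrt(dt)-scaled Gaussian kick.
        x = x + (mu - x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        out[i] = (t, x)
    return out

ts = simulate_ou()
```

A quick sanity check on a long run: the discrete chain is an AR(1) process with stationary variance 2D/(2 - dt), which is about 0.53 for dt = 0.1 and tends to D = 0.5 as dt goes to zero.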
Visualising the timeseries
Let’s use Python to plot the time series by fetching the data generated by the C++ code block:
import numpy as np
import matplotlib.pyplot as plt
# `timeseries` is passed in from the C++ results table (org-babel :var)
timeseries = np.array(timeseries)
plt.plot(timeseries[:, 0], timeseries[:, 1])
plt.savefig("timeseries_vis.png")
return "timeseries_vis.png"
Statistics!
from numpy import array, mean
# `x` is the (t, x) results table passed in via org-babel :var
values = array(x)[:, 1]
return mean(values)
Here we discuss the fact that the empirical average over the timeseries is call_mean(initial_data), and how interesting it is.
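As a stand-alone illustration of the statistics block, the same column extraction and averaging can be run on a small hypothetical table; the numbers below are placeholders standing in for the org-babel variable `x`, not values from the results above.

```python
from numpy import array, mean

# Hypothetical (t, x) table; these values are illustrative only.
table = array([[0.0, 1.0],
               [0.1, 3.0],
               [0.2, 2.0]])
values = table[:, 1]  # second column holds the x samples
avg = mean(values)    # empirical average over the time series
```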