
Black-Box Variational Inference for Stochastic Differential Equations

A TensorFlow implementation of the Lotka-Volterra example detailed in Black-Box Variational Inference for Stochastic Differential Equations (ICML 2018), by Tom Ryder, Andy Golightly, Stephen McGough and Dennis Prangle.

Example: Lotka-Volterra

Here we demonstrate the implementation of the "multiple observation times with unknown parameters" example in Section 5.1 of the paper: full parameter inference for a two-dimensional Lotka-Volterra SDE, with known variance of the measurement error, observed at discrete time intervals of 10.
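
To make the setup concrete, here is a small NumPy sketch (not code from this repository) that simulates such a Lotka-Volterra SDE under an Euler-Maruyama discretisation and records Gaussian-noise observations every 10 time units. The parameter values, and the diagonal approximation to the chemical-Langevin diffusion matrix, are illustrative assumptions.

import numpy as np

def simulate_lv(theta, x0, dt=0.1, T=50.0, tau=1.0, obs_every=10.0, seed=0):
    # theta = (th1, th2, th3): prey birth, interaction, predator death rates.
    th1, th2, th3 = theta
    rng = np.random.RandomState(seed)
    x = np.array(x0, dtype=float)            # (prey, predator)
    path, obs = [x.copy()], []
    obs_stride = int(round(obs_every / dt))
    for i in range(1, int(round(T / dt)) + 1):
        u, v = x
        drift = np.array([th1 * u - th2 * u * v,
                          th2 * u * v - th3 * v])
        # Diagonal approximation to the chemical-Langevin diffusion,
        # used here for brevity; the full matrix has off-diagonal terms.
        diff = np.sqrt(np.abs([th1 * u + th2 * u * v,
                               th2 * u * v + th3 * v]))
        x = np.maximum(x + drift * dt + diff * np.sqrt(dt) * rng.randn(2), 1e-6)
        path.append(x.copy())
        if i % obs_stride == 0:
            obs.append(x + tau * rng.randn(2))  # Gaussian measurement error
    return np.array(path), np.array(obs)

path, obs = simulate_lv(theta=(0.5, 0.0025, 0.3), x0=(100.0, 100.0))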

System Requirements

The following example was tested using TensorFlow 1.5, NumPy 1.14 and Python 3. It has not been rigorously tested on newer versions of any of the dependencies. For any related questions, please see the contact section.

This example additionally makes use of TensorBoard (1.5) to visualise training. As such, you should specify the path for your TensorBoard output. For example:

PATH_TO_TENSORBOARD_OUTPUT = "~/Documents/my_cool_model/train/"

and then launch tensorboard using:

tensorboard --logdir=~/Documents/my_cool_model/train/

Note that the parameter posteriors in TensorBoard are parameterised using log-normal distributions.
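
For concreteness, here is a minimal TF 1.x-style sketch (not the repo's exact code; the variable names are illustrative) of a log-normal posterior over one positive SDE parameter, with its samples logged as a TensorBoard histogram using the PATH_TO_TENSORBOARD_OUTPUT set above:

import tensorflow as tf

mu = tf.get_variable("theta1_mu", initializer=0.0)
log_sigma = tf.get_variable("theta1_log_sigma", initializer=-2.0)
eps = tf.random_normal([1000])                  # reparameterisation trick
theta1 = tf.exp(mu + tf.exp(log_sigma) * eps)   # log-normal: always positive
tf.summary.histogram("posterior/theta1", theta1)
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter(PATH_TO_TENSORBOARD_OUTPUT)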

Running the Example

This example assumes a known, constant variance of the measurement error (you can change its value, 'TAU', in the data file) and attempts to learn the following (a toy sketch of the corresponding log-densities appears after the list):

  • The latent diffusion process.
  • The parameters of the SDE.
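
These two targets correspond to two log-density terms in the variational objective. The toy NumPy sketch below (illustrative, not the repository's implementation) shows how each would be scored under the Euler-Maruyama discretisation and the Gaussian observation model:

import numpy as np

def em_log_density(path, theta, dt=0.1):
    # log p(x | theta): Gaussian Euler-Maruyama transitions with a
    # diagonal diffusion approximation (illustrative simplification).
    th1, th2, th3 = theta
    lp = 0.0
    for x, x_next in zip(path[:-1], path[1:]):
        u, v = x
        mean = x + np.array([th1*u - th2*u*v, th2*u*v - th3*v]) * dt
        var = np.abs(np.array([th1*u + th2*u*v, th2*u*v + th3*v])) * dt + 1e-8
        lp += -0.5 * np.sum((x_next - mean) ** 2 / var + np.log(2 * np.pi * var))
    return lp

def obs_log_density(y, x_at_obs, tau):
    # log p(y | x): known, constant measurement-error variance tau**2.
    return -0.5 * np.sum((y - x_at_obs) ** 2 / tau**2 + np.log(2 * np.pi * tau**2))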

After entering the observations, the observation times, the discretisation, and the variance of the measurement error, and specifying the dimensions of the network (i.e. the number of layers and the number of nodes in each layer), we can then run the experiment using:


Note that the model will infrequently produce an error relating to the Cholesky decomposition. This typically happens early in training, when the network has a tendency to produce ill-conditioned matrices, leading to numerical instability. Should it, however, become a persistent issue (under the current settings it should not), you should increase the value of "eps_identity" in the function "rnn_cell".
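
The fix that "eps_identity" controls is the standard one: add a small multiple of the identity ("jitter") to the matrix before factorising it. A hedged TF 1.x sketch, with an illustrative function standing in for the relevant part of "rnn_cell":

import tensorflow as tf

def stable_cholesky(cov, eps_identity=1e-4):
    # Adding eps * I keeps near-singular covariances positive definite
    # enough for tf.cholesky to succeed.
    d = tf.shape(cov)[-1]
    jitter = eps_identity * tf.eye(d, batch_shape=tf.shape(cov)[:-2])
    return tf.cholesky(cov + jitter)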


By saving the paths produced during training (not something the model presently does by default), we can watch the model learn the latent diffusion process.
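
One simple way to do this (a suggestion, not existing functionality) is to dump the sampled paths to disk every few hundred steps from inside the training loop, for example:

import os
import numpy as np

def save_paths(step, paths, out_dir="paths", every=500):
    # Save the current batch of sampled latent paths every `every` steps,
    # e.g. call save_paths(step, sess.run(sampled_paths)) in the loop.
    if step % every == 0:
        os.makedirs(out_dir, exist_ok=True)
        np.save(os.path.join(out_dir, "step_%06d.npy" % step), np.asarray(paths))

Each saved file can then be loaded with np.load and overlaid on the observations to animate the fit.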


Should you have any queries or suggestions (all welcome), please contact either:
