0.0.5

@ziatdinovmax ziatdinovmax released this 08 Sep 19:52
  • Allow specifying the CPU or GPU device on which to run training and prediction via a keyword argument (device). This is useful in small-data regimes where model inference with NUTS runs faster on the CPU, while the computation of predictive means and variances is faster on the GPU. Example:
import gpax
import jax

# Specify devices for training and prediction
device_train = jax.devices("cpu")[0]  # training on CPU
device_predict = jax.devices("gpu")[0]  # prediction on GPU
# Initialize model
gp_model = gpax.ExactGP(input_dim=1, kernel='Matern')
# Run HMC with the iterative No-U-Turn Sampler on CPU to infer GP model parameters
gp_model.fit(rng_key, X, y, device=device_train)  # X and y are small arrays
# Make a prediction on new inputs using the GPU
y_pred, y_sampled = gp_model.predict(rng_key_predict, X_new, device=device_predict)
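Note that jax.devices("gpu") raises a RuntimeError on machines without a GPU backend. A minimal sketch of a fallback helper (pick_device is a hypothetical name, not part of gpax):

```python
import jax

# Hypothetical helper: return the first device of the requested
# platform, falling back to CPU when that platform is unavailable
# (jax.devices raises RuntimeError for an absent backend).
def pick_device(platform="gpu"):
    try:
        return jax.devices(platform)[0]
    except RuntimeError:
        return jax.devices("cpu")[0]

device_train = jax.devices("cpu")[0]
device_predict = pick_device("gpu")  # falls back to CPU on GPU-less machines
```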
  • Add a utility function to visualize numpyro's distributions. Example:
import gpax
import numpyro

d = numpyro.distributions.Gamma(2, 5)
gpax.utils.dviz(d, samples=10000)

(dviz output: plot of the Gamma(2, 5) distribution)

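As a rough, dependency-light stand-in for what dviz renders, one can draw samples from the same Gamma(2, 5) with NumPy and check their moments (note that NumPy parametrizes the Gamma by shape and scale, where scale = 1/rate):

```python
import numpy as np

# Sample Gamma(concentration=2, rate=5); NumPy uses shape/scale,
# so scale = 1 / rate.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=2.0, scale=1.0 / 5.0, size=10_000)
print(samples.mean())  # close to concentration / rate = 0.4
```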
  • Add the option to pass a custom jitter value (a small positive term added to the diagonal of the covariance matrix for better numerical stability) to all models. Example:
import gpax

gp_model = gpax.ExactGP(input_dim=1, kernel='Matern')
gp_model.fit(rng_key, X, y, jitter=1e-5)
y_pred, y_sampled = gp_model.predict(rng_key_predict, X_new, jitter=1e-5)
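A short sketch of why jitter matters (plain NumPy, not gpax internals): an RBF kernel matrix over nearly duplicate inputs is numerically singular, and the Cholesky factorization used in GP inference fails unless a small term is added to the diagonal. Here rbf_kernel is an illustrative helper, not part of the gpax API:

```python
import numpy as np

# Squared-exponential (RBF) kernel over a 1D input array.
def rbf_kernel(x, lengthscale=1.0):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.array([0.0, 1e-9, 1.0])  # two nearly identical inputs
K = rbf_kernel(x)               # singular in float64

try:
    np.linalg.cholesky(K)       # fails: not positive definite
except np.linalg.LinAlgError:
    pass

jitter = 1e-6
L = np.linalg.cholesky(K + jitter * np.eye(len(x)))  # succeeds
```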
  • Add an example on Bayesian optimization and expand the descriptions in markdown cells of the existing examples
  • Improve documentation