
0.3.2

@sdatkinson released this 04 Nov 04:41

gptorch 0.3

Change log

0.3.0

Changes breaking backward compatibility:

  • GPR, VFE, SVGP: the order of the training data arguments to the models'
    __init__() is changed from (y, x) to (x, y).
  • .predict() functions return the same type as the inputs provided
    (numpy.ndarray->numpy.ndarray, torch.Tensor->torch.Tensor)
  • Remove util.as_variable()
  • Remove util.tensor_type()
  • Remove util.KL_Gaussian()
  • Remove util.gammaln()
  • GPModel method .loss() generally replaces .compute_loss().
  • .compute_loss() methods in models are generally renamed to .log_likelihood(),
    with the sign flipped to reflect the fact that the loss is generally the
    negative log-likelihood.
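
The new type-preserving .predict() contract can be sketched with a small stand-in helper (the helper and the stand-in model below are hypothetical, not part of gptorch; in gptorch itself the conversion happens inside the models' .predict() methods):

```python
import numpy as np
import torch


def predict_preserving_type(fn, x):
    """Illustrates the 0.3.0 .predict() contract: the output array type
    matches the input type (numpy.ndarray in -> numpy.ndarray out,
    torch.Tensor in -> torch.Tensor out)."""
    if isinstance(x, np.ndarray):
        # Convert to a Tensor for the model, then back to numpy.
        return fn(torch.from_numpy(x)).detach().numpy()
    return fn(x)


# Stand-in for a GP predictive mean function.
posterior_mean = lambda t: 2.0 * t

assert isinstance(predict_preserving_type(posterior_mean, np.ones(3)), np.ndarray)
assert isinstance(predict_preserving_type(posterior_mean, torch.ones(3)), torch.Tensor)
```

Note also that under the 0.3.0 argument order, model constructors now take the inputs first, e.g. conceptually `GPR(x, y, ...)` rather than `GPR(y, x, ...)`.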

Changes not breaking backward compatibility:

  • GPR, VFE: Allow specifying training set on .compute_loss() with x, y kwargs
  • GPR, VFE: Allow specifying training inputs on ._predict() with x kwarg
  • GPU supported with .cuda()
  • Remove GPModel.evaluate()
  • Don't print inducing inputs on sparse GP initialization
  • Support for priors in gptorch.model.Models

0.3.1

  • Fix some places where .compute_loss() wasn't replaced, causing GPModel.optimize() not to work.

0.3.2

  • Fixed Issue 20, related to installing gptorch on top of pip-installed
    versions of PyTorch with non-standard device configurations.
  • Fixed Issue 22, where importing gptorch changed the default dtype in PyTorch
    from single- to double-precision.
  • Added gptorch.__version__
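
The Issue 22 fix means importing the library should no longer flip PyTorch's global default dtype. A quick sketch of the expected post-fix behavior (checked here with plain PyTorch, without importing gptorch):

```python
import torch

# PyTorch's out-of-the-box default dtype is single precision.
assert torch.get_default_dtype() == torch.float32

# After the 0.3.2 fix, importing gptorch should leave this unchanged
# (previously it switched the global default to float64).
x = torch.zeros(3)
assert x.dtype == torch.float32
```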
