

Here are 324 public repositories matching this topic...

st-- commented Mar 18, 2020

We would like all GPflow kernels to broadcast across leading dimensions. For most of them, this is implemented already (#1308); this issue is to keep track of the ones that currently don't:

  • ArcCosine
  • Coregion
  • Periodic
  • ChangePoints
  • Convolutional
  • all MultioutputKernel subclasses
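For context, "broadcasting across leading dimensions" means a kernel evaluated on inputs of shape `[..., N, D]` should return `[..., N, N]`, with any leading batch dimensions passed through. A minimal NumPy sketch of a squared-exponential kernel with that behavior (the function name and lengthscale handling are illustrative, not GPflow's actual API):

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential kernel that broadcasts over leading dims.

    X has shape [..., N, D]; the result has shape [..., N, N].
    """
    # Pairwise differences via broadcasting:
    # [..., N, 1, D] - [..., 1, N, D] -> [..., N, N, D]
    diff = X[..., :, None, :] - X[..., None, :, :]
    sq_dist = np.sum(diff ** 2, axis=-1)
    return np.exp(-0.5 * sq_dist / lengthscale ** 2)

# A batch of 3 input sets, each with 5 points in 2-D:
X = np.random.default_rng(0).normal(size=(3, 5, 2))
K = rbf_kernel(X)
print(K.shape)  # (3, 5, 5)
```

The kernels listed above fail this contract in one way or another (e.g. by assuming rank-2 inputs internally), which is what the tracking issue is about.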
pilchat commented Jun 20, 2018


I'm trying to use the RQ kernel. First, from its help file I can't tell whether I should pass alpha or log_alpha as the input parameter. If the latter, is the logarithm natural or base 10?

Second, I checked R&W, and their formula 4.19 is not exactly the same as the one in george. Is this an issue, or are they mathematically equivalent?

Thanks in advance
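For reference while this gets answered: R&W eq. 4.19 defines the rational quadratic kernel as k(r) = (1 + r² / (2αℓ²))^(−α), and a common source of apparent mismatch between libraries is whether the lengthscale ℓ is written explicitly or absorbed into the distance metric. I can't confirm george's exact parameterization here, but eq. 4.19 itself is easy to sketch and sanity-check (the lengthscale argument below is my own convention):

```python
import numpy as np

def rq_kernel(r2, alpha, lengthscale=1.0):
    """Rational quadratic kernel, R&W eq. 4.19:
    k(r) = (1 + r^2 / (2 * alpha * l^2)) ** (-alpha)."""
    return (1.0 + r2 / (2.0 * alpha * lengthscale ** 2)) ** (-alpha)

r2 = np.linspace(0.0, 4.0, 9)
# As alpha -> infinity the RQ kernel approaches the squared exponential,
# which is a quick way to check any implementation against another.
se = np.exp(-0.5 * r2)
print(np.max(np.abs(rq_kernel(r2, alpha=1e6) - se)))
```

If two parameterizations agree on this limit and at r = 0 for matching hyperparameters, they are usually the same kernel up to a change of variables.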


Irene-GM commented Jul 16, 2019


I am interested in using pykrige.ok.OrdinaryKriging with many different setups of its accepted parameters. Thus, it makes sense to use GridSearchCV to define a big dictionary of all the parameters and let the library do the rest. However, it seems that GridSearchCV is coupled to the RegressionKriging package and is not accepting parameters fr
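Independent of how pykrige and GridSearchCV interact, the underlying mechanics are small enough to hand-roll: expand the parameter dictionary into all combinations and score each one yourself. A minimal sketch in plain Python (the parameter names match OrdinaryKriging's common arguments, but the scoring function here is a stand-in, not a real kriging fit):

```python
import itertools

def expand_grid(param_grid):
    """Yield every parameter combination from a dict of lists,
    the way sklearn's ParameterGrid does."""
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        yield dict(zip(keys, values))

param_grid = {
    "variogram_model": ["linear", "power", "gaussian", "spherical"],
    "nlags": [4, 6, 8],
}

# Stand-in score: in practice one would construct
# pykrige.ok.OrdinaryKriging(**params) here and cross-validate it.
def score(params):
    return len(params["variogram_model"]) + params["nlags"]

best = max(expand_grid(param_grid), key=score)
print(best)
```

This sidesteps any estimator-interface coupling entirely, at the cost of re-implementing the cross-validation loop.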

v-pourahmadi commented Jun 2, 2019


Thanks for sharing the code.

Is there any simple "getting started" document on setting up something like a basic EI or KG with your tool?
1- I have seen the "example" folder; it is good, but it covers a very complete case, not a simple sampling mode.
2- I have tried the MOE getting started document (available in the "doc" folder), but as the function names and procedures are chan
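While the docs get sorted out, the expected-improvement acquisition itself is compact enough to sketch directly. This is the standard closed form for a Gaussian posterior with mean mu and standard deviation sigma (not MOE's API, and the function name is my own):

```python
import math
import numpy as np

def expected_improvement(mu, sigma, best, minimize=True):
    """Closed-form EI for a Gaussian posterior N(mu, sigma^2).

    best is the incumbent (best observed) objective value.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    improve = (best - mu) if minimize else (mu - best)
    # Guard against division by zero where sigma == 0.
    z = np.where(sigma > 0, improve / np.where(sigma > 0, sigma, 1.0), 0.0)
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2)))
    ei = improve * cdf + sigma * pdf
    # At zero variance, EI degenerates to the plain improvement.
    return np.where(sigma > 0, ei, np.maximum(improve, 0.0))

# The candidate with the lower predicted mean scores higher:
print(expected_improvement(mu=[0.2, 0.9], sigma=[0.3, 0.3], best=1.0))
```

Maximizing this quantity over candidate points is the "simple sampling mode" the question asks about; any GP library that exposes a posterior mean and variance can feed it.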

micahjsmith commented Feb 13, 2019

Create the following documentation pages:

  • getting started: intro
  • getting started: installation
  • getting started: rename basic usage -> quickstart, add a bit on hyperparameters
  • advanced usage: basic concepts
  • advanced usage: hyperparameters
  • advanced usage: tuners -> extend this in depth
  • advanced usage: selectors -> extend this in depth

Reorganize tree, see MLBlocks.

See als

willtebbutt commented Oct 19, 2019

There are a variety of interesting optimisations that can be performed on kernels of the form

k(x, z) = w_1 * k_1(x, z) + w_2 * k_2(x, z) + ... + w_L * k_L(x, z)

A naive recursive implementation in terms of the current Sum and Scaled kernels hides opportunities for parallelism, both in the computation of each term and in the summation over terms.
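One way to expose that parallelism, sketched here in NumPy rather than the package's own kernel types: evaluate every component Gram matrix into one stacked array and contract with the weight vector in a single einsum, instead of walking a tree of Sum/Scaled nodes:

```python
import numpy as np

def weighted_sum_kernel(kernels, weights, X, Z):
    """Evaluate k = sum_l w_l * k_l(X, Z) as one batched contraction."""
    # Stack all component Gram matrices: shape [L, N, M].
    stacked = np.stack([k(X, Z) for k in kernels])
    # Single contraction over the component axis L.
    return np.einsum("l,lnm->nm", np.asarray(weights, float), stacked)

# Two toy component kernels on 1-D inputs:
rbf = lambda X, Z: np.exp(-0.5 * (X[:, None] - Z[None, :]) ** 2)
lin = lambda X, Z: X[:, None] * Z[None, :]

X = np.linspace(-1, 1, 4)
Z = np.linspace(-1, 1, 5)
K = weighted_sum_kernel([rbf, lin], [0.5, 2.0], X, Z)

# Matches the naive recursive sum:
naive = 0.5 * rbf(X, Z) + 2.0 * lin(X, Z)
print(np.allclose(K, naive))  # True
```

The component evaluations in the stack are independent and can be dispatched in parallel, and the contraction is a single BLAS-friendly operation rather than L-1 pairwise additions.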

Notable examples of kernels with th
