Fixed broadcasting rules for gpflow.models.model.predict_y, partially resolves #1461. #1597

Open · wants to merge 1 commit into base: develop
8 changes: 6 additions & 2 deletions gpflow/likelihoods/scalar_continuous.py
@@ -18,10 +18,10 @@
from .. import logdensities
from ..base import Parameter
from ..utilities import positive
from ..utilities.ops import eye
from .base import ScalarLikelihood
from .utils import inv_probit


class Gaussian(ScalarLikelihood):
r"""
The Gaussian likelihood is appropriate where uncertainties associated with
@@ -61,7 +61,11 @@ def _conditional_variance(self, F):
return tf.fill(tf.shape(F), tf.squeeze(self.variance))

def _predict_mean_and_var(self, Fmu, Fvar):
return tf.identity(Fmu), Fvar + self.variance
rank = tf.rank(Fvar).numpy()
if rank == 2:
return tf.identity(Fmu), Fvar + self.variance
Contributor:
I know this is not related to your PR, but I think we can drop the `tf.identity` calls here, as they are no-ops and look like legacy code.

else:
Contributor:
Could we please have an `elif` that matches the exact ranks you want to target here, and another `else` that raises a `NotImplementedError` for the ranks we do not support?

return tf.identity(Fmu), Fvar + eye(Fvar.shape[-1], self.variance)

def _predict_log_density(self, Fmu, Fvar, Y):
return tf.reduce_sum(logdensities.gaussian(Y, Fmu, Fvar + self.variance), axis=-1)
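The rank-based dispatch discussed above can be sketched as follows. This is a minimal sketch, not the PR's actual code: NumPy stands in for TensorFlow, and the supported ranks (2 for marginal variances, 3 or 4 for full covariances) are assumptions about what the branch intends.

```python
import numpy as np

def predict_mean_and_var(Fmu, Fvar, noise_variance):
    """Sketch of the rank dispatch in _predict_mean_and_var.

    NumPy stands in for TensorFlow; the rank cases handled here are
    assumptions for illustration, not confirmed GPflow behaviour.
    """
    rank = np.ndim(Fvar)
    if rank == 2:
        # [N, P] marginal variances: observation noise adds elementwise.
        return Fmu, Fvar + noise_variance
    elif rank in (3, 4):
        # [..., N, N] full covariances: the noise belongs only on the
        # diagonal, i.e. add noise_variance * I_N, broadcast over the
        # leading axes (mirroring eye(Fvar.shape[-1], self.variance)).
        n = Fvar.shape[-1]
        return Fmu, Fvar + noise_variance * np.eye(n)
    else:
        raise NotImplementedError(f"Fvar of rank {rank} is not supported")
```

Structuring the branch this way makes the unsupported ranks fail loudly, as the review comment suggests, instead of silently taking the full-covariance path.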
14 changes: 7 additions & 7 deletions gpflow/models/model.py
@@ -211,14 +211,14 @@ def predict_y(
"""
Compute the mean and variance of the held-out data at the input points.
"""
if full_cov or full_output_cov:
# See https://github.com/GPflow/GPflow/issues/1461
raise NotImplementedError(
"The predict_y method currently supports only the argument values full_cov=False and full_output_cov=False"
)

f_mean, f_var = self.predict_f(Xnew, full_cov=full_cov, full_output_cov=full_output_cov)
return self.likelihood.predict_mean_and_var(f_mean, f_var)

if full_cov and full_output_cov:
f_var_mat = tf.reshape(f_var, [1, f_var.shape[0]*f_var.shape[1], f_var.shape[2]*f_var.shape[3]])
Contributor:
It would be nice if you could add the expected tensor shapes at the end of each line, for example:

f_mean, f_var = ...  # [N, P, N, P] or [N, P, P], etc.

Contributor:
You probably want to use `tf.shape(tensor)` instead of `tensor.shape`; the former works in both compiled and eager mode, while the latter only works in eager mode.

f_mean_pred, f_var_pred = self.likelihood.predict_mean_and_var(f_mean, f_var_mat)
return f_mean_pred, tf.reshape(f_var_pred, f_var.shape)
Contributor:
Shape comments would be helpful here as well.

else:
return self.likelihood.predict_mean_and_var(f_mean, f_var)

def predict_log_density(
self, data: RegressionData, full_cov: bool = False, full_output_cov: bool = False
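For context on the reshape in the `full_cov and full_output_cov` branch, here is a small NumPy sketch of the intended round-trip between `[N, P, N, P]` and `[1, N*P, N*P]`. The layout is inferred from the diff, not confirmed, and `N` and `P` are illustrative values.

```python
import numpy as np

# Illustrative sizes: N data points, P output dimensions.
N, P = 3, 2
rng = np.random.default_rng(0)
f_var = rng.standard_normal((N, P, N, P))  # full posterior covariance

# Flatten [N, P, N, P] into a single [1, N*P, N*P] matrix so that a
# likelihood expecting matrix-shaped variances can process it...
f_var_mat = f_var.reshape(1, N * P, N * P)

# ...then restore the original four-axis layout for the caller.
f_var_back = f_var_mat.reshape(N, P, N, P)
```

Because `reshape` only reinterprets the memory layout, the round-trip is lossless, which is what lets `predict_y` hand the likelihood a matrix and still return the four-axis covariance the caller asked for.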