Fix bug in setting training data for ExactGP if training inputs are None #565
It appears valid for a GP to have its training inputs/labels set to `None`. However, if you call `set_train_data` when the current training inputs are `None`, you'll get an error at `for input, t_input in zip(inputs, self.train_inputs)` (because `None` is not iterable and thus can't be `zip`ped). This PR moves the `if strict` check up so that, if `strict == False`, the entire checking section is skipped and it doesn't matter whether the training data was `None` or not.

When `strict == True`, the error messages have been improved a bit: they now report both the expected attribute (e.g. shape) and the attribute that was actually found. To get a more informative error, I also ensured that if the training inputs/targets are `None`, the shape, etc. will be viewed as `None`, so you still get the expected `RuntimeError` (instead of a harder-to-diagnose error at `zip`).

This passed `flake8` fine, and I think the only errors I got in the tests were for CUDA out-of-memory.
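To illustrate the control flow this PR describes, here is a minimal, self-contained sketch; it is not the actual GPyTorch implementation, and `FakeTensor`/`ExactGPStub` are stand-ins invented here so the example runs without `torch` or `gpytorch`:

```python
# Hypothetical sketch of the corrected set_train_data flow, assuming the
# behavior described in the PR: strict=False skips all checks, and a None
# current training set compares as None so a clear RuntimeError is raised
# instead of a TypeError from zip()-ing over None.

class FakeTensor:
    """Stand-in for a torch.Tensor with just the attributes we check."""
    def __init__(self, shape, dtype="float32"):
        self.shape, self.dtype = shape, dtype

class ExactGPStub:
    def __init__(self, train_inputs=None, train_targets=None):
        self.train_inputs = train_inputs
        self.train_targets = train_targets

    def set_train_data(self, inputs=None, targets=None, strict=True):
        if inputs is not None:
            if not isinstance(inputs, (list, tuple)):
                inputs = (inputs,)
            if strict:
                # Key fix: when the current inputs are None, compare against a
                # tuple of Nones rather than zip()-ing over None directly.
                current = self.train_inputs or (None,) * len(inputs)
                for new_in, old_in in zip(inputs, current):
                    for attr in ("shape", "dtype"):
                        expected = getattr(old_in, attr, None)
                        found = getattr(new_in, attr, None)
                        if expected != found:
                            # Improved message: report expected AND found.
                            raise RuntimeError(
                                f"Cannot modify {attr} of inputs "
                                f"(expected {expected}, found {found})."
                            )
            self.train_inputs = tuple(inputs)
        if targets is not None:
            if strict:
                for attr in ("shape", "dtype"):
                    expected = getattr(self.train_targets, attr, None)
                    found = getattr(targets, attr, None)
                    if expected != found:
                        raise RuntimeError(
                            f"Cannot modify {attr} of targets "
                            f"(expected {expected}, found {found})."
                        )
            self.train_targets = targets
```

With `strict=False` the checking section is skipped entirely, so `None` training data causes no error; with `strict=True` and `None` current data, the mismatch is reported as an informative `RuntimeError` rather than a `TypeError` deep inside `zip`.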