Bug: Exploit Structure in get_fantasy_strategy #2494

Open · naefjo wants to merge 3 commits into master
Conversation


naefjo commented Mar 11, 2024

Hello :)

This PR is related to #2468 and cornellius-gp/linear_operator#93.

The DefaultPredictionStrategy's get_fantasy_model updates the Gram matrix with the new data points and updates the root_decomposition and root_inv_decomposition caches of lik_train_train_covar by passing them to the constructor. However, because to_dense is called on lines 214-215, the caches set in __init__ on lines 69 and 72 are constructed with root and inv_root of type torch.Tensor. In RootLinearOperator.__init__, this assigns a DenseLinearOperator to self.root, since to_linear_operator defaults to DenseLinearOperator when given a torch.Tensor.

As a result, in LinearOperator.cat_rows the object E is of type DenseLinearOperator, which fails the check for triangular matrices here. This again leads to a stable_pinverse based on a QR decomposition instead of exploiting a fast triangular solve to compute the inverse.
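A minimal sketch of the structure loss described above (assuming the current linear_operator API; the matrices here are made up for illustration): wrapping the raw tensor hides the triangular structure, while keeping the structured operator preserves it.

```python
import torch
from linear_operator import to_linear_operator
from linear_operator.operators import TriangularLinearOperator

# Toy lower-triangular root, e.g. the Cholesky factor of a small SPD matrix.
L = torch.linalg.cholesky(torch.eye(3) + torch.ones(3, 3))

# Passing the raw tensor (as to_dense produces) wraps it as a DenseLinearOperator,
# so the triangular structure is invisible to downstream isinstance checks.
dense_root = to_linear_operator(L)
print(type(dense_root).__name__)                         # DenseLinearOperator
print(isinstance(dense_root, TriangularLinearOperator))  # False

# Keeping the structured operator lets cat_rows take the fast path
# (a triangular solve) instead of falling back to stable_pinverse / QR.
tri_root = TriangularLinearOperator(L)
print(isinstance(tri_root, TriangularLinearOperator))    # True
```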


fteufel commented Mar 13, 2024

Thanks for finding this! I've been confused for a few weeks about why get_fantasy_model wasn't speeding things up compared to just recomputing the caches, but couldn't figure it out.

Can confirm this makes things faster.

- DenseLinearOperator expects a torch.Tensor, so we convert the linear operator to dense.
- Directly construct a BatchRepeatLinearOperator from new_root instead of converting it to a dense operator first (see the sketch below).
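A hedged sketch of the second change (shapes and values are illustrative, not taken from the PR): wrapping the structured new_root directly means the batch-repeated cache still carries the triangular type, whereas densifying it first would discard that structure.

```python
import torch
from linear_operator.operators import BatchRepeatLinearOperator, TriangularLinearOperator

# Illustrative stand-in for the updated root returned by cat_rows.
new_root = TriangularLinearOperator(torch.linalg.cholesky(2.0 * torch.eye(4)))

# Wrapping the operator directly preserves its structure inside the batch repeat;
# calling new_root.to_dense() first would lose the triangular type.
batch_root = BatchRepeatLinearOperator(new_root, batch_repeat=torch.Size([2]))
print(batch_root.shape)  # torch.Size([2, 4, 4])
```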