Hi,
I've recently learned that I can obtain faster predictions with an SVGP by using predict_y_compiled() instead of calling model.predict_y() directly.
However, I haven't managed to obtain the same speed improvement with the model saved to disk: after reloading it, I call its predict_y_compiled() in an equivalent manner as in the original model and compare against the original in-memory model.
In fact, when doing the above, predictions from the loaded model are slower than the original model's predict_y_compiled():
mu, sigma = model.predict_y_compiled(X_test)
I'm not very experienced with saving models through TensorFlow. Is there a better way to save the model that would yield faster predictions when loading it?
Thanks
Update:
The function predict_f_compiled() is actually much faster; surprisingly, it's the call to likelihood.predict_mean_and_var_compiled() that seems to take up most of the processing time.