It would be nice if we could store the KL divergence value as an attribute of the Bayesian layers and return it from the forward method only when needed.
That would reduce friction when integrating with PyTorch, making it possible to "plug and play" bayesian-torch layers in deterministic models.
We could then read the KL terms from the Bayesian layers when computing the loss, without invasive changes to existing code, which might encourage users to try the lib. A sketch of what this could look like is below.
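Here is a minimal sketch of the proposed behavior, using a hypothetical `BayesianLinear` with only a weight posterior (real layers also carry bias parameters and configurable priors); the `kl` attribute and the `return_kl` flag are the suggestion, not existing bayesian-torch API:

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    """Illustrative Bayesian layer that stores its KL term as an attribute."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Variational posterior parameters (mean and pre-softplus scale)
        self.weight_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.weight_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.kl = torch.tensor(0.0)  # proposed attribute, refreshed on every forward pass

    def forward(self, x, return_kl=False):
        sigma = torch.log1p(torch.exp(self.weight_rho))
        # Reparameterized weight sample
        weight = self.weight_mu + sigma * torch.randn_like(sigma)
        # Closed-form KL against a standard-normal prior, stored instead of returned
        self.kl = (-torch.log(sigma) + (sigma**2 + self.weight_mu**2) / 2 - 0.5).sum()
        out = nn.functional.linear(x, weight)
        if return_kl:
            return out, self.kl  # opt-in, keeps the old two-value behavior available
        return out  # default: same call signature as nn.Linear
```

With that in place, the loss computation could just collect the stored values after the forward pass, e.g. `kl = sum(m.kl for m in model.modules() if hasattr(m, "kl"))`, and add it to the task loss, so the model itself never needs to thread KL values through its `forward`.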