difference between regression_var, regression_covar and regression_covar_balanced #2
Comments
In practice, the regression is performed by estimating the LDL decomposition of the covariance matrix. To do that, I estimate a lower triangular matrix L and a diagonal matrix D. The final form with the Frobenius norm is a mathematical approximation of the loss function required to learn a full covariance matrix. I am still working on a stable version without the Frobenius norm, where the full covariance can be learned without that approximation. The negative penalization from Eq. 8 is not implemented yet.
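For reference, these are the standard identities that make the LDL parameterization convenient (textbook facts restated here, not the author's supplementary derivation):

```latex
% With \Sigma = L D L^\top, L unit lower-triangular, D = diag(d_1, \dots, d_k),
% and residual r = y - \hat{\mu}:
\log\det\Sigma = \sum_{i=1}^{k}\log d_i,
\qquad
r^\top \Sigma^{-1} r = \bigl\lVert D^{-1/2} L^{-1} r \bigr\rVert_2^2,
\qquad
\mathrm{NLL}(r) = \tfrac{1}{2}\bigl\lVert D^{-1/2} L^{-1} r \bigr\rVert_2^2
+ \tfrac{1}{2}\sum_{i=1}^{k}\log d_i + \text{const}.
```

Since L has a unit diagonal, the determinant of Sigma reduces to the product of the entries of D, so the NLL needs only a triangular solve and no explicit matrix inverse or determinant.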
@asharakeh thanks! Could you post the loss?
@patrick-llgc here is the loss. You just need to extend the if statement and it should work as intended.
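The original snippet was lost here; what follows is a hedged sketch of what such an LDL-based loss can look like in TensorFlow, with hypothetical tensor names and shapes, not the author's actual code:

```python
import tensorflow as tf

def ldl_regression_loss(target, mean, pred_l, log_d):
    """Hedged sketch: multivariate Gaussian NLL with Sigma = L D L'.

    target, mean: [batch, k]    regression targets and predicted means
    pred_l:       [batch, k, k] raw network output for L (only the strictly
                                lower triangle is used)
    log_d:        [batch, k]    predicted log of the diagonal of D
    """
    # Unit lower-triangular L: set the diagonal to 1, zero everything above it.
    l = tf.linalg.band_part(
        tf.linalg.set_diag(pred_l, tf.ones_like(log_d)), -1, 0)

    # z = L^{-1} (y - mu) via a cheap triangular solve (no explicit inverse).
    residual = tf.expand_dims(target - mean, axis=-1)        # [batch, k, 1]
    z = tf.linalg.triangular_solve(l, residual, lower=True)  # [batch, k, 1]
    z = tf.squeeze(z, axis=-1)                               # [batch, k]

    # NLL = 0.5 * z' D^{-1} z + 0.5 * log det(Sigma), log det = sum(log_d).
    mahalanobis = tf.reduce_sum(tf.square(z) / tf.exp(log_d), axis=-1)
    return 0.5 * mahalanobis + 0.5 * tf.reduce_sum(log_d, axis=-1)
```

With a k = 4 box parameterization per anchor, L has k(k-1)/2 = 6 free entries below the diagonal, so the regression head would predict 4 means, 4 log-variances and 6 covariance terms.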
Hey @asharakeh, could you point me to the derivation of the Frobenius-norm approximation? I could not find it in the paper.
@gunshi that was my own derivation and approximation, and one of the contributions. The full formulation did not fit into the paper, so it wasn't included. I have a short supplementary PDF outlining the derivation; if you are interested, send me an email and I can forward it to you.
Hi @asharakeh, thanks for releasing your code! I was reading your paper over the weekend. Congratulations on the great work.
I have a question regarding the difference between regression_var and regression_covar. In regression_covar, you seem to replace the diagonal elements of the covariance prediction with 1, then use the L2 norm of the updated covariance matrix to normalize the first term of the uncertainty-aware L2 loss (Eq. 3 in the original paper): https://github.com/asharakeh/bayes-od-rc/blob/master/src/retina_net/models/retinanet_model.py#L292
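In code, I read that line roughly as the following (my own sketch with hypothetical names, not the repo's exact code):

```python
import tensorflow as tf

# Replace the diagonal of the predicted covariance with ones, then divide
# the squared-error term of Eq. 3 by the Frobenius norm of the result.
ones = tf.ones_like(tf.linalg.diag_part(pred_covar))
covar_unit_diag = tf.linalg.set_diag(pred_covar, ones)
fro_norm = tf.norm(covar_unit_diag, ord='fro', axis=[-2, -1])
weighted_l2 = squared_errors / fro_norm[..., tf.newaxis]
```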
I do not quite understand the difference between the two methods -- it would be great if you could shed some light on this matter.
Also, the penalization of negative anchors in Eq. 8 of the original paper does not seem to be implemented yet? Is that what the (unimplemented) regression_covar_balanced is about? Thanks for your help!