Hello,

Thanks for sharing your work. I noticed something strange in the computation of the KL term and wondered if you could clarify this point.

In particular, the KL term is computed as (criterions.py, line 453):

alp = E * (1 - label) + 1

where E is the evidence. This does not match the formula presented in the paper (Eq. 11), where alp is defined as:

alp = alpha * (1 - label) + label

Best regards,
Benjamin
Thanks for your interest in this work. You are right: the formula should be alp = alpha * (1 - label) + label. Regarding alpha: the supervision is in fact effective applied to either E or alpha, but in practice I found direct supervision on E to be more effective. As for +label versus +1, the two expressions behave the same here, because the label (mask) region equals 1. We will explain this formula and this issue further in the journal version. Thanks again for pointing out the problem and for your interest in this work.
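For what it's worth, if the code follows the usual evidential relation alpha = E + 1 (an assumption on my part; I have not traced this in criterions.py), the two expressions are not just similar but algebraically identical: (E + 1)(1 - label) + label = E(1 - label) + 1. A quick NumPy check of this identity (all variable names here are illustrative, not from the repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative evidence map (non-negative) and binary label/mask map
E = rng.random((4, 4)) * 10.0
label = rng.integers(0, 2, (4, 4)).astype(float)

# Standard evidential-deep-learning relation (assumed): alpha = E + 1
alpha = E + 1.0

alp_code = E * (1.0 - label) + 1.0           # expression used in criterions.py
alp_paper = alpha * (1.0 - label) + label    # expression from Eq. 11 in the paper

# (E+1)(1-label) + label expands to E(1-label) + 1, so both match everywhere
assert np.allclose(alp_code, alp_paper)
```

So under that assumption the discrepancy is purely notational, not numerical.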