Hi Erika, thanks for sharing this nice work.

I'm trying to understand the alpha regularization term, and I've found what I believe is either a bug or a discrepancy between your paper and the code.
The alpha regularization is defined in the paper as an L1 penalty on the individual alpha layers, i.e. $\sum_l \|\alpha_l\|_1$.
However, in the code you use `alpha_composite` rather than the individual alpha layers. This behaves differently when multiple layers have some alpha activation.
Below, the value of each regularization is plotted against two 1-pixel alpha layers: green uses `alpha_composite`, blue uses the L1 norm as defined in the paper.
Is there a reason for this change?
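To make the difference concrete, here is a minimal numeric sketch. The alpha values and the over-compositing formula for `alpha_composite` are my own illustrative assumptions, not taken from the repo:

```python
# Two single-pixel alpha layers that both activate on the same pixel
# (hypothetical values chosen to illustrate the difference).
a1, a2 = 0.6, 0.6

# Paper's version: L1 norm applied to each alpha layer individually, then summed.
l1_per_layer = abs(a1) + abs(a2)

# Code's version: L1 norm of the composited alpha, assuming standard
# over-compositing: alpha_composite = 1 - prod_i(1 - alpha_i).
alpha_composite = 1.0 - (1.0 - a1) * (1.0 - a2)
l1_composite = abs(alpha_composite)

# The composite saturates toward 1, so overlapping activations across
# layers are penalized less than under the per-layer L1 norm.
assert l1_composite < l1_per_layer
```

This matches the plot: the two penalties agree when only one layer is active, but diverge as soon as multiple layers overlap.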
Furthermore, this loss is intended to prevent the trivial solution where one layer reconstructs the entire image. I've encountered a situation where a single object is reconstructed in multiple object layers. Have you ever encountered this problem, and if so, how would you deal with it?
Thanks in advance
Hi, good catch. The regularization should be applied to the alpha composite, as it is in the code, not as in the paper. The reason for this is to allow disocclusion to be performed. Unfortunately, it can also enable an object to be reconstructed in multiple object layers, as in your case. Perhaps you could try experimenting with applying the regularization to each alpha layer independently rather than to the composite.
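One way to experiment with that suggestion is to blend the two penalties. The sketch below is a hypothetical regularizer (the function name, weights, and over-compositing formula are my own assumptions, not the repo's code): the composite term keeps the disocclusion benefit, while the per-layer term discourages the same object appearing in several layers.

```python
import numpy as np

def alpha_reg(alphas, w_comp=0.5, w_layer=0.5):
    """Hypothetical blended alpha regularizer.

    alphas: array of shape (num_layers, H, W) with values in [0, 1].
    Combines an L1 penalty on the over-composited alpha (as in the code)
    with a per-layer L1 penalty (as in the paper).
    """
    # Over-composited alpha: 1 - prod_i(1 - alpha_i).
    composite = 1.0 - np.prod(1.0 - alphas, axis=0)
    comp_term = np.mean(np.abs(composite))               # composite L1
    layer_term = np.mean(np.sum(np.abs(alphas), axis=0)) # per-layer L1
    return w_comp * comp_term + w_layer * layer_term

# Two layers duplicating the same 1-pixel object vs. a single layer
# holding it: the per-layer term penalizes the duplication even though
# the composite term barely changes.
dup = np.array([[[0.9]], [[0.9]]])
single = np.array([[[0.9]], [[0.0]]])
assert alpha_reg(dup) > alpha_reg(single)
```

The relative weights would need tuning: too much per-layer weight reintroduces the over-sparsity that motivated the composite version in the first place.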