How to do BP for occlusion mask penalty? #10
Comments
Hi! Looking at this again, the occlusion flag should be constant w.r.t. backpropagation, and the penalty should have no effect on backprop.
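For illustration, here is a minimal sketch of why a comparison-based mask carries no gradient. This is not code from this repository (it is written against TensorFlow 2's eager API), and the names `fw_bw_diff` and `threshold` are placeholders for the forward-backward consistency error and the occlusion threshold:

```python
import tensorflow as tf

# Placeholder for the forward-backward flow consistency error; in the real
# model this would be a function of the predicted flows.
fw_bw_diff = tf.random.uniform([1, 64, 64, 1])
threshold = 0.5
lambda_p = 0.1

with tf.GradientTape() as tape:
    tape.watch(fw_bw_diff)
    # Occlusion mask via comparison: 1 where the consistency check fails.
    occ_mask = tf.cast(tf.less(threshold, fw_bw_diff), tf.float32)
    penalty = lambda_p * tf.reduce_mean(occ_mask)

# The comparison op has no registered gradient, so this prints None:
# the penalty term contributes nothing to backprop through the mask.
print(tape.gradient(penalty, fw_bw_diff))
```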
@simonmeister Thanks for your reply.
In Table 2, we don't directly look at the influence of the penalty, but of the occlusion handling mechanism (which the penalty is part of). At the time of writing, we included the penalty, but it may really be the case that the pre-training without occlusion handling helps avoid the trivial solution.
Thanks @simonmeister. Anyway, regardless of the issue of the penalty, this work is very fancy. :D
Hi @simonmeister, so how can the network avoid converging to the trivial solution if the penalty term cannot be backpropagated?
Hi @Yuliang-Zou, |
Hi @simonmeister,
Thanks for your nice work and repo.
I've read your paper and there is one thing that I do not understand. In equation (2), the occlusion mask o_x is penalized with weight lambda_p. However, in equation (1) this mask is calculated by comparison. How can back propagation be done for this penalty term? Thanks~
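For readers landing here, the following is a hedged sketch of how the two terms described above fit together, in the spirit of equations (1) and (2) but not the repository's actual implementation; all names, shapes, and default values are illustrative:

```python
import tensorflow as tf

def occlusion_aware_loss(photometric_diff, fw_bw_diff, occ_threshold=0.5, lambda_p=0.1):
    """photometric_diff: per-pixel brightness error of the warped image;
    fw_bw_diff: forward-backward flow consistency error; both [B, H, W, 1]."""
    # Mask computed by comparison, as in equation (1). Being the result of a
    # comparison, it is constant with respect to backpropagation.
    occ_mask = tf.cast(tf.greater(fw_bw_diff, occ_threshold), tf.float32)
    # Data term evaluated only where pixels are considered visible, plus the
    # lambda_p * o_x penalty that discourages marking everything as occluded,
    # as in equation (2).
    data_term = tf.reduce_mean((1.0 - occ_mask) * photometric_diff)
    penalty_term = lambda_p * tf.reduce_mean(occ_mask)
    return data_term + penalty_term
```

In this sketch, gradients reach the network only through `photometric_diff`; they do not pass through `occ_mask` itself, which is the point made in the reply above.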