
How to do BP for occlusion mask penalty? #10

Closed
EthanZhangYi opened this issue Jan 22, 2018 · 6 comments
@EthanZhangYi

Hi, @simonmeister
Thanks for your nice work and repo.
I've read your paper and there is one thing that I do not understand.
In equation (2), the occlusion mask o_x is penalized with weight lambda_p. However, in equation (1) this mask is calculated by a comparison. How is back-propagation done for this penalty term?
[images: equations (1) and (2) from the paper]

Thanks~

@simonmeister (Owner)

Hi! Looking at this again, the occlusion flag should be constant w.r.t. backpropagation, and the penalty should have no effect on backprop.
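
For intuition, here is a minimal sketch of how the mask and penalty might be assembled in TensorFlow. It is not the repo's actual code: the tensor names (`flow_fw`, `flow_bw_warped`, `photo_error`) and the threshold constants are illustrative assumptions.

```python
import tensorflow as tf

def masked_data_loss(flow_fw, flow_bw_warped, photo_error, lambda_p,
                     alpha_1=0.01, alpha_2=0.5):
    """Sketch of an occlusion-masked data term.

    flow_fw:        forward flow, shape [B, H, W, 2]
    flow_bw_warped: backward flow warped to the first frame, [B, H, W, 2]
    photo_error:    per-pixel photometric error, [B, H, W]
    """
    # Forward-backward consistency check: flag a pixel as occluded when
    # the squared forward-backward mismatch exceeds a magnitude-dependent
    # threshold (constants here are placeholders).
    mag_sq = (tf.reduce_sum(tf.square(flow_fw), axis=3) +
              tf.reduce_sum(tf.square(flow_bw_warped), axis=3))
    fb_sq_diff = tf.reduce_sum(tf.square(flow_fw + flow_bw_warped), axis=3)
    occ = tf.cast(fb_sq_diff > alpha_1 * mag_sq + alpha_2, tf.float32)

    # The boolean comparison has no gradient, so the mask is a constant
    # w.r.t. backprop; tf.stop_gradient makes this explicit.
    occ = tf.stop_gradient(occ)

    # Masked photometric error plus a fixed penalty per occluded pixel.
    # Gradients reach the flow only through photo_error; the occ * lambda_p
    # term contributes nothing to the gradient.
    return tf.reduce_sum((1.0 - occ) * photo_error + occ * lambda_p)
```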

@EthanZhangYi (Author)

@simonmeister Thanks for your reply.
If the penalty has no effect on backprop, what is lambda_p used for in equation (2)? In Table 2, this penalty term really makes a difference.
Another question is how to avoid the trivial solution where all pixels are occluded, if the penalty has no effect on backprop. This trivial solution is mentioned in the paper in connection with equation (2).
Does disabling occlusion handling and forward-backward consistency for the SYNTHIA pre-training avoid this trivial solution?

@simonmeister (Owner) commented Jan 31, 2018

In Table 2, we don't directly look at the influence of the penalty, but at that of the occlusion handling mechanism (which the penalty is part of). At the time of writing, we included the penalty, but it may really be the case that the pre-training without occlusion handling helps avoid the trivial solution.

@EthanZhangYi (Author) commented Feb 1, 2018

Thanks @simonmeister.
I think it is reasonable that the pre-training without occlusion handling helps avoid the trivial solution. Maybe you could pre-train your model on the SYNTHIA dataset with occlusion handling to see whether that is the case. Another solution is to find a differentiable function to approximate the inequality in equation (1) of the paper, just like the approximation used for the ternary census transform.
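
For instance, the hard comparison could be replaced with a sigmoid so that gradients flow back into both flow fields. A rough sketch only; `beta` is a hypothetical sharpness parameter, not something from the paper:

```python
import tensorflow as tf

def soft_occlusion_mask(flow_fw, flow_bw_warped,
                        alpha_1=0.01, alpha_2=0.5, beta=10.0):
    # Differentiable relaxation of the forward-backward check: squash the
    # signed margin through a sigmoid instead of thresholding it, so the
    # lambda_p penalty can push the mask (and thus the flow) via gradients.
    mag_sq = (tf.reduce_sum(tf.square(flow_fw), axis=3) +
              tf.reduce_sum(tf.square(flow_bw_warped), axis=3))
    fb_sq_diff = tf.reduce_sum(tf.square(flow_fw + flow_bw_warped), axis=3)
    margin = fb_sq_diff - (alpha_1 * mag_sq + alpha_2)
    return tf.sigmoid(beta * margin)  # ~1 where occluded, ~0 where visible
```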

Anyway, regardless of the issue with the penalty, this work is very fancy. :D

@Yuliang-Zou

Hi @simonmeister, how can the network avoid converging to the trivial solution if the penalty term cannot be back-propagated?

@simonmeister (Owner)

Hi @Yuliang-Zou,
Currently, we believe this is due to the pre-training without occlusion handling on SYNTHIA. We will soon try training without this pre-training to see whether that is the case.
