occ loss implementation problem #8
Comments
This seems to work:

```python
fw_occ_bound_margin = length_sq(flow_diff_fw) - occ_thresh
bw_occ_bound_margin = length_sq(flow_diff_bw) - occ_thresh
# (tf.sign(x) + 1) / 2 is 1 where x > 0 and 0 where x < 0; multiplying by the
# margin itself keeps the loss differentiable through the second factor.
fw_occ_bound_margin = (tf.sign(fw_occ_bound_margin) + 1.0) / 2 * fw_occ_bound_margin
bw_occ_bound_margin = (tf.sign(bw_occ_bound_margin) + 1.0) / 2 * bw_occ_bound_margin
losses['occ'] = (charbonnier_loss(fw_occ_bound_margin, border_fw * fb_occ_fw) +
                 charbonnier_loss(bw_occ_bound_margin, border_bw * fb_occ_bw))
```
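As a sanity check on the proposed rewrite: `(sign(m) + 1) / 2 * m` is just the hinge `max(m, 0)`, so gradients flow (via the multiplication) wherever the margin is positive. A minimal numeric sketch in plain Python, with no TensorFlow dependency:

```python
# Check that (sign(m) + 1) / 2 * m equals the hinge max(m, 0), and that
# its slope is ~1 on the positive-margin side and 0 on the negative side.

def sign(x):
    # Mirrors tf.sign for nonzero scalars: -1, 0, or +1.
    return (x > 0) - (x < 0)

def masked_margin(m):
    # The proposed fix: zero out negative margins, pass positive ones through.
    return (sign(m) + 1.0) / 2.0 * m

def finite_diff(f, x, eps=1e-6):
    # Central finite difference as a stand-in for the analytic gradient.
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

for m in (-2.0, -0.5, 0.5, 2.0):
    assert masked_margin(m) == max(m, 0.0)

print(finite_diff(masked_margin, 1.0))   # ~1.0: gradient flows for positive margins
print(finite_diff(masked_margin, -1.0))  # 0.0: masked-out pixels contribute nothing
```

So the rewrite penalizes only forward-backward differences above `occ_thresh`, while remaining differentiable where the penalty is active.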
Hi! Thanks for noticing this. I'll have to look into it in more detail in the next few days. Did you check whether your correction changes training outcomes?
I haven't had the time to fully run the training yet. The value of the occ loss is much smaller this way; maybe some weight tuning is required to get good results.
By the way, @simonmeister, how many minibatches are required to achieve AEE(All) 3.78 in Table 3 for UnFlow-C?
On which dataset did you train? Did you pre-train on synthia? If you pre-trained first, 400K iterations on KITTI should get you close to our result. We used the same config settings as in the config.ini train_* sections. |
No pre-training, I believe. So to reproduce your result, should I first pre-train on synthia with a supervised method, and then train on kitti_raw with the unsupervised loss?
The pre-training on synthia is also unsupervised. Yes, you can just use the default config with dataset = synthia first and then use dataset = kitti and finetune = NAME-OF-SYNTHIA-EXPERIMENT. |
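The two-stage schedule described above can be sketched as two successive runs with the default config. This is only an illustration of the two options named in the reply (`dataset` and `finetune`); other keys and the exact section layout of the repo's config.ini are not shown here:

```ini
; Run 1: unsupervised pre-training on SYNTHIA
dataset = synthia

; Run 2: fine-tune on KITTI, loading the SYNTHIA experiment's weights
dataset = kitti
finetune = NAME-OF-SYNTHIA-EXPERIMENT
```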
Great, thanks!
Hi, @simonmeister
OK, thanks! It seems that this regularization term may not be crucial, or that a better form should be derived.
I am closing this as it seems to be redundant with the discussion in #10.
Hi,

In losses.py, the forward-backward occlusion loss is implemented with a tf.greater comparison. However, gradients cannot backpropagate through the tf.greater operation: taking tf.gradients through it would output [None]. Does that mean the occ loss is not working?

Correct me if anything is wrong.
Thanks