
Typo in Eq. 5? #2

Closed

jahaniam opened this issue Aug 29, 2019 · 8 comments

Comments

@jahaniam

Dear Author,

Thank you for sharing your code.

I have a question about equation 5:

[Screenshot of Eq. 5 from the paper]

Did you mean h = β·log y − β·log y*?
The way it is described in the paper, the betas will cancel each other out and have no effect!

Please correct me if I am wrong.
Thanks,
Ali

@cogaplex-bts
Collaborator

You are right.
Mathematically, they simply cancel because of the logarithms.
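Spelled out (a sketch, assuming Eq. 5 applies β inside the logarithm to both the estimated depth y and the ground truth y*):

```latex
h = \log(\beta y) - \log(\beta y^{*})
  = (\log \beta + \log y) - (\log \beta + \log y^{*})
  = \log y - \log y^{*}
```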
That ugly trick was just an empirical choice and will be improved in the next release.
In fact, we are preparing a new release due to an inconsistency (a small difference in performance) between the currently released model and the paper.
Thanks for kindly pointing this out. =)

@jahaniam
Author

Thanks for the code. I like the idea! :)

Now that I'm looking at the code, I see that beta is not implemented. I was wondering whether you removed beta from this released code. I'm trying to reproduce the result you submitted to the benchmark, so please let me know if anything is missing for that.
I was also wondering whether the model provided for the Eigen split is the same model you submitted to the KITTI benchmark. If not, could you mention the changes you made for the submitted model?

If you don't mind, here are some inconsistencies I noticed, in case you want to edit your paper:
The default parameters for the Eigen split (i.e., batch size and rotation degree) don't match the paper.

I also think n3 should be passed through a separate sigmoid (not tanh), and then n1, n2 (after tanh) and n3 (after sigmoid) should be passed to the L2 normalization; a sketch of this is included after the figure below. Nice trick to avoid negative normals, by the way :) I think you just need to update the figure.

[Screenshot of the network architecture figure]
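A minimal sketch of that suggestion (PyTorch-style, with hypothetical tensor and function names; not the repository's actual implementation):

```python
import torch
import torch.nn.functional as F

def estimate_normal(raw):
    # raw: (B, 3, H, W) unnormalized plane-normal logits from the network
    n1 = torch.tanh(raw[:, 0:1])     # x-component in [-1, 1]
    n2 = torch.tanh(raw[:, 1:2])     # y-component in [-1, 1]
    n3 = torch.sigmoid(raw[:, 2:3])  # z-component in (0, 1), so normals never flip sign
    n = torch.cat([n1, n2, n3], dim=1)
    return F.normalize(n, p=2, dim=1)  # unit-length surface normal per pixel
```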

Thanks,
Ali

@jahaniam jahaniam reopened this Sep 11, 2019
@cogaplex-bts
Collaborator

Hello, nice to hear from you again. =)
As I mentioned before, I updated the code so that β is no longer included in the training loss, and I also updated the model file, which was retrained with the modified loss function.
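For concreteness, a minimal sketch of a scale-invariant log loss without β (PyTorch-style; the function name, the variance_focus value, and the final scaling here are assumptions, not necessarily what this repository ships):

```python
import torch

def silog_loss(depth_est, depth_gt, mask, variance_focus=0.85):
    # Log-depth residuals over valid pixels; a constant scale beta applied
    # to both depths would cancel inside this subtraction anyway.
    d = torch.log(depth_est[mask]) - torch.log(depth_gt[mask])
    # Scale-invariant term: second moment minus a weighted squared mean.
    return torch.sqrt((d ** 2).mean() - variance_focus * d.mean() ** 2) * 10.0
```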
As a matter of fact, the model submitted to the KITTI online benchmark is different from the one trained on the Eigen split. The benchmark version was trained on sampled examples (about 57,000 RGB and GT pairs) from the whole dataset, but the parameters are almost the same: batch_size 16, num_gpus 4, num_epochs 50, input_height 352, input_width 704, do_kb_crop, do_random_rotate, degree 1.0.
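In the repository's argument-file format, that benchmark configuration would look roughly like this (a sketch only; this exact file is not part of the repository):

```
--batch_size 16
--num_gpus 4
--num_epochs 50
--input_height 352
--input_width 704
--do_kb_crop
--do_random_rotate
--degree 1.0
```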
The model for the Eigen split was trained with the parameters in "arguments_train_eigen.txt"; only batch_size and num_gpus differ (they were actually 16 and 4, respectively), because I assume most users will run our code with only one GPU.
Finally, I have to say that this work is still in progress.
I mean, before formal publication (not the arXiv preprint), our code and paper will be updated.
This is why we made our code and paper publicly available: to get this kind of very constructive review from you! =)
Thanks,
Jin Han

@cogaplex-bts
Collaborator

Closing this issue due to inactivity.

@luohongcheng

Hi, is the model submitted to the KITTI online benchmark trained on the official "annotated depth maps data set (14 GB)", which contains only 43,000 RGB images?

@cogaplex-bts
Collaborator

@luohongcheng Yes. The submitted model was fine-tuned using only the official ground-truth depth maps, with the backbone network pretrained on ImageNet.

@luohongcheng

Would you please share the file lists you selected from the official training set? Thanks.
