
Loss function for dnorm in predcls #8

Closed
jkli-aa opened this issue Apr 29, 2022 · 3 comments


jkli-aa commented Apr 29, 2022

[The question was posted as a screenshot image; original text not recoverable.]


jkli-aa commented Apr 30, 2022

I got it. The $(FG+BG)/FG$ weighting will enlarge the loss for large graphs.
Sorry to bother you.

@jkli-aa jkli-aa closed this as completed Apr 30, 2022
bknyaz (owner) commented May 2, 2022

Hi, I'm glad you've resolved this issue. You are right in your last comment, but I just wanted to add that we also tested a variant where the edge loss weight is fixed for all batches (equivalent to adjusting the learning rate, as you mentioned). That variant also worked well in some cases (see the paper for details). In contrast, the weight of our edge loss adapts to each batch of images/scene graphs and does not require tuning for a new dataset (where scene graphs may have different sparsity), so these properties of our loss can be helpful in practice.
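The batch-adaptive weight being discussed can be sketched as follows. This is a minimal illustration of the $(FG+BG)/FG$ idea only, not the repository's actual code; the function name and label convention (label 0 = background/no-relation edge) are assumptions for the example:

```python
def edge_loss_weight(edge_labels):
    """Per-batch edge loss weight (FG + BG) / FG.

    edge_labels: iterable of integer edge labels for one batch,
    where 0 denotes a background (no-relation) edge and any
    positive value denotes a foreground relation.

    Sparse batches (few foreground edges relative to background)
    get a larger weight, so the edge loss stays comparable across
    batches with different scene-graph sparsity.
    """
    fg = sum(1 for y in edge_labels if y > 0)  # foreground edge count
    total = len(edge_labels)                   # FG + BG
    # Guard against batches with no foreground edges.
    return total / max(fg, 1)
```

In practice this scalar would multiply the per-batch edge classification loss; because it is recomputed from each batch's labels, no per-dataset tuning is needed.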


jkli-aa commented May 2, 2022

It really addressed my concerns. Thank you for your kind reply.
