
Question about obj_scale_loss #13

Open

comvee opened this issue Sep 27, 2022 · 0 comments

comvee commented Sep 27, 2022

Hello, thank you for the nice work!
I have a question about obj_scale_loss.
Why do you use different forms of the scale loss in the training and validation phases?
Specifically, in the training phase:

obj_scale_loss += self.crit_reg(output['scale'], batch['reg_mask'],
                                batch['ind'], batch['scale']) / opt.num_stacks

loss = torch.abs(target * mask - pred * mask).sum(dim=(2, 3))

and in the validation phase:

# Calculate relative loss only on validation phase
obj_scale_loss += self.crit_reg(output['scale'], batch['reg_mask'],
                                batch['ind'], batch['scale'], relative_loss=True) / opt.num_stacks

target_rmzero = target.clone()
target_rmzero[target_rmzero == 0] = 1e-06
loss = torch.abs((1 * mask - pred * mask) / target_rmzero).sum(dim=(2, 3))

torch.abs(target * mask - pred * mask) and torch.abs((1 * mask - pred * mask) / target_rmzero) do not produce the same values.
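For example, a quick check with toy tensors (the shapes and values below are made up for illustration, not taken from the repo) shows the two forms measure different quantities:

import torch

# Toy (B, C, H, W) = (1, 1, 2, 2) regression maps -- illustrative values only
pred = torch.tensor([[[[1.0, 2.0], [3.0, 4.0]]]])    # predicted scales
target = torch.tensor([[[[2.0, 2.0], [6.0, 4.0]]]])  # ground-truth scales
mask = torch.tensor([[[[1.0, 1.0], [1.0, 0.0]]]])    # valid-object mask

# Training-phase form: masked absolute difference
l1 = torch.abs(target * mask - pred * mask).sum(dim=(2, 3))

# Validation-phase form, exactly as quoted above
target_rmzero = target.clone()
target_rmzero[target_rmzero == 0] = 1e-06
rel = torch.abs((1 * mask - pred * mask) / target_rmzero).sum(dim=(2, 3))

print(l1)   # tensor([[4.]])
print(rel)  # tensor([[0.8333]])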
Could you explain what the "relative loss" means and why it is used only in the validation phase?
