Confused about how to optimize #3
Comments
Dear @UpCoder, No problem. Thank you for your interest.
Thank you for your reply.
Dear @UpCoder, What you said is exactly right. The use of the word “unsupervised” causes a fair amount of confusion and also some well-founded doubts. By today’s generally accepted definition of “unsupervised” (not using GT labels during training), this may be considered a misuse of the word. But the usage in the sense you just described, I believe, originated from the once-canonical setting in VOS, which uses human input as supervision to guide the algorithm at test time, since at that time “supervision” at test time was fairly common.
OK, got it. Thank you!
@yz93 Hi, for the training step, do you compute the loss for the anchor image, which is not mentioned in the paper? |
@mingminzhen No. The loss is binary cross-entropy with logits on the output of the network with GT binary labels. |
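The loss described above can be sketched in plain Python. This is a hedged illustration of binary cross-entropy with logits (the numerically stable form that operates on raw network outputs, before any sigmoid); the function names and tensor shapes are assumptions for illustration, not code from this repository.

```python
import math

def bce_with_logits(logit, target):
    # Numerically stable BCE-with-logits for a single prediction:
    #   max(x, 0) - x * z + log(1 + exp(-|x|))
    # where x is the raw logit and z is the binary GT label (0 or 1).
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))

def mask_loss(logits, targets):
    # Mean BCE-with-logits over a flattened predicted mask vs. the GT binary mask.
    assert len(logits) == len(targets)
    return sum(bce_with_logits(x, z) for x, z in zip(logits, targets)) / len(logits)
```

For example, a confident correct prediction (large positive logit with target 1) yields a near-zero loss, while a logit of 0 yields log 2 regardless of the target.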
@yz93 Hi, for the training step, what is the scale range for random resizing? If possible, could you provide the data augmentation code?
Hi, I am confused about how to optimize.
1. What is the loss function? It does not seem to be mentioned in the paper. Is it just BCE?
2. What is the ground truth?
Thank you!