In your paper "Learning to Adapt Structured Output Space for Semantic Segmentation", loss_adv is designed to train the segmentation network and fool the discriminator by maximizing the probability of the target prediction being considered as the source prediction. Do you mean we should maximize loss_adv when training the segmentation network? If so, where do you maximize loss_adv in your code?
```python
# Forward pass on a target-domain image, upsampled to the target resolution
pred_target1, pred_target2 = model(images)
pred_target1 = interp_target(pred_target1)
pred_target2 = interp_target(pred_target2)

# The discriminators score the softmax segmentation outputs
D_out1 = model_D1(F.softmax(pred_target1))
D_out2 = model_D2(F.softmax(pred_target2))

# Adversarial loss: the target outputs are compared against the *source* label
loss_adv_target1 = bce_loss(D_out1, Variable(torch.FloatTensor(D_out1.data.size()).fill_(source_label)).cuda(args.gpu))
loss_adv_target2 = bce_loss(D_out2, Variable(torch.FloatTensor(D_out2.data.size()).fill_(source_label)).cuda(args.gpu))

loss = args.lambda_adv_target1 * loss_adv_target1 + args.lambda_adv_target2 * loss_adv_target2
loss = loss / args.iter_size
loss.backward()
```
This is the code related to loss_adv in your project.
Thank you very much!
loss_adv is the loss of a target sample being classified as source, so the segmentation network should minimize this loss in order to fool the discriminator (the discriminator is trained separately and learns to correctly classify target and source samples).
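To make the sign of the objective concrete, here is a minimal sketch, not the exact code from this repo: the discriminator output D_out, the label convention source_label = 0 / target_label = 1, and the use of nn.BCEWithLogitsLoss are assumptions for illustration. It shows why minimizing the BCE loss against the source label is the same as "maximizing the probability of the target prediction being considered as the source prediction":

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming source_label = 0, target_label = 1 and a
# plain BCE-with-logits loss; D_out stands in for discriminator logits.
bce_loss = nn.BCEWithLogitsLoss()
source_label, target_label = 0.0, 1.0

# Placeholder for the discriminator's output on a *target* prediction.
D_out = torch.randn(1, 1, 32, 64, requires_grad=True)

# Segmentation-network (generator) side: label the target prediction as
# "source". Minimizing this loss pushes D_out toward the source label,
# i.e. it maximizes the probability that the discriminator calls the
# target prediction a source prediction.
loss_adv = bce_loss(D_out, torch.full_like(D_out, source_label))
loss_adv.backward()

# Discriminator side (trained in a separate step, with the prediction
# detached): the same output is labeled "target", so the two players
# pull D_out in opposite directions.
loss_D_target = bce_loss(D_out.detach(), torch.full_like(D_out, target_label))
```

So there is no explicit maximization in the code: writing the adversarial objective as a minimization against the opposite (source) label is the standard GAN-style formulation of the fooling step.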