Hi Maxim,

You present very interesting and solid work! However, I ran into an implementation error while using your Lovász loss in my DeepLab v3+. My original loss is `tf.losses.softmax_cross_entropy`, for which I prepared as inputs:

onehot_labels: [batch_size, num_classes] target one-hot-encoded labels.
logits: [batch_size, num_classes] logit outputs of the network.

These just don't fit your loss. Could you please give some advice on how to convert these original arguments into your loss's parameters, `probas` and `labels`? Thank you!
Hello,

Indeed, the loss does not expect a one-hot label encoding, but simply integer class labels. See this line for the expected inputs.

You may change your data loader to provide integer-labelled images, or convert one-hot labels to integers with e.g. argmax, although that wastes some computation.
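For reference, the one-hot → integer conversion suggested above can be sketched with NumPy; the same idea applies to `tf.argmax` over the class axis in a TensorFlow graph. The shapes and values below are illustrative assumptions, not the repo's actual API:

```python
import numpy as np

# Hypothetical one-hot labels for a 1x2x2 image batch with 3 classes:
# shape [batch, height, width, num_classes]
onehot = np.zeros((1, 2, 2, 3), dtype=np.float32)
onehot[0, 0, 0, 2] = 1.0  # pixel (0, 0) -> class 2
onehot[0, 0, 1, 0] = 1.0  # pixel (0, 1) -> class 0
onehot[0, 1, 0, 1] = 1.0  # pixel (1, 0) -> class 1
onehot[0, 1, 1, 2] = 1.0  # pixel (1, 1) -> class 2

# argmax over the class axis recovers integer labels of
# shape [batch, height, width], which is what the loss expects.
labels = np.argmax(onehot, axis=-1)
print(labels.tolist())  # [[[2, 0], [1, 2]]]
```

For the `probas` argument, the network's logits would typically be passed through a softmax (e.g. `tf.nn.softmax` over the class axis) rather than fed in raw.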