Thank you for the excellent work.
In your paper, you stated that "by stopping gradients from propagating through the targets of our loss, we get significantly worse performance – in fact, the optimizer does not manage to pull down the cross-entropy of any of the learned representations z(s) significantly".
What do you mean by stopping gradients? Do you have a method to force gradients to propagate through the loss targets?
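For context on what "stopping gradients through the targets" usually means, here is a minimal toy sketch (my own illustration, not the authors' code). The loss compares a variable `z` against a target `t(z)` that itself depends on `z`; with a stop-gradient, `t(z)` is treated as a constant during backpropagation, so the gradient path through the target is cut. In PyTorch this is `tensor.detach()`, in TensorFlow `tf.stop_gradient`, in JAX `jax.lax.stop_gradient`. The toy uses manual derivatives so the difference is explicit:

```python
# Toy loss: loss(z) = (z - t(z))**2 with target t(z) = 0.5 * z.
# The two functions below return d(loss)/dz with and without letting
# the gradient flow through the target.

def grad_through_target(z):
    # Full backprop through the target:
    # d/dz (z - 0.5*z)^2 = 2*(z - 0.5*z) * (1 - 0.5)
    t = 0.5 * z
    return 2 * (z - t) * (1 - 0.5)

def grad_with_stop_gradient(z):
    # Stop-gradient on the target: t is treated as a constant,
    # so d/dz (z - t)^2 = 2*(z - t)
    t = 0.5 * z
    return 2 * (z - t)

print(grad_through_target(2.0))     # 1.0
print(grad_with_stop_gradient(2.0)) # 2.0
```

The two gradients differ, so the optimizer takes different steps; the quoted passage says that cutting this path (the stop-gradient variant) made their training fail to reduce the cross-entropy of the representations z(s).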
I built my own model; the network is able to learn the RGB scale but has high losses at the lower scales.