This repository has been archived by the owner on Aug 19, 2020. It is now read-only.
I am curious about the loss during training: what is the typical loss when it converges in your work? (I think you use 256×256 images for training?)
Many thanks.
Jianyu
Thanks @huangzehao .
What is the gray level of your input image: is it in [0,1] or [0,255]?
If my input is in [0,1], then since the Euclidean loss is E = 1/(2N) × sum((y − y')²), should I expect the maximum loss to be 0.5?
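As a quick sanity check (my own sketch, not code from this repository), if N is taken to be the total number of elements, the loss above is indeed bounded by 0.5 whenever both target and prediction lie in [0,1], since each squared difference is at most 1:

```python
import numpy as np

def euclidean_loss(y, y_pred):
    # E = 1/(2N) * sum((y - y_pred)^2), with N = total number of elements
    n = y.size
    return np.sum((y - y_pred) ** 2) / (2.0 * n)

# Worst case in [0,1]: every pixel is maximally wrong
y = np.ones((256, 256))
y_pred = np.zeros((256, 256))
print(euclidean_loss(y, y_pred))  # 0.5, the maximum possible value
```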
Thanks. I am using a similar network with a EuclideanLoss layer. My input is in [0,255], but I rescale it to [0,1] using
transform_param {
  scale: 0.00390625  # = 1/256
}
With that, I end up with a loss of ~1k at convergence, with a fixed learning rate and no fine-tuning. The output during testing still seems reasonable, so I wonder: is the ~1k loss I got the result of 256× the Euclidean loss?
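One likely explanation (a sketch of my own, assuming Caffe's EuclideanLoss behavior of dividing the summed squared error by 2 × batch size rather than by the number of pixels): the reported loss then scales with image resolution, so even a small per-pixel error on 256×256 images produces a loss in the hundreds or thousands. The numbers below are hypothetical, just to illustrate the scaling:

```python
import numpy as np

def caffe_euclidean_loss(y, y_pred):
    # Sum squared error over ALL elements, divide by 2 * batch_size only --
    # no averaging over pixels, so the value grows with image size.
    batch_size = y.shape[0]
    return np.sum((y - y_pred) ** 2) / (2.0 * batch_size)

# Hypothetical batch: 8 single-channel 256x256 targets in [0,1]
rng = np.random.default_rng(0)
y = rng.random((8, 1, 256, 256))
# Prediction with a small per-pixel error (std 0.1), clipped back to [0,1]
y_pred = np.clip(y + 0.1 * rng.standard_normal(y.shape), 0.0, 1.0)

loss = caffe_euclidean_loss(y, y_pred)
# Per image, loss ~= (256*256) * mean_squared_error / 2, so a per-pixel
# MSE of ~0.01 already gives a loss of a few hundred.
print(loss)
```

So a loss of ~1k at convergence need not mean the network is failing; dividing by the number of pixels (65536 for 256×256) gives the per-pixel MSE, which is the more interpretable number.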