
loss during training #18

Closed
xjtuljy opened this issue Nov 29, 2016 · 4 comments

xjtuljy commented Nov 29, 2016

I am curious about the loss during training: what is the typical loss at convergence in your work (I think you use 256×256 images for training)?

Many thanks.

Jianyu

huangzehao (Owner) commented

x2 loss: 0.16~0.17
training sample size 41x41
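
For a sense of scale, here is a rough back-of-the-envelope sketch (my own illustration, not from the thread) of what that loss implies per pixel, assuming Caffe's EuclideanLoss E = 1/(2N) · Σ‖y − y′‖² normalizes by the batch size N only and each sample is a single-channel 41×41 patch on a [0,1] gray scale:

```python
import math

# Assumed numbers: reported x2 loss (midpoint of 0.16~0.17) and 41x41 patches.
loss = 0.165
pixels = 41 * 41

mse = 2.0 * loss / pixels      # undo the 1/(2N) normalization, spread over pixels
rmse = math.sqrt(mse)          # per-pixel RMSE on a [0,1] gray scale
psnr = 20.0 * math.log10(1.0 / rmse)

print(f"MSE={mse:.2e}  RMSE={rmse:.4f}  PSNR={psnr:.1f} dB")
# Roughly RMSE ~ 0.014 and PSNR ~ 37 dB under these assumptions.
```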


xjtuljy commented Dec 1, 2016

Thanks @huangzehao.
What is the gray level of your input image: is it [0,1] or [0,255]?
If my input is in [0,1], then since the EuclideanLoss is E = 1/(2N) × Σ(y − y′)², should I expect the maximum loss to be 0.5?

Jianyu

huangzehao (Owner) commented

Hi, the gray level of the input image is [0,1].
The max loss is not 0.5, since the output of the network is not limited to [0,1].
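
A minimal NumPy sketch of this point (illustrative only, not the repository's code), using the per-element normalization assumed in the question above:

```python
import numpy as np

def euclidean_loss(y, y_pred):
    # N taken as the total element count, matching the formula in the question
    return np.sum((y - y_pred) ** 2) / (2.0 * y.size)

y = np.zeros((41, 41))               # label in [0,1]
pred_bounded = np.ones((41, 41))     # worst case if the output stayed in [0,1]
pred_raw = np.full((41, 41), 3.0)    # an unclipped network output can leave [0,1]

print(euclidean_loss(y, pred_bounded))  # 0.5 -- the bound the question expects
print(euclidean_loss(y, pred_raw))      # 4.5 -- exceeds 0.5, as the reply notes
```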


xjtuljy commented Dec 6, 2016

Thanks. I am using a similar network with a EuclideanLoss layer. My input is in [0,255], but I scale it to [0,1] using

```
transform_param {
  scale: 0.00390625
}
```

That way I end up with a loss of ~1000 at convergence, with a fixed learning rate and without fine-tuning. The output during testing still seems to make sense, so I wonder: is the ~1000 loss I see the result of 256 × the Euclidean loss?
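
One hedged note that may help untangle this (my own sketch, not from the thread): EuclideanLoss is quadratic in the data, so rescaling both the label and the prediction by a factor k multiplies the loss by k², not k. A factor of exactly 256 on the loss would therefore correspond to data scaled by √256 = 16, while a full [0,1] vs. [0,255] mismatch would scale it by roughly 256²:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random((41, 41))                            # label on a [0,1] scale
y_pred = y + 0.01 * rng.standard_normal((41, 41))   # prediction close to the label

def euclidean_loss(y, y_pred):
    # Per-element normalization, as in the formula discussed above
    return np.sum((y - y_pred) ** 2) / (2.0 * y.size)

base = euclidean_loss(y, y_pred)
scaled = euclidean_loss(256 * y, 256 * y_pred)      # same data on a [0,256] scale

print(scaled / base)   # ~65536 == 256**2: the loss scales with the square of k
```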
