"Restarting data prefetching from start" repeated many times one by one. why? is it wrong... #1833
Comments
In general it means that you don't have enough training data and it has to use the data over and over again. For your specific case there's not enough information to determine the reason.
I0213 15:45:11.502871 2555 net.cpp:652] Copying source layer loss
Please ask usage questions on caffe-users -- the issues tracker is primarily for Caffe development discussion.
Hello, have you solved this problem? I have hit the same issue. My net is also AlexNet; my train set is 50000 images and my test set is 15000. I have tried reducing the batch_size to 32, but it seems useless.
"Restarting data prefetching from start" means that at this point all of your training/validation data has been seen by the network, and it will now start reading from the beginning again. In other words, one epoch is complete.
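To illustrate the behavior described above, here is a small Python sketch (not Caffe's actual C++ source) of how a prefetcher that walks a dataset sequentially wraps around at the end of each pass, emitting the restart message once per epoch. The function name and dataset are made up for illustration:

```python
def prefetch_batches(dataset, batch_size):
    """Yield fixed-size batches forever, logging whenever the read
    cursor wraps past the end of the dataset (i.e. one epoch done)."""
    cursor = 0
    while True:
        batch = []
        for _ in range(batch_size):
            if cursor == len(dataset):
                print("Restarting data prefetching from start")
                cursor = 0
            batch.append(dataset[cursor])
            cursor += 1
        yield batch

# With a small dataset relative to batch size, the message shows up
# frequently: 100 samples / batch 32 wraps twice within 7 iterations.
gen = prefetch_batches(list(range(100)), batch_size=32)
for _ in range(7):
    next(gen)
```

Seeing this message repeatedly is therefore normal; it only signals that the network is cycling through the (small) dataset many times.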
The backend is LMDB, the source image type is '.tif', and the batch size is 32 with AlexNet. The train set is 1680 images and the test set is 420.
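For the numbers above, how often the restart message appears follows directly from dataset size and batch size. A rough calculation (assuming sequential reading with no shuffling):

```python
# Numbers taken from the post above.
train_size = 1680   # images in the training LMDB
batch_size = 32

iters_per_epoch = train_size / batch_size
print(iters_per_epoch)  # 52.5 -> the restart message appears roughly
                        # every 52-53 training iterations
```

So with thousands of training iterations, the message will be printed dozens of times; that by itself does not indicate anything is wrong.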