Problem with training loss #18
Comments
@iWeisskohl I have met the same problem. Have you solved it?
@Hwang64 Not yet. There might be some problem with the batch size, but I can't figure it out from the provided code.
@iWeisskohl Referring to the TensorFlow version, the learning rate should be set to 0.0001 rather than 0.01. With that change I get accuracy that is almost the same as the paper reports; you may want to give it a try.
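For illustration, here is a minimal sketch of the hyperparameter change being suggested, written as a generic PyTorch-style optimizer setup rather than this repository's actual training code (the model is a placeholder, and the momentum/weight-decay values are assumed typical defaults, not taken from the thread):

```python
import torch

# Placeholder classifier head standing in for the real network; the
# learning-rate value is the only point of this sketch.
model = torch.nn.Linear(4096, 40)  # e.g. 40 ModelNet40 classes

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-4,            # 0.0001 as suggested above, instead of 0.01
    momentum=0.9,       # assumed typical value, not confirmed by the thread
    weight_decay=5e-4,  # assumed typical value, not confirmed by the thread
)
```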
@Hwang64 Thanks so much. I got almost the same results. But I have one more question: have you paid attention to the training and test data? The provided data does not seem to be the same size as reported in the paper; it only contains about 3000 training and 800 test samples, but it should be the same size as ModelNet40.zip from http://3dshapenets.cs.princeton.edu/.
Actually, I met the same problem, and I found that this issue can be solved by modifying the batch_size.
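One possible reason the batch size matters here (an assumption on my part, not something stated in the thread) is that multi-view CNN training groups a fixed number of rendered views per object, commonly 12, so a batch size that is not a multiple of the view count would split an object's views across batches. A small sanity check under that assumption:

```python
NUM_VIEWS = 12  # assumed views per object; adjust to the actual dataset

def full_batches(batch_size: int, num_view_images: int) -> int:
    """Validate that a batch holds whole objects and count the full batches,
    dropping any trailing incomplete batch."""
    if batch_size % NUM_VIEWS != 0:
        raise ValueError(
            f"batch_size={batch_size} is not a multiple of NUM_VIEWS={NUM_VIEWS}; "
            "an object's views would be split across batches."
        )
    return num_view_images // batch_size

# Example: 32 is not divisible by 12, while 36 is.
print(full_batches(36, 36_000))   # 1000 full batches
# full_batches(32, 36_000)        # raises ValueError
```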
Thanks all for contributing to the solution!
Hi @suhangpro,
Thanks for your code. But when I run it, I only get loss = 87.33. I have checked the labels and the input, and they are correct. I am wondering whether there is some problem with the batch size in the data layer as you define it, because when I try to set batch_size to 32, it runs into problems.
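As a side note on the 87.33 value: Caffe-style softmax losses clamp the predicted probability at single-precision FLT_MIN, and -ln(FLT_MIN) ≈ 87.3365, so a loss pinned at 87.33 usually means the true-class probability has underflowed to zero, i.e. training has diverged, which is consistent with the learning-rate fix suggested above. A quick check of the constant:

```python
import numpy as np

# -ln(smallest normal float32) reproduces the suspicious 87.33 loss value seen
# when the true-class softmax probability underflows and is clamped to FLT_MIN.
flt_min = np.finfo(np.float32).tiny   # ~1.1754944e-38
print(-np.log(flt_min))               # ~87.3365
```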