Question on Custom Data Training #12
2) & 3) You should also note that test_iter = 8 in my solver.prototxt, so every iteration consists of 8 forward/backward operations. Still, your experiments seem too slow; what is your hardware configuration? As for the number of training iterations, I believe you should find the best value by experimenting.
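If you want to double-check what your own solver actually does, something like the sketch below parses the solver file with Caffe's protobuf definitions and prints the relevant fields. This is just a sketch: it assumes pycaffe is importable, and the solver path is a placeholder for wherever your file lives.

```python
# Hedged sketch: inspect solver settings to see how many passes one "iteration" implies.
# Assumes pycaffe is on PYTHONPATH; the solver path below is a placeholder.
from caffe.proto import caffe_pb2
from google.protobuf import text_format

solver = caffe_pb2.SolverParameter()
with open('models/mnc_5stage/solver.prototxt') as f:  # placeholder path
    text_format.Merge(f.read(), solver)

print('test_iter: %s' % list(solver.test_iter))  # forward passes per test phase
print('max_iter: %d' % solver.max_iter)          # total training iterations
print('iter_size: %d' % solver.iter_size)        # gradient accumulation steps per iteration
```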
Hi @suhalim9, have you solved your problem? It seems we have a similar one. I also want to train MNC on my own dataset, which contains only one type of object. If you have solved your problem, could you please share your experience?
Hi @wuzheng-sjtu, I still have a problem with accuracy. My object type is very simple, without many visible features (unlike a face, an animal, etc.), so I wondered whether my issue was due to the nature of the object. In summary, I didn't see much success with the MNC model for the object I wanted to detect, but training for more iterations helped a bit. I also used the 5-stage model, which seemed slightly better than the 3-stage one. I don't think my experience would help you much. If you have a success story, please share. :)
Hi,
I am trying to train my own models on my own image data, mostly by copying and modifying your mnc_5stage code for the Pascal VOC dataset.
In the yml file, should MASK_SIZE be the same as the number of classes?
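For context, here is roughly how I dump my yml to check what it sets; the path, and any key name other than MASK_SIZE, is specific to my setup, so treat them as placeholders.

```python
# Hedged sketch: dump the experiment config to see MASK_SIZE next to the other keys.
# The yml path is a placeholder for my local config file.
import yaml

with open('experiments/cfgs/mnc_5stage.yml') as f:  # placeholder path
    cfg = yaml.safe_load(f)

for key, value in sorted(cfg.items()):
    print('%s = %s' % (key, value))

print('MASK_SIZE: %s' % cfg.get('MASK_SIZE'))
```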
I use 3000 image files, and it took roughly 3 days to run 4000 iterations. Does this sound right? I was originally using 5000 images and 45000 iterations, but it was taking too long.
My model currently has only one type of object, so I believe 3000 images are enough. But would you say 4000 iterations are enough as well?
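As a rough back-of-the-envelope check (assuming one image per iteration, which may not match the actual IMS_PER_BATCH), 4000 iterations over 3000 images is only a little more than one epoch:

```python
# Hedged sketch: rough epoch count for the numbers above.
# ims_per_iter = 1 is an assumption; adjust to the actual TRAIN.IMS_PER_BATCH.
num_images = 3000
num_iters = 4000
ims_per_iter = 1

epochs = num_iters * ims_per_iter / float(num_images)
print('approximate epochs over the training set: %.2f' % epochs)  # ~1.33 with these numbers
```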
In the log, accuracy_det and accuracy_det_ext appear to be over 95%, but when I manually test the model in an IPython Notebook using your demo code, it detects only one or two instances, and more often none, even on the training dataset, which the model should know well. It also doesn't seem to detect instances correctly on the test dataset. Could you give me some tips on improving its performance?
Also, along the same lines as the accuracy issue, the model places bounding boxes a lot smaller than the actual objects. Which threshold can I adjust to control the bounding boxes?
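For reference, my understanding of the usual post-processing in this family of demos is roughly the sketch below: a confidence threshold plus non-maximum suppression. The 0.7/0.3 values are just my guesses, not the repo's defaults, and as far as I can tell these control which boxes survive rather than their size; I include it only to show which knobs I mean.

```python
# Hedged sketch: filter raw detections by a confidence threshold and apply NMS.
# `boxes` and `scores` stand in for whatever the demo's forward pass returns;
# the 0.7 / 0.3 values are illustrative, not MNC defaults.
import numpy as np

def nms(dets, thresh):
    """Plain numpy non-maximum suppression over [x1, y1, x2, y2, score] rows."""
    x1, y1, x2, y2, scores = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3], dets[:, 4]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        inter = w * h
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[np.where(iou <= thresh)[0] + 1]
    return keep

CONF_THRESH = 0.7   # raise/lower to show fewer/more boxes
NMS_THRESH = 0.3    # overlap threshold for suppressing duplicate boxes

def filter_detections(boxes, scores):
    """boxes: (N, 4) array, scores: (N,) array for one class."""
    dets = np.hstack((boxes, scores[:, np.newaxis])).astype(np.float32)
    dets = dets[nms(dets, NMS_THRESH), :]
    return dets[dets[:, -1] >= CONF_THRESH]
```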
This is a lot of questions, but I appreciate your patience and response. :)