How to train Faster R-CNN on my own dataset? #243
Hi, I did the same thing. At first you should work through the code and check where and how the functions are called, and you should try demo.py. Afterwards, the readme has a section called "Beyond the demo" which explains the basic procedure. Additionally, you should search the issues in this repo; there are actually quite a lot of similar issues asking the same question. Furthermore, here is a really good documentation of "how to train on your own dataset", which helped me a lot. Finally, I'll sum up the main steps for you:
These are just the main steps I figured out during my work with the framework. It will take some time to get into it, and several problems will occur when using the framework with your own dataset. Most of these problems are already addressed in other issues in this repo. It might also be very helpful to use a Python IDE that supports debugging. Hope that helps. =) |
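For the "own dataset" steps above, the usual entry point is registering a new imdb in lib/datasets/factory.py. Below is a minimal sketch, assuming a hypothetical datasets/my_dataset.py module whose class subclasses datasets.imdb.imdb and implements gt_roidb(), image_path_at(), and the other hooks; the names are placeholders, not part of the stock repo:

```python
# Sketch of a factory registration, mirroring the stock
# lib/datasets/factory.py. The module/class "my_dataset" is hypothetical.
from datasets.my_dataset import my_dataset

__sets = {}

# Register one constructor per split so that --imdb my_dataset_train,
# my_dataset_val, etc. can be resolved by name.
for split in ['train', 'val', 'test']:
    name = 'my_dataset_{}'.format(split)
    __sets[name] = (lambda split=split: my_dataset(split))


def get_imdb(name):
    """Return an imdb instance by name, as the training scripts expect."""
    if name not in __sets:
        raise KeyError('Unknown dataset: {}'.format(name))
    return __sets[name]()
```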
Hi @ednarb29, thanks sincerely for your answer, I will try it now. Hope I can do it. |
You can easily check that: the file should be under FRCN_ROOT/data/cache/. Of course, if this file is huge it needs some time even to load the cache file, I guess. Maybe you should debug that. As a simple approach, you can delete the cache file and start training again; then you can compare the time it needs to create the dataset versus load the cache file. |
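For reference, a quick way to see what is actually in that cache before deleting it; the file name my_dataset_train_gt_roidb.pkl is only an example (the real name depends on your imdb), and this assumes the Python 2 environment py-faster-rcnn normally runs in:

```python
import cPickle
import os

# Hypothetical cache file; the real one is <imdb_name>_gt_roidb.pkl
# under FRCN_ROOT/data/cache/.
cache_path = os.path.join('data', 'cache', 'my_dataset_train_gt_roidb.pkl')

if os.path.exists(cache_path):
    with open(cache_path, 'rb') as f:
        roidb = cPickle.load(f)
    print('{} cached roidb entries, {:.1f} MB on disk'.format(
        len(roidb), os.path.getsize(cache_path) / 1e6))
    # Deleting the file forces the imdb to rebuild the roidb on the next run:
    # os.remove(cache_path)
else:
    print('No cache file found; it will be created on the next run.')
```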
Hi @ednarb29, I have tried the method you described. There are some errors about selective_search that I can't handle, like the following. |
@JohnnyY8: Can you paste here the configuration information that is printed on the terminal? I guess that your configuration file still chooses selective search as the proposal method. |
@tiepnh Hi! You are right. According to the tutorial "https://github.com/deboc/py-faster-rcnn/tree/master/help", I used the command ($ echo 'MODELS_DIR: "$PY_FASTER_RCNN/models"' >> config.yml) to generate config.yml. But if I change it to "experiments/cfgs/faster_rcnn_end2end.yml", it looks ok. |
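To double-check which proposal method a given config file selects before starting a run, something like this sketch can help; it assumes FRCN_ROOT/lib is on PYTHONPATH, and the yml path is just an example:

```python
# Sketch: inspect the merged configuration, assuming FRCN_ROOT/lib is
# importable so fast_rcnn.config resolves.
from fast_rcnn.config import cfg, cfg_from_file

cfg_from_file('experiments/cfgs/faster_rcnn_end2end.yml')  # example path

# End-to-end training is expected to use ground-truth boxes ('gt') as
# proposals; 'selective_search' would trigger the errors discussed above.
print('TRAIN.PROPOSAL_METHOD = {}'.format(cfg.TRAIN.PROPOSAL_METHOD))
print('MODELS_DIR = {}'.format(cfg.MODELS_DIR))
```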
@tiepnh @ednarb29 I can start training, it looks close to the right way. I will check it on the validation set after finishing training. Thanks for your help, guys!!! |
@JohnnyY8: This array points to your image set files. From your pasted code, there is no image set file for testing, or the same image set is used for both training and testing. |
@tiepnh Cool! Your answer is very useful and clear! Thanks so much! |
@tiepnh Hi!
but still got the following errors: Is there something wrong? |
TEST_IMDB just points to the set of images used for testing. So if you use the same image set for TRAIN_IMDB and TEST_IMDB, it will train and test the network on the same dataset. The "max_overlaps" error suggests that your data has no foreground or background ROIs, so please check again the .py file you use to read your dataset. |
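A rough way to sanity-check that point is to count usable foreground/background boxes per roidb entry. This is only a sketch: the imdb name my_dataset_train is hypothetical, and it assumes your imdb stores gt_overlaps as a sparse matrix the way the stock pascal_voc class does:

```python
import numpy as np
from fast_rcnn.config import cfg
from datasets.factory import get_imdb

imdb = get_imdb('my_dataset_train')  # hypothetical imdb name
roidb = imdb.gt_roidb()

bad = []
for i, entry in enumerate(roidb):
    overlaps = entry['gt_overlaps'].toarray()
    if overlaps.size == 0:
        bad.append(i)  # no boxes at all for this image
        continue
    max_overlaps = overlaps.max(axis=1)
    fg = np.sum(max_overlaps >= cfg.TRAIN.FG_THRESH)
    bg = np.sum((max_overlaps >= cfg.TRAIN.BG_THRESH_LO) &
                (max_overlaps < cfg.TRAIN.BG_THRESH_HI))
    if fg == 0 and bg == 0:
        bad.append(i)  # ROI sampling would fail on this image

print('{} of {} roidb entries have no usable foreground/background ROIs'
      .format(len(bad), len(roidb)))
```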
@tiepnh Thank you so much! You are so nice. |
@tiepnh @ednarb29 Hi! |
First, I would suggest starting training and testing with a very small data set (100 images and 1k iterations), so that you can debug the training and testing quite fast. Does the problem occur during creation of the data set or during training? |
@ednarb29 I am not quite sure. Several times before, I could load the data in about 2~4 hours (it also loaded repeatedly). But this time is stranger: we did not change any code, just restarted the training, and the time for loading the data is very long! |
@ednarb29 Do you just load the data once after starting training? |
I am not sure about that, because this kind of problem did not occur for me... If I had problems with loading the data set, I just removed the cache file, and that solved the problem in most cases, because changes to the original data set are not updated in the cache file. Sorry dude. |
Hi @JohnnyY8, |
@ednarb29 Don't be sorry, I should thank you! |
@deboc That is right. I will try it. Thank you! |
I just bet it's not negligible. |
@deboc Oh, I see. We only added print statements, so that makes it even stranger for us. |
Did removing the print command speed up the process? And did removing the cache file and building the database again solve your problem with the |
@ednarb29 I didn't try to remove the print command, because I really want to see the process; I guess the time it consumes is negligible. |
Cool, so if it works fine you can close the issue? =) |
@ednarb29 Sure, thank you very much! |
@deboc, I have a quick question. I got the following error when I executed the following command: Command: Error:
I read that there's basically a difference in the input size that the network has been set up to expect. The one thing that I can imagine is that I am using the Faster R-CNN VGG16 model (data/faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel)? Is it possible to use this model instead of the one you mentioned (data/imagenet_models/VGG_CNN_M_1024.v2.caffemodel)? P.S. Thank you for that awesome tutorial! |
Hi GeorgiAngelov, |
inds = np.reshape(inds, (-1, 2)): because the second dimension of the reshape is 2, you should use only an even number of images in the data set. |
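That reshape is why an odd image count breaks: numpy cannot fold an odd-length index array into pairs. A tiny standalone illustration:

```python
import numpy as np

inds = np.random.permutation(np.arange(10))  # 10 images: even count works
print(np.reshape(inds, (-1, 2)).shape)       # (5, 2)

inds = np.random.permutation(np.arange(9))   # 9 images: odd count fails
try:
    np.reshape(inds, (-1, 2))
except ValueError as e:
    print('Reshape failed: {}'.format(e))    # size 9 cannot form pairs
```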
@GeorgiAngelov The tutorial of @deboc uses the ImageNet model VGG_CNN_M_1024.v2.caffemodel. You can get it by following the steps here: https://github.com/deboc/py-faster-rcnn#download-pre-trained-imagenet-models. |
Thanks I had the same problem:
I deleted the cache file and it is now running. |
What tool should I use to create imdb files? |
@ednarb29, removing the cache file fixed the problem for me regarding the |
@ArturoDeza |
@VanitarNordic , I don't think there's a quick recipe for that. I've been following this setup: |
@ArturoDeza |
@VanitarNordic What is the error you've been getting? You should create a new issue with the error you get when you run the end2end training script; that way we can be more helpful. |
@ArturoDeza |
Hi! Can anyone help me with that? |
I"m using INRIA Person data set. After running below command ./tools/train_faster_rcnn_alt_opt.py --gpu 0 --net_name INRIA_Person --weights data/imagenet_models/VGG_CNN_M_1024.v2.caffemodel --imdb inria_train --cfg config.yml I got a error Can you please let me know reason behind this error |
Do you have any solutions for this error? Thanks |
@medhani It's not finding any images, which means either the path to your images is wrong, or there are no images listed in your image set text file. |
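A small sketch to find out which of the two it is; the image set file, image directory, and .png extension are assumptions that should be adapted to your dataset layout:

```python
import os

# Hypothetical paths; adjust them to your own dataset layout.
image_set_file = 'data/INRIA_Person_devkit/data/ImageSets/train.txt'
image_dir = 'data/INRIA_Person_devkit/data/Images'

with open(image_set_file) as f:
    names = [line.strip() for line in f if line.strip()]

missing = [n for n in names
           if not os.path.exists(os.path.join(image_dir, n + '.png'))]

print('{} image names listed, {} missing on disk'.format(len(names), len(missing)))
if missing:
    print('First missing entries: {}'.format(missing[:5]))
```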
@Roskgp96 Have you been able to find a solution for the error below? |
I used another modification of Faster R-CNN in TF, and it saves the permutation into snapshots. In my case, I actually traced the code and found out that I was using an OLD permutation loaded with my snapshot. That means, if you modified the number of testing or training data, it is possible that you would access outside the permutation array, return a zero index, and then load nothing from the roidb. A simple solution is to delete all snapshots or modify the permutation in your train_val.py after it is loaded. Hope it helps. |
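A rough sketch of that fix, with all names hypothetical since it depends on the particular TF port: after restoring a snapshot, regenerate the permutation whenever its length no longer matches the current roidb.

```python
import numpy as np

def fix_restored_perm(perm, roidb):
    """Return an index permutation that is valid for the current roidb.

    perm  -- permutation restored from an old snapshot (possibly stale)
    roidb -- roidb built from the *current* dataset
    """
    if perm is None or len(perm) != len(roidb):
        # The dataset size changed since the snapshot was written; a stale
        # permutation would index outside the roidb and load nothing.
        perm = np.random.permutation(np.arange(len(roidb)))
    return perm
```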
@deboc Apologies for digging up an old discussion topic, but you mentioned that we have the option to reuse a pre-trained model that already classifies our objects OR train our own model from scratch. Would that put any restrictions on how we train our faster R-CNN? Would the joint approximation (end-2-end) approach be better than the alternate training method? |
Hi, |
Excuse me. When I trained my own model and used it to run demo.py for detection, the results were all white, including the image, whenever the image was very large (5000 x 3000 pixels). If the image is not too large, there is no problem. What is the reason?
@mantou22 Sorry, I do not understand: what do you mean by "the results were all white"? |
have you fixed it? |
Hey, I have the same problem. Have you fixed it? |
Hi everyone:
I want to train Faster R-CNN on my own dataset. Because Faster R-CNN does not use the selective search method, I commented out the code about selective search. However, there are still some errors about the roidb, and so on.
Can anybody help me? I am not quite sure what I should do to train Faster R-CNN. It is a little complicated for me.
Thanks so much!
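On the original question: instead of commenting out the selective search code, the common route (taken by experiments/cfgs/faster_rcnn_end2end.yml) is to train end to end with ground-truth boxes as proposals. Below is a minimal sketch of setting that through the config API; the keys are from the stock lib/fast_rcnn/config.py, but treat the snippet as illustrative rather than a drop-in fix:

```python
# Sketch: select ground-truth proposals so no selective-search code
# needs to be touched. Equivalent to putting
#   TRAIN:
#     HAS_RPN: True
#     PROPOSAL_METHOD: gt
# in the yml passed via --cfg, or using the --set flag of the train scripts.
from fast_rcnn.config import cfg, cfg_from_list

cfg_from_list(['TRAIN.PROPOSAL_METHOD', 'gt', 'TRAIN.HAS_RPN', 'True'])

print('PROPOSAL_METHOD = {}'.format(cfg.TRAIN.PROPOSAL_METHOD))
print('HAS_RPN = {}'.format(cfg.TRAIN.HAS_RPN))
```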