setting NUM_CLASSES #27
Comments
Thanks for the tips at the beginning. That's strange. I've seen your error log. Could you remove the ',' after 2 (which makes it a tuple, but we need an int) and try modifying that line directly?
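For anyone hitting the same trap: a trailing comma in Python silently turns an int into a one-element tuple. A minimal illustration (the variable name is just for demonstration, not the actual config key):

```python
# A trailing comma creates a one-element tuple -- an easy typo in a config file.
NUM_CLASSES_WRONG = 2,   # this is the tuple (2,), not the int 2
NUM_CLASSES_RIGHT = 2    # a plain int, which the config expects

print(type(NUM_CLASSES_WRONG))  # <class 'tuple'>
print(type(NUM_CLASSES_RIGHT))  # <class 'int'>
```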
Hmm, something new came up. New errors...
The message: "Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you." What is your category ids in annotations? |
I didn't define it yet. That should be fine, right? I've used the same setting for CenterMask2, which is also built on the Detectron2 framework.
You don't need the background class in the annotation file. I think the problem with the converted json comes from an incorrect image size setting. This is the original:
It could be something related to a wrong size, but in both the converted and the original json files the sizes are the same.
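One way to verify that claim programmatically is to compare the `images` entries of the two files; a sketch with in-memory stand-ins (load your real json files instead):

```python
# Compare per-image (width, height) between an original and a converted
# COCO-style annotation dict. Illustrative data, not the poster's files.
original = {"images": [{"id": 1, "width": 640, "height": 480}]}
converted = {"images": [{"id": 1, "width": 640, "height": 480}]}

orig_sizes = {im["id"]: (im["width"], im["height"]) for im in original["images"]}
conv_sizes = {im["id"]: (im["width"], im["height"]) for im in converted["images"]}

# Image ids whose recorded size differs between the two files.
mismatched = {i for i in orig_sizes if orig_sizes[i] != conv_sizes.get(i)}
print(mismatched)  # set() -> the sizes agree
```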
Aha, we can use that, then. Now I'm facing this issue: I think I can close this issue very soon.
Great. Leave a message here when you have new progress.
I'm not sure whether I should report the CUDA issue in this ticket, but the issue I've mentioned seems fatal for your open-source project. As you know, one solution to the problem is to reduce the batch size, but I've already reduced it to 1... I've tried updating PyTorch to 1.7, but the Detectron2 you've used is obviously an ancient version that doesn't support PyTorch 1.7 and some support functions (like ...). Please consider providing a stable environment in a Docker image, or even a requirements .txt, or updating Detectron2 to a newer version. But anyway, thanks again for your work.
That's strange. I didn't meet this issue on my machine, and some other users have trained on their own datasets and already reported results in the issues. Maybe you could just take the mask head design code and combine it with some codebase that you can train/inference with smoothly. Thanks for following the work.
I think you must have a local CUDA installation on your machine. Try using another server; then you'll know whether this is something that depends on your luck...
That is also strange for me... and I've noticed he might be using Windows for training... what a hacker...
That is not quite realistic for many researchers... I think it might take me more than two weeks of trying and coding.
I actually trained/tested the model on two different servers (because of a change of workplace) and both worked fine. I guess the user who reported results just used Windows for showing them (you could probably contact him). Maybe we can wait for some more users to report their usage as well. Thanks a lot for the feedback.
I also encountered the same problem as you. How did you convert the json file to remove the background? Do you have a conversion script?
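A conversion along those lines could look like the sketch below. This is my own assumption of what such a script does, not the poster's actual file: it drops a category named "background" and remaps the remaining ids to be contiguous starting at 1, as Detectron2 expects.

```python
# Drop a "background" category from a COCO-style dict and remap the
# remaining category ids to contiguous ids starting at 1.
# Illustrative in-memory data; in practice json.load/json.dump real files.
coco = {
    "categories": [
        {"id": 0, "name": "background"},
        {"id": 1, "name": "cat"},
        {"id": 5, "name": "dog"},
    ],
    "annotations": [
        {"id": 1, "category_id": 1},
        {"id": 2, "category_id": 5},
    ],
}

kept = [c for c in coco["categories"] if c["name"] != "background"]
# Map old ids -> new contiguous ids 1..N, preserving the original order.
id_map = {c["id"]: i + 1 for i, c in enumerate(sorted(kept, key=lambda c: c["id"]))}

coco["categories"] = [dict(c, id=id_map[c["id"]]) for c in kept]
for ann in coco["annotations"]:
    ann["category_id"] = id_map[ann["category_id"]]

print([c["id"] for c in coco["categories"]])            # [1, 2]
print([a["category_id"] for a in coco["annotations"]])  # [1, 2]
```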
In the beginning I would like to give others some notice: even though you've installed PyTorch via Anaconda with cudatoolkit, that is just for PyTorch, not for Detectron2. Please consider installing the CUDA package locally or using Docker.
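A quick way to check whether a full local CUDA toolkit is available for compiling Detectron2's CUDA extensions (a sketch of an environment check, not an official procedure; conda's `cudatoolkit` package ships runtime libraries but not the `nvcc` compiler):

```shell
# Building Detectron2's CUDA ops needs the full toolkit with nvcc;
# the conda "cudatoolkit" package alone is not enough.
nvcc --version || echo "nvcc not found -- install the CUDA toolkit locally"
echo "CUDA_HOME=${CUDA_HOME:-unset}"   # build scripts often read CUDA_HOME
```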
question 1:
train_log_init.txt
I've found that you've noted we should change `MODEL.ROI_HEADS.NUM_CLASSES` and `MODEL.RETINANET.NUM_CLASSES`. I've changed them in `detectron2/config/defaults.py`, and also tried to add the params in `all.sh` by appending `MODEL.ROI_HEADS.NUM_CLASSES 2, MODEL.FCOS.NUM_CLASSES 2, MODEL.RETINANET.NUM_CLASSES 2` for my 2 classes (background not included), but none of them helps...
The error:
AssertionError: A prediction has category_id=62, which is not available in the dataset.
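For reference, Detectron2's launch scripts take config overrides as space-separated KEY VALUE pairs with no commas between them; an invocation might look roughly like this (the script name and config path are placeholders for whatever `all.sh` actually calls):

```shell
# Hypothetical invocation -- substitute the real script and config paths.
# Note: plain space-separated KEY VALUE pairs, no commas (a trailing comma
# becomes part of the value and turns the int into a tuple).
python train_net.py --config-file configs/my_config.yaml \
    MODEL.ROI_HEADS.NUM_CLASSES 2 \
    MODEL.FCOS.NUM_CLASSES 2 \
    MODEL.RETINANET.NUM_CLASSES 2
```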
question 2:
The training seems to stop immediately. I've changed `MAX_ITER` in the yaml file, but it didn't help. I think both problems could be related, because the model is not trained for 2 classes.
The log file is attached. Many thanks for your help!