How to filter only stop sign and pedestrian #29
Can someone please specify the arguments to pass to mergeAnnotationFiles.py for doing this? Whenever I run it, the output always comes back: "No annotation files found, exiting...". Please help me out.
Okay, I found a way of at least running mergeAnnotationFiles.py. I copied it into the dataset directory, and this merged all the annotations from frames present in any subdirectory of that directory.
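For anyone curious what that merge step amounts to, here is a minimal sketch (not the actual mergeAnnotationFiles.py code): walk the dataset tree, read each subfolder's per-frame annotation CSV, and concatenate the rows. The `frameAnnotations.csv` filename and the `;` delimiter are assumptions; check your copy of the dataset.

```python
# Not the real mergeAnnotationFiles.py, just a sketch of the idea.
# Assumes each LISA subfolder contains a semicolon-delimited
# 'frameAnnotations.csv' (both the filename and delimiter are assumptions).
import csv
import os

header, rows = None, []
for root, _dirs, files in os.walk("."):
    if "frameAnnotations.csv" in files:
        with open(os.path.join(root, "frameAnnotations.csv"), newline="") as f:
            reader = csv.reader(f, delimiter=";")
            file_header = next(reader)      # each file repeats the header row
            header = header or file_header
            rows.extend(reader)             # collect the annotation rows
            # Note: the 'Filename' column may need the subfolder prefix
            # added so the image paths stay valid after merging.

with open("allAnnotations.csv", "w", newline="") as out:
    writer = csv.writer(out, delimiter=";")
    writer.writerow(header)
    writer.writerows(rows)
```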
I think we have to modify mergeAnnotationFiles.py so that only the signs we want are put into the CSV file.
@sudonto if you were able to filter only stop and pedestrian signs, please tell me how you did it. I am currently able to extract only pedestrian and stop signs; this creates a folder named annotations, which contains all the pictures where pedestrianCrossing and stopAhead signs are present. But I am still unable to build the annotations CSV for them, even with the other commands I tried.
I will let you know as soon as I have the solution.
Hey @YashBansod, I was able to build only the stop sign and pedestrianCrossing sign annotations into one CSV file. In my case, I used filterAnnotationFile.py first to extract those signs into separate folders, and then executed mergeAnnotationFiles.py to combine the two CSV files (stop and pedestrian sign) into one file. Please try it.
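For reference, a minimal sketch of that filtering step done directly in Python rather than through the LISA tools. It assumes the merged annotation file is semicolon-delimited with an `Annotation tag` column, and the tag names `stop` and `pedestrianCrossing` plus the file names are assumptions you should check against your data.

```python
# Hypothetical helper, not part of the LISA tools: keep only the rows
# whose 'Annotation tag' is one of the signs we care about.
# The tag names and the ';' delimiter are assumptions; check your CSV.
import csv

KEEP_TAGS = {"stop", "pedestrianCrossing"}

with open("allAnnotations.csv", newline="") as src, \
     open("mergedAnnotations.csv", "w", newline="") as dst:
    reader = csv.DictReader(src, delimiter=";")
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames, delimiter=";")
    writer.writeheader()
    for row in reader:
        if row["Annotation tag"] in KEEP_TAGS:
            writer.writerow(row)
```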
Hi @sudonto
Yes, you are absolutely correct about this. I started to analyze each part of dataprep.py a week ago, and I found that there is a line of code that filters out annotation tags other than the desired signs. Now I have two problems: my loss is very large (around 200 after 200 epochs), and the produced model always gives me an error in inference mode (division by zero when calculating the IoU, sigh). I think I'd like to look for another SSD implementation. PS: in case you find another robust SSD implementation that still offers simple code, would you mind sharing it with me? :)
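On the division-by-zero: a common generic guard (not this repo's implementation) is to add a small epsilon to the union when computing the IoU, e.g.:

```python
# Generic IoU sketch, not this repo's code: the epsilon on the union
# avoids division by zero with degenerate or non-overlapping boxes.
def iou(box_a, box_b, eps=1e-9):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / (union + eps)
```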
@sudonto I didn't get any division-by-zero errors and don't really understand why it's happening for you, but you can try re-cloning the repo and making the changes I mentioned in #27, and it should work. Also, my loss was even bigger: around 500 after 200 epochs and around 400 after 2000 epochs. Furthermore, in TensorBoard I could see that my model was overfitting. Instead of working on SSDs, which I was only testing in the first place as an alternative to Faster R-CNNs, I would rather put my effort into YOLO v2. Anyway, I am currently involved in a different project, but if I do find a good SSD implementation, I will post it here.
@sudonto were you able to run it?
Yes, I was able to run it.
Great work! Btw, I want to train the model but I'm stuck at this step:
"Follow instructions in the LISA Traffic Sign Dataset to create 'mergedAnnotations.csv' such that only stop signs and pedestrian crossing signs are shown"
How do I filter out everything except the stop sign and pedestrian crossing annotations? I read mergeAnnotationFiles.py but have no clue. Please help me.
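For anyone else stuck here: before filtering, it can help to confirm which annotation tags your copy of the dataset actually uses. A minimal sketch, assuming the merged LISA CSV is semicolon-delimited with an `Annotation tag` column and is named `allAnnotations.csv` (both assumptions):

```python
# List every sign type in the merged annotation file and how often it
# appears, most common first. File name, delimiter, and column name
# are assumptions; adjust to your copy of the dataset.
import csv
from collections import Counter

with open("allAnnotations.csv", newline="") as f:
    tags = Counter(row["Annotation tag"] for row in csv.DictReader(f, delimiter=";"))

for tag, count in tags.most_common():
    print(tag, count)
```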