This repository has been archived by the owner on Mar 2, 2022. It is now read-only.

improvements with openvino_tiny-yolov3_test.py #34

Open
naufil601 opened this issue Apr 18, 2019 · 16 comments

@naufil601

Hi,

Thanks for the python script to improve test accuracy for yolov3-tiny.
I trained yolov3-tiny with Darknet on my own dataset, changing the class labels to match my data. I converted the model to a frozen .pb file, then converted that .pb model to OpenVINO IR (.xml and .bin) using the OpenVINO toolkit.

I'm using your python script (openvino_tiny-yolov3_test.py) to preprocess and postprocess my detections from the Movidius (Intel's Compute Stick), with the labels changed as needed.
The problem is that I'm getting some false positives in the results. Can you please suggest what tweaks I can make to your script so that it adapts to my testing environment?

Thanks for the help.

@PINTO0309
Owner

Please upgrade the version of OpenVINO to 2019 R1. #33

@naufil601
Author

@PINTO0309 Thanks for the quick reply.
I believe there must be some issue with the Myriad plugin in the 2018 version.

But even if I convert the model for CPU, it still gives some false positives with very high confidence. When I test the same video with Darknet, there are no such false positives.

Can you suggest some possible reasons for this?

@PINTO0309
Owner

PINTO0309 commented Apr 18, 2019

I don't know exactly how you generated the .pb, .bin, and .xml files, so I can't answer precisely. Possible causes:

  1. Too few training epochs
  2. Incorrect .cfg definition
  3. Forgetting the "--tiny" option during model conversion
  4. BGR-to-RGB (or RGB-to-BGR) channel-order mismatch
  5. Wrong mean values
  6. Wrong normalization
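Causes 4 to 6 above are all input-pipeline mistakes. As an illustration only, here is a minimal numpy-only sketch of the preprocessing a Darknet-trained YOLOv3-tiny typically expects (the function name is hypothetical and resizing is omitted for brevity; Darknet reads RGB and scales pixels to [0, 1], while OpenCV delivers BGR uint8):

```python
import numpy as np

def preprocess_bgr_frame(frame_bgr):
    """Hypothetical preprocessing sketch for a Darknet-trained YOLOv3-tiny.

    Assumes the frame is already resized to the network input resolution.
    """
    # 1. BGR -> RGB (channel-order mistake is cause 4 above)
    rgb = frame_bgr[:, :, ::-1]
    # 2. uint8 [0, 255] -> float32 [0, 1] (mean/normalization, causes 5 and 6)
    blob = rgb.astype(np.float32) / 255.0
    # 3. HWC -> NCHW, as the Inference Engine plugin expects
    return blob.transpose(2, 0, 1)[np.newaxis, ...]
```

Getting any one of these three steps wrong usually does not crash anything; it just silently degrades confidences, which matches the false-positive symptom described here.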

@derek-zr

Same result as you. I'm using the 2019 R1 version, but still get some false positives. See #32

@naufil601
Author

@derek-zr did you find any way out?

@derek-zr

> @derek-zr did you find any way out?

Still trying. I tested the .pb model and the results are good, but the IR model has many false positives. So I think the problem is in the .bin/.xml conversion.

@naufil601
Author

Yes. Same results here.

@derek-zr

> Yes. Same results here.

I think I found the reasons. I tried the cpp script with the COCO weights and the result is pretty good, so I guess there is some problem in how we modified test.py. If I want to use a local video with a higher resolution, what should I modify in the preprocessing code? @PINTO0309
Then I found that even when I use the cpp version with my own model, there are still some false positives, so I guess the original weights should be retrained for more epochs to be more accurate.

@derek-zr

After some experiments, I found some reasons. First, for images whose aspect ratio is not 1:1, the box-drawing location calculation may be wrong, so some boxes are displaced. Second, I wonder whether the preprocessing that keeps the aspect ratio is even necessary, because there are still a lot of false positives with my own model, and the COCO model is not very accurate either.
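For non-square frames the usual Darknet approach is letterboxing: resize while keeping the aspect ratio, pad to a square, and then map the detected boxes back through the same scale and padding. A minimal numpy-only sketch (function names are hypothetical, not the actual code in openvino_tiny-yolov3_test.py; nearest-neighbour resize is used to avoid an OpenCV dependency):

```python
import numpy as np

def letterbox(image, input_size=416):
    """Resize keeping aspect ratio, pad with gray (Darknet-style letterbox)."""
    h, w = image.shape[:2]
    scale = min(input_size / w, input_size / h)
    new_w, new_h = int(w * scale), int(h * scale)
    # Nearest-neighbour resize via index arrays, numpy only
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[ys][:, xs]
    canvas = np.full((input_size, input_size, 3), 128, dtype=image.dtype)
    top = (input_size - new_h) // 2
    left = (input_size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas, scale, left, top

def unletterbox_box(x, y, bw, bh, scale, left, top):
    """Map a box from padded-input coordinates back to original-image pixels."""
    return (x - left) / scale, (y - top) / scale, bw / scale, bh / scale
```

If the drawing code forgets to subtract the padding offsets (`left`, `top`) before dividing by `scale`, boxes come out displaced exactly as described above, and only on non-1:1 inputs.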

@PINTO0309
Owner

I recognize that there is an aspect-ratio bug.
Please modify the cpp program referring to the Python program.

@derek-zr

derek-zr commented Apr 24, 2019

> I recognize that there is an aspect-ratio bug.
> Please modify the cpp program referring to the Python program.

Thanks for your reply. I also found that the cpp version doesn't have the same preprocessing. But even when I use the Python preprocessing, there are still a lot of false positives with my own model.

@derek-zr

After a longer training process, the new model still performs badly. The .pb model performs great, but the IR model produces many false positives.

@naufil601
Author

@derek-zr I trained my network down to a loss of 0.5 and am still getting false positives with this python script.

@derek-zr

Yeah. My loss is even lower, but there are still some false positives and the detection results are bad. I think it's a bug in Intel's conversion python code.

@derek-zr

derek-zr commented May 8, 2019

Have you solved it? One author on the Intel forum says it may be a bug in the logistic layer code. https://software.intel.com/en-us/forums/computer-vision/topic/808504#comment-1938506
I tried changing the code, but the results are still bad.

@ybloch

ybloch commented Mar 28, 2020

I have the same issue: bad results with the IR model, good results with TF...
Did anyone find a solution for this?
