Detection differences between YOLO PyTorch frameworks? #83
Could you show example output from the two repositories?
Sure. This is what I get with YOLOR (accurate detections): This is what I get with the YOLOv3 repo:
Oh, I meant the output images with predictions.
By the way, I think the main reason is the inference size.
So you were right: I was using the default inference sizes in each repo, which is why the results differed. This link was useful for understanding inference size: ultralytics/yolov3#232. With identical inference sizes I now get similar results, although one class is never detected with the YOLOv3 repo while it is detected with YOLOR. This is probably a bug in the YOLOv3 repo, since I would expect to be able to run inference with any of these PyTorch repos (YOLOv3, YOLOv4, or YOLOR) and get the same results.
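To illustrate why the default inference sizes produce different results: Darknet-style YOLO repos resize each image so its longer side matches the requested img-size, preserving aspect ratio and padding to a multiple of the model stride ("letterboxing"). A minimal sketch of that shape calculation, assuming ultralytics-style letterbox logic (the function name and details here are illustrative, not either repo's actual API):

```python
# Sketch of how the inference size ("img-size") changes the network input.
# Mimics the aspect-ratio-preserving resize + stride padding ("letterbox")
# used by ultralytics-style YOLO repos; illustrative, not the repos' code.

def letterbox_shape(orig_hw, img_size, stride=32):
    """Return the (h, w) an image is resized and padded to for a given img_size."""
    h, w = orig_hw
    scale = img_size / max(h, w)          # scale the longer side to img_size
    new_h, new_w = round(h * scale), round(w * scale)
    # pad each dimension up to a multiple of the model stride
    pad_h = (stride - new_h % stride) % stride
    pad_w = (stride - new_w % stride) % stride
    return new_h + pad_h, new_w + pad_w

# The same 1080x1920 photo fed at two different inference sizes:
print(letterbox_shape((1080, 1920), 416))  # -> (256, 416)
print(letterbox_shape((1080, 1920), 640))  # -> (384, 640)
```

Because the network sees a very different input resolution in each case, small objects in particular can be detected at one size and missed at the other, so matching the img-size flag across repos is essential for a fair comparison.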
I recently used the archived ultralytics YOLOv3 repository to convert Darknet weights to PyTorch weights, then ran inference on a set of images.
Then I used this YOLOR repository with the converted YOLOv3 PyTorch weights (and cfg file) to run inference on the same dataset: the results appear to be much better, and detection is more accurate.
I am wondering why results are better with this repository: what is the difference between these two detectors? How come I can run inference on YOLOv3 weights with the YOLOR repository? I assume YOLOR reads my cfg file, detects that these are YOLOv3 weights, and then runs YOLOv3 inference on my images, but why would the results be better than with the YOLOv3 repo?
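On the interoperability question: Darknet-compatible PyTorch repos (the ultralytics YOLOv3 repo and YOLOR among them) build their model graph from the Darknet .cfg file rather than hard-coding the architecture, which is why YOLOv3 cfg/weights load in a YOLOR codebase. A minimal sketch of such a cfg parser, illustrative only and not either repo's actual implementation:

```python
# Minimal sketch of a Darknet .cfg parser, similar in spirit to the cfg
# parsing done by Darknet-compatible PyTorch repos (YOLOv3, YOLOR).
# Illustrative only; not the repos' actual implementation.

def parse_cfg(text):
    """Parse Darknet cfg text into a list of {'type': ..., key: value} blocks."""
    blocks = []
    for line in text.splitlines():
        line = line.split('#')[0].strip()   # drop comments and whitespace
        if not line:
            continue
        if line.startswith('['):            # new section, e.g. [convolutional]
            blocks.append({'type': line.strip('[]')})
        else:                               # key=value pair in current section
            key, value = line.split('=', 1)
            blocks[-1][key.strip()] = value.strip()
    return blocks

cfg = """
[net]
width=416
height=416

[convolutional]
filters=32
size=3
activation=leaky
"""
layers = parse_cfg(cfg)
print(layers[0]['type'], layers[0]['width'])    # net 416
print(layers[1]['type'], layers[1]['filters'])  # convolutional 32
```

Since both repos reconstruct the same architecture from the same cfg and load the same weights, differences in their output come from the surrounding pipeline (preprocessing such as inference size and letterboxing, confidence/NMS thresholds) rather than from the model itself.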