
I'm using yolov3.weights to test data/sample/, and I found some wrong bboxes that differ from this project's results #821

Closed
J-LINC opened this issue Mar 29, 2023 · 8 comments


J-LINC commented Mar 29, 2023

I am testing on /data/sample using the weights trained on the COCO dataset, but the results I get are somewhat different from those shown by the author on the homepage. Why is that?
[Screenshots: 2023-03-29 12-18-00, 2023-03-29 12-18-22, 2023-03-29 12-18-38]

Flova (Collaborator) commented Mar 29, 2023

What weights did you use? Did you train from scratch or did you use the official darknet yolov3 ones?

Flova (Collaborator) commented Mar 29, 2023

Also make sure that the image size is 608 and not 416 (default).
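The input-size point matters because darknet-style pipelines letterbox each image (scale the longer side to the network size, then pad the shorter side) before inference. As a minimal sketch of that preprocessing, assuming nothing about this repo's own utilities (the function name, grey padding value 128, and nearest-neighbor resize here are illustrative choices, not the project's exact code):

```python
import numpy as np

def letterbox(img: np.ndarray, target: int = 608) -> np.ndarray:
    """Scale the longer side to `target` (nearest-neighbor), then pad
    the shorter side symmetrically so the output is target x target."""
    h, w = img.shape[:2]
    scale = target / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbor resize via integer index maps
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # pad to a square with mid-grey (128), as darknet-style letterboxing does
    out = np.full((target, target) + img.shape[2:], 128, dtype=img.dtype)
    top, left = (target - nh) // 2, (target - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out
```

Running the same weights with `target=416` versus `target=608` changes the effective resolution the detector sees, which is why small objects can be missed or boxed differently at 416.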

J-LINC (Author) commented Mar 29, 2023

> What weights did you use? Did you train from scratch or did you use the official darknet yolov3 ones?

I'm not using my own trained model; I'm using the official pretrained weights, yolov3.weights.

J-LINC (Author) commented Mar 29, 2023

> Also make sure that the image size is 608 and not 416 (default).

I was indeed using 416×416. I'd like to know whether a model trained at 416×416 really predicts better on 608×608 images, as it seems to here.

Flova (Collaborator) commented Mar 29, 2023

I think the full-size v3 was trained on 608, IIRC.

J-LINC (Author) commented Mar 29, 2023

> I think the full-size v3 was trained on 608, IIRC.

I tried it and the results improved, but there are still some problems. Do you think this is normal?

[Screenshots: 2023-03-29 19-22-21, 2023-03-29 19-22-38]

Flova (Collaborator) commented Mar 29, 2023

I think the errors are at the level I would expect from the v3 model. But I'm wondering why the boxes are slightly different now. It could be a numerical difference due to newer library versions etc., as I can't think of any significant changes to this part of the code. I also just evaluated the weights on COCO and got an mAP of 0.57653, which is slightly better than the value in the README.

[Screenshot: COCO evaluation output]
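For context on where an mAP number like 0.57653 comes from: per-class average precision (AP) is computed as the area under the precision-recall curve of the ranked detections, then averaged over the 80 COCO classes. A minimal sketch of the AP step (this mirrors the common VOC-style interpolated computation; the repo's exact evaluation code may differ):

```python
import numpy as np

def average_precision(recall: np.ndarray, precision: np.ndarray) -> float:
    """Area under the precision-recall curve, with precision made
    monotonically decreasing (VOC-style interpolation)."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # interpolate: each precision value becomes the max over higher recalls
    for i in range(p.size - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # sum rectangle areas at the points where recall increases
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

A detector that finds every object with no false positives gives AP = 1.0; small shifts in box coordinates can flip borderline matches at the IoU threshold, which is one way minor numerical differences move the final mAP slightly.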

J-LINC (Author) commented Mar 31, 2023

> I think the errors are at the level I would expect from the v3 model. [...]

Okay, thank you for your timely response!

J-LINC closed this as completed Mar 31, 2023