
Support for 1280 Input Size in YOLOv9 Model Architecture #111

Closed

MehmetOKUYAR opened this issue Feb 28, 2024 · 4 comments

@MehmetOKUYAR

Hello YOLOv9 Development Team,

I am considering using the YOLOv9 model architecture for a project of mine and am planning to set the input size to 1280x1280 for a particular application. I would like to inquire whether the model supports this input size and, if so, whether using this size has any implications for the model's accuracy or performance.

Have you had the opportunity to test the model with this input size previously?
Are there any recommendations or restrictions for using this size?
If the model supports this size, is there any expected impact on performance or accuracy?
I would greatly appreciate any guidance or suggestions you might have on this matter. Additionally, if there are any specific configurations I should be aware of when using this input size, sharing that information would be very helpful.

Thank you!
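(One general restriction worth noting, assuming YOLOv9 keeps the usual YOLO detection strides of 8/16/32: the input size should be a multiple of the largest stride, which 1280 satisfies. A minimal sketch of that check is below; the helper is illustrative, not the repository's own code.)

```python
import math

def round_to_stride(imgsz: int, max_stride: int = 32) -> int:
    """Round a requested input size up to the nearest multiple of the
    model's largest stride, as YOLO-style detectors expect."""
    new_size = int(math.ceil(imgsz / max_stride) * max_stride)
    if new_size != imgsz:
        print(f"{imgsz} is not a multiple of {max_stride}; using {new_size} instead")
    return new_size

print(round_to_stride(1280))  # 1280 = 40 * 32, so it is used as-is
print(round_to_stride(1300))  # rounded up to 1312
```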


Youho99 commented Mar 9, 2024

YOLOv9 works with 1280x1280 input images, but I don't know what impact this has.

I will run a test this week (I hope, haha) on a dataset with small objects, using 640 and 1280 input sizes, to compare them.


Youho99 commented Mar 13, 2024

@MehmetOKUYAR

I ran the same yolov9-e training for 100 epochs in two different configurations on the same dataset.

Between the two configurations, only the image size changes (1280 vs 640).

My dataset is a mix of VisDrone, DGTA_VisDrone, and some background images (more than 10k images in total). It is important to note that the objects in this dataset are small!

I tracked the metrics with MLflow (more information on MLflow tracking here: #87).
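The exact launch commands were not shared here; a minimal sketch, assuming the repository's train_dual.py script and its YOLOv5-style flags (--img, --epochs, --cfg) with placeholder paths, would look like this:

```python
import subprocess

# Sketch only: the script name, flags, and dataset YAML path are assumptions
# based on the YOLOv9 repository's CLI, not the exact commands from this run.
common = [
    "python", "train_dual.py",
    "--data", "data/visdrone_mix.yaml",       # placeholder dataset config
    "--cfg", "models/detect/yolov9-e.yaml",
    "--weights", "",
    "--epochs", "100",
    "--batch", "16",
]

for imgsz in (640, 1280):
    run_name = f"yolov9-e-{imgsz}"
    subprocess.run(common + ["--img", str(imgsz), "--name", run_name], check=True)
```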

In the image below, we see my mAP_0.5 metric.
In salmon: yolov9-e 1280
In blue: yolov9-e 640

[Screenshot: mAP_0.5 curves over the 100 epochs for the two runs]

I can therefore argue that increasing the input size gives a significant advantage when working with small objects!
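A rough back-of-the-envelope illustration of why this helps (the numbers below are hypothetical, not taken from the experiment above): after letterbox resizing, the same small object covers roughly twice as many pixels per side at 1280 as at 640, and therefore about four times as many cells on each detection feature map.

```python
# Rough illustration with made-up numbers: how many pixels and stride-8 grid
# cells a small object covers after letterbox-resizing a 1920x1080 frame.
src_w, src_h = 1920, 1080
obj_px = 24  # hypothetical small object, 24x24 px in the original frame

for imgsz in (640, 1280):
    scale = imgsz / max(src_w, src_h)   # letterbox scale factor
    obj_resized = obj_px * scale        # object side length after resizing
    cells_stride8 = obj_resized / 8     # cells covered on the stride-8 feature map
    print(f"{imgsz}: ~{obj_resized:.1f} px per side, "
          f"~{cells_stride8:.1f} cells on the stride-8 map")
```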

@MehmetOKUYAR
Author

@Youho99
Thank you very much for sharing your experience with us; it has yielded truly valuable results. The difference in accuracy seems quite significant. Thanks!

@icaroryan

@Youho99 Do you have any other tips for working with small objects, besides increasing the input size?
