@irfan-gif YOLOv5 models can run inference at any image size up to the size they were trained on (1280 for the P6 models shown above). Running inference at a reduced size is faster but less accurate. The plot above shows evaluations of each P6 model across a range of image sizes, and the figure is reproducible using the commands listed in the Figure Notes below it.
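For instance, here is a minimal sketch of reduced-size inference through the standard torch.hub interface; the model variant, test image URL, and size value are illustrative rather than taken from the thread:

```python
# Minimal sketch: load a P6 model (trained at 1280) and run inference
# at a smaller image size. Smaller sizes trade accuracy for speed.
import torch

# Load the small P6 model from the ultralytics/yolov5 hub entry.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s6', pretrained=True)

# Run inference at a reduced size (640 instead of the native 1280).
results = model('https://ultralytics.com/images/zidane.jpg', size=640)
results.print()
```

Passing a larger value for `size` (up to 1280 for these models) would give slower but more accurate predictions.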
Can someone explain this to me?