I would like to understand how MMDetection resizes and pads images, and how `keep_ratio` comes into play. Consider the config:

If I have an image that is 1400x1900, what happens here?

- Does it resize it to Mx640, where M is the short side scaled by the same ratio?
- Does the image get padded?
- What is the output shape?
- Is it possible to mimic square training the way YOLOv5 does it, i.e. resize to the target 640x640 and keep the aspect ratio by padding?
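For concreteness, here is a rough sketch of what I believe `keep_ratio=True` resizing computes, next to a YOLOv5-style letterbox for comparison. This is not MMDetection's actual code; `rescale_size` and `letterbox` are hypothetical helpers written from my reading of the behavior:

```python
import numpy as np

def rescale_size(old_size, scale):
    """Sketch of keep-ratio rescaling as I understand it: fit the long
    edge to max(scale) and the short edge to min(scale), taking the
    smaller of the two factors so the image never exceeds the target."""
    w, h = old_size
    factor = min(max(scale) / max(h, w), min(scale) / min(h, w))
    return int(w * factor + 0.5), int(h * factor + 0.5)

def letterbox(img, new_shape=(640, 640), pad_value=114):
    """YOLOv5-style square letterbox (sketch): resize keeping the
    aspect ratio, then pad the short side out to new_shape."""
    h, w = img.shape[:2]
    r = min(new_shape[0] / h, new_shape[1] / w)
    new_h, new_w = int(round(h * r)), int(round(w * r))
    # nearest-neighbour resize via index arrays (no cv2 dependency)
    rows = np.clip((np.arange(new_h) / r).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / r).astype(int), 0, w - 1)
    resized = img[rows[:, None], cols]
    out = np.full((new_shape[0], new_shape[1], img.shape[2]),
                  pad_value, dtype=img.dtype)
    top = (new_shape[0] - new_h) // 2
    left = (new_shape[1] - new_w) // 2
    out[top:top + new_h, left:left + new_w] = resized
    return out

# A 1400x1900 image with img_scale=(640, 640) and keep_ratio=True
# would come out as roughly 472x640 -- no padding to a square:
print(rescale_size((1400, 1900), (640, 640)))  # -> (472, 640)

# The letterbox variant instead always returns 640x640, gray-padded:
print(letterbox(np.zeros((1900, 1400, 3), np.uint8)).shape)  # -> (640, 640, 3)
```

If this sketch matches the real behavior, the answer to my own first question would be "yes, Mx640 with M from the ratio, no padding to a square" — but I would like confirmation.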
Additional
It seems that the `pytorch2onnx` results, for both ONNX and PyTorch, resize the input image to the `test_cfg` image size, and the predictions are in that resized image's coordinates, not rescaled to the original image resolution. It also seems that the image is not padded.
Meanwhile, `image_demo` provides outputs rescaled to the original input shape.
Could you please elaborate on this? It is rather confusing.
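If my reading is right, i.e. the `pytorch2onnx` outputs live in the resized-image coordinate frame, then mapping them back to the original resolution should just be a division by the stored scale factor, which is what `image_demo` effectively ends up doing. A minimal sketch of that mapping (`rescale_back` is a hypothetical helper, not an MMDetection API):

```python
import numpy as np

def rescale_back(bboxes, scale_factor):
    # bboxes: (N, 4) [x1, y1, x2, y2] in resized-image coordinates.
    # scale_factor: a (w_scale, h_scale, w_scale, h_scale) tuple, as
    # produced during the resize step (assumed layout, for illustration).
    return bboxes / np.asarray(scale_factor)

# e.g. a box predicted on a ~472x640 resized version of a 1400x1900
# image, mapped back to the original resolution:
sf = 640 / 1900  # the keep-ratio scale factor for that image
boxes = np.array([[47.2, 64.0, 472.0, 640.0]])
print(rescale_back(boxes, (sf, sf, sf, sf)))
```

So the confusing part is only *where* this division happens: in `image_demo` it appears to be applied, while in the ONNX export path it apparently is not.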