
How to get Bounding Boxes? #1583

Closed
tanjary21 opened this issue Dec 28, 2022 · 2 comments
Comments


tanjary21 commented Dec 28, 2022

Hi.

I'm trying to pass an image to YOLOX and get bounding box coordinates for the detected objects, together with confidence scores and class predictions, but the result I'm getting does not seem to make sense.


import torch
import mmcv
from yolox.models import YOLOX

detector = YOLOX().eval()

with torch.no_grad():
    # read a BGR image, reorder HWC -> CHW, and add a batch dimension
    img = torch.permute(torch.Tensor(mmcv.imread('/DATA/train/000/img1/000001.jpg')), (2, 0, 1)).unsqueeze(0)
    result = detector(img)

print(result.shape)  # gives torch.Size([1, 3549, 85]). How to convert this to detection bboxes?

How do I convert the result to bounding box coordinates, confidence scores, and class labels? Please help.


gedance commented Jan 3, 2023

Hello,
For inference, you can take a look at demo/ONNXRuntime/onnx_inference.py or the other scripts in the demo folder. :) I got a lot of help from these sources.

Cheers!
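
For the raw [1, 3549, 85] tensor in the question, the repository also has a torch-side helper, yolox.utils.postprocess (the one tools/demo.py calls), which applies confidence thresholding and NMS. Below is a minimal sketch, not the official demo code: it assumes the head's decode_in_inference is left at its default, COCO's 80 classes, and illustrative thresholds, and it picks up from the result tensor in the snippet above.

from yolox.utils import postprocess  # thresholding + NMS helper used by tools/demo.py

# result: [1, 3549, 85], each row is (cx, cy, w, h, obj_conf, 80 class scores)
dets = postprocess(result, num_classes=80, conf_thre=0.25, nms_thre=0.45)

# postprocess returns one entry per image: None if nothing survives, otherwise an
# [N, 7] tensor with columns (x1, y1, x2, y2, obj_conf, class_conf, class_idx)
if dets[0] is not None:
    boxes  = dets[0][:, 0:4]                # corner coordinates in network-input pixels
    scores = dets[0][:, 4] * dets[0][:, 5]  # objectness * class confidence
    labels = dets[0][:, 6].int()            # class indices

Also note that YOLOX() with no checkpoint loaded is randomly initialized, and the demo scripts resize/pad the image to the model's input size and then divide the boxes by the resize ratio; without those two steps the detections will not look sensible.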

Joker316701882 (Member) commented

This issue is temporarily closed since @gedance has provided the correct answer.
