I'm trying to pass an image to YOLOX and get the bounding-box coordinates of the detected objects, together with confidence scores and class predictions, but the result I'm getting doesn't seem to make sense.
```python
import mmcv
import torch
from yolox.models import YOLOX

detector = YOLOX().eval()  # default backbone/head, 80 COCO classes
with torch.no_grad():
    # mmcv.imread returns an HWC BGR numpy array; permute to CHW and add a batch dim
    img = torch.permute(torch.Tensor(mmcv.imread('/DATA/train/000/img1/000001.jpg')), (2, 0, 1)).unsqueeze(0)
    result = detector(img)
print(result.shape)  # torch.Size([1, 3549, 85]) -- how to convert this to detection bboxes?
```
How do I convert this result to bounding-box coordinates, confidence scores, and class labels? Please help.
Hello,
For inference, you can take a look at `/demo/ONNXRuntime/onnx_inference.py` or the other scripts in the demo folder. :) I got a lot of help from these.
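In case it helps, here is a minimal sketch of the decoding step those demo scripts perform, assuming the default torch model (`YOLOXHead` has `decode_in_inference=True`, so boxes come out as `(cx, cy, w, h)` in input-image pixels). Each of the 3549 rows is `(cx, cy, w, h, objectness, 80 class scores)`; 3549 = 52² + 26² + 13² grid cells, which matches a 416×416 input at strides 8/16/32. `yolox.utils.postprocess` then filters on `objectness × class score` and runs per-class NMS. The thresholds below are placeholder values:

```python
import torch
from yolox.utils import postprocess

NUM_CLASSES = 80        # COCO classes; adjust if your head differs
CONF_THRESHOLD = 0.25   # placeholder, tune for your data
NMS_THRESHOLD = 0.45    # placeholder

with torch.no_grad():
    raw = detector(img)  # [1, 3549, 85]

# postprocess converts (cx, cy, w, h) to corner format, drops detections whose
# objectness * class score falls below CONF_THRESHOLD, and applies per-class
# NMS; it returns a list with one entry (tensor or None) per image
detections = postprocess(raw, NUM_CLASSES, CONF_THRESHOLD, NMS_THRESHOLD)[0]

if detections is not None:
    boxes = detections[:, 0:4]                    # (x1, y1, x2, y2) in pixels
    scores = detections[:, 4] * detections[:, 5]  # objectness * class confidence
    labels = detections[:, 6].int()               # class indices
```

Note that the demo scripts also resize and pad the image (see `preproc` in `yolox/data/data_augment.py`) before inference and divide the boxes by the resize ratio afterwards; feeding the raw, unresized image as in your snippet can be one reason the outputs look wrong. Running `YOLOX()` without loading a pretrained checkpoint (i.e. with random weights) would be another.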