Batch inference using the default detectron2 model was previously discussed in #282. I was wondering whether it is possible to do the same with the exported model (ONNX or Caffe2).
When I pass in a batch of images, the final detections all come back stacked together, so I can't tell which detections belong to which image. For example, if I pass in 3 images and get 20 detections back, there is no way to know how many of them belong to the 1st image, how many to the 2nd, and so on.
Is there a way this issue can be addressed?
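To illustrate the kind of workaround I have in mind: if the exported model exposed a per-detection image/batch index alongside the boxes (I'm not sure any of the export paths actually do, so the input/output names below are placeholders), regrouping the stacked detections would be straightforward. A rough sketch:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical sketch: assumes the exported ONNX model accepts a batched image
# tensor and returns stacked detections plus a per-detection batch index.
# "images" and the 4-output layout are assumptions, not the real export names.
sess = ort.InferenceSession("model.onnx")

images = np.random.rand(3, 3, 800, 800).astype(np.float32)  # 3 dummy images
boxes, scores, classes, batch_idx = sess.run(None, {"images": images})
# batch_idx would indicate which input image each detection came from

# Regroup the stacked detections per input image
per_image = [
    {
        "boxes": boxes[batch_idx == i],
        "scores": scores[batch_idx == i],
        "classes": classes[batch_idx == i],
    }
    for i in range(images.shape[0])
]
print([len(d["boxes"]) for d in per_image])  # e.g. [7, 5, 8], summing to 20
```

Without something like that batch index (or a per-image split count) in the exported model's outputs, I don't see how to attribute the stacked detections back to their source images.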