Batch inference in caffe2 export #1030

Closed
@ashnair1

Description

Batch inference using the default detectron2 model was previously discussed in #282. I was wondering whether the same is possible with the exported model (ONNX or Caffe2).

When I pass in a batch of images, the final detections are all stacked together, so you can't tell which detections belong to which image. For example, if I pass in 3 images, I might receive 20 detections without knowing how many belong to the 1st image, how many to the 2nd, and so on.

Is there a way this issue can be addressed?
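One possible workaround, assuming the exported model can be made to emit (or already emits) a per-detection batch index alongside the stacked boxes: regroup the flat output by that index. The names `boxes`, `scores`, `classes`, and `batch_ids` below are hypothetical arrays, not actual outputs of the detectron2 Caffe2 export; this is only a sketch of the regrouping step.

```python
import numpy as np

def split_detections(boxes, scores, classes, batch_ids, batch_size):
    """Regroup stacked detections by image.

    boxes:     (N, 4) array of all detections from the whole batch
    scores:    (N,) confidence scores
    classes:   (N,) class ids
    batch_ids: (N,) hypothetical per-detection image index in [0, batch_size)
    Returns a list of per-image dicts, one entry per input image.
    """
    per_image = []
    for i in range(batch_size):
        mask = batch_ids == i  # select detections belonging to image i
        per_image.append({
            "boxes": boxes[mask],
            "scores": scores[mask],
            "classes": classes[mask],
        })
    return per_image
```

With such an index column, an empty image simply yields empty arrays rather than ambiguously shifting the remaining detections.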
