
Batch inference in caffe2 export #1030

Closed
ashnair1 opened this issue Mar 11, 2020 · 6 comments
Labels: contributions welcome, enhancement

Comments


ashnair1 commented Mar 11, 2020

Batch inference using the default detectron2 model was previously discussed in #282. I was wondering whether it would be possible to do the same with the exported model (ONNX or Caffe2).

When I pass in a batch of images, the final detections are all stacked together, so you can't tell which detections belong to which image.
For example, if I pass in 3 images, I'll receive 20 detections with no way of knowing how many belong to the 1st image, how many belong to the 2nd image, and so on.

Is there a way this issue can be addressed?
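For context, here is a rough sketch of what I mean. The input/output names and layout are placeholders that depend on the exported graph, and a caffe2-style export may additionally need the caffe2 custom ops available to run at all:

```python
import numpy as np
import onnxruntime as ort  # assumes the exported graph can be run with onnxruntime

# Two dummy preprocessed images stacked into a single NCHW batch
# (same height/width after resizing and padding).
h, w = 480, 640
batch = np.random.rand(2, 3, h, w).astype(np.float32)
im_info = np.array([[h, w, 1.0], [h, w, 1.0]], dtype=np.float32)  # (height, width, scale) per image

sess = ort.InferenceSession("model.onnx")  # placeholder path
outputs = sess.run(None, {"data": batch, "im_info": im_info})  # input names are assumptions

boxes, scores, classes = outputs[:3]
# All three arrays come back concatenated across the whole batch, with nothing
# indicating which rows belong to which input image.
```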


ashnair1 commented Mar 15, 2020

So detector_results found here is an InstanceList and contains the field indices, which indicates which image each detection belongs to. However, when I tried to export this field, it was exported as a constant. How do we go about exporting the indices field as a proper output?

Edit: I was able to export the batch ids by returning them where they are created by the BoxWithNMSLimit op.
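Roughly speaking, once the batch_splits tensor produced alongside the NMS outputs is exposed as an extra output, a per-detection batch id can be derived from it like this (an illustrative sketch, not the actual export code):

```python
import torch

def batch_ids_from_splits(batch_splits: torch.Tensor) -> torch.Tensor:
    # batch_splits[i] = number of detections kept for image i, e.g. tensor([3, 2])
    return torch.repeat_interleave(
        torch.arange(len(batch_splits)), batch_splits.to(torch.int64)
    )

print(batch_ids_from_splits(torch.tensor([3, 2])))  # tensor([0, 0, 0, 1, 1])
```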

ashnair1 changed the title from "Batch inference with onnx model" to "Batch inference with exported model" on Mar 18, 2020

ashnair1 commented Mar 18, 2020

@ppwwyyxx

So I was able to export the batch ids, but that doesn't resolve the problem. I'll illustrate with an example. These are the two images in the batch I'm sending to the model:

[image: 000000000139_resize]
[image: 000000000285_resize]

Now here are the formatted detections the model makes:

Category Score BatchID CatID
person 0.28 0 0
person 0.09 0 0
bench 0.09 0 13
bear 1.00 0 21
bottle 0.15 0 39
bottle 0.08 0 39
chair 0.69 0 56
chair 0.67 0 56
chair 0.36 0 56
chair 0.32 0 56
chair 0.28 0 56
chair 0.24 0 56
chair 0.15 0 56
chair 0.15 0 56
chair 0.11 0 56
chair 0.09 0 56
chair 0.09 0 56
chair 0.08 0 56
chair 0.06 0 56
potted plant 0.38 0 58
dining table 0.07 0 60
tv 0.82 0 62
vase 0.08 0 75
bear 0.97 1 21
tv 0.30 1 62

Originally, I was expecting detections from image 1 to have batch id 0 and those from image 2 to have batch id 1, but this doesn't seem to be the case. As you can see, the detections get mixed up, which is why a bear is being detected in image 1 and a tv in image 2.

Any ideas as to why this is happening?

ppwwyyxx commented:

Batch inference of the exported model is not currently supported.
It could be supported if the "batch_split_nms" output from the caffe2 operator were parsed and exported. Labeling this issue as a feature request.
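Concretely, if that output were parsed, the concatenated detections could be partitioned per image along these lines (a sketch; the exact output layout is an assumption):

```python
import numpy as np

def split_by_image(boxes, scores, classes, batch_splits):
    # batch_splits[i] = number of detections kept for image i
    edges = np.cumsum(batch_splits)[:-1]
    return [
        {"boxes": b, "scores": s, "classes": c}
        for b, s, c in zip(
            np.split(boxes, edges), np.split(scores, edges), np.split(classes, edges)
        )
    ]

# With batch_splits = [23, 2] for the example above, the first 23 rows would go
# to image 1 and the last 2 rows to image 2.
```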

tkaleczyc-ats commented:

Are there any plans to support batch inference in the near future?

ppwwyyxx commented:

Other export methods support batch inference according to https://detectron2.readthedocs.io/en/latest/tutorials/deployment.html
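For comparison, the plain PyTorch model already accepts a list of per-image inputs and returns per-image results, as discussed in #282 (a minimal sketch; config and weight paths are placeholders):

```python
import torch
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer

cfg = get_cfg()
cfg.merge_from_file("path/to/config.yaml")              # placeholder
model = build_model(cfg)
DetectionCheckpointer(model).load("path/to/model.pth")  # placeholder
model.eval()

# One dict per image; images may have different sizes (CHW tensors in the cfg's input format).
imgs = [torch.rand(3, 480, 640) * 255, torch.rand(3, 600, 800) * 255]
inputs = [{"image": im} for im in imgs]

with torch.no_grad():
    outputs = model(inputs)  # list with one element per input image
    # outputs[i]["instances"] contains only the detections for image i
```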

ppwwyyxx changed the title from "Batch inference with exported model" to "Batch inference in caffe2 export" on Apr 21, 2021
ppwwyyxx commented:

Caffe2 is being deprecated in favor of PyTorch, so this issue won't be resolved. Closing as won't-fix.
