How to extract output tensors in batch mode inference? #21191

Closed Answered by mattiasmar
mattiasmar asked this question in Q&A
It turns out that OpenVINO supports rich output data structures (lists, tuples, ...) only in batch-1 mode. For batch sizes above 1, you have to work with tensors only. When you do that, the regular calls to infer_request.get_output_tensor(0), infer_request.get_output_tensor(1), ... work just fine.

Answer selected by mattiasmar