Search before asking
Bug
The same workflow was tested, measuring only the latency of single-frame processing inside the WorkflowRunner.run_workflow(...) function:
MacBook, bare metal, using InferencePipeline directly in a script: ~40 ms
MacBook, inside a Docker container, behind the API: ~110 ms
The MacBook also hits 100% CPU utilisation when InferencePipeline runs inside the container 😢
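For reference, a minimal sketch of how the per-frame latency numbers above could be collected. This is illustrative only: process_frame is a placeholder standing in for the actual WorkflowRunner.run_workflow(...) call, and all names here are assumptions, not the real benchmarking code.

```python
import time
from statistics import mean


def measure_frame_latency(process_frame, frames):
    """Time each call to process_frame and return per-frame latencies in ms."""
    latencies_ms = []
    for frame in frames:
        start = time.perf_counter()
        process_frame(frame)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    return latencies_ms


# Placeholder for the real per-frame work done inside run_workflow(...)
def process_frame(frame):
    time.sleep(0.001)  # simulate ~1 ms of processing


latencies = measure_frame_latency(process_frame, range(10))
print(f"mean per-frame latency: {mean(latencies):.1f} ms")
```

Wrapping the timing around the single run_workflow(...) call (rather than the whole request path) is what isolates the ~40 ms vs ~110 ms gap to the processing step itself.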
Environment
No response
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?