### Search before asking

- [X] I have searched the Inference [issues](https://github.com/roboflow/inference/issues) and found no similar bug report.

### Bug

The same workflow was tested in two setups, measuring only the latency of single-frame processing inside the [`WorkflowRunner.run_workflow(...)`](https://github.com/roboflow/inference/blob/8816972b413ccc6b050c8f5f192ac38785a15521/inference/core/interfaces/stream/model_handlers/workflows.py#L10) function:

- MacBook, bare metal, using `InferencePipeline` directly from a script: ~40 ms
- MacBook, inside a Docker container, behind the API: ~110 ms

The MacBook also hits 100% CPU utilisation when `InferencePipeline` is running inside the container 😢

### Environment

_No response_

### Minimal Reproducible Example

_No response_

### Additional

_No response_

### Are you willing to submit a PR?

- [ ] Yes I'd like to help by submitting a PR!
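
---

For anyone reproducing the latency comparison above, a generic per-frame timing harness could look like the sketch below. This is *not* the reporter's actual measurement code; `fake_workflow` is a hypothetical stand-in for the real `WorkflowRunner.run_workflow(...)` call, and the warm-up count is an assumption to keep model-loading effects out of the numbers:

```python
import statistics
import time


def measure_latency_ms(process_frame, frames, warmup=5):
    """Time a per-frame processing callable; return per-frame latencies in ms."""
    # Warm-up iterations so one-time costs (model load, caches) don't skew results.
    for frame in frames[:warmup]:
        process_frame(frame)
    latencies = []
    for frame in frames:
        start = time.perf_counter()
        process_frame(frame)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies


if __name__ == "__main__":
    # Hypothetical stand-in for WorkflowRunner.run_workflow(...);
    # substitute the real call when measuring inside the container vs. bare metal.
    def fake_workflow(frame):
        time.sleep(0.001)  # simulate ~1 ms of processing work

    frames = list(range(20))
    lat = measure_latency_ms(fake_workflow, frames)
    print(f"median latency: {statistics.median(lat):.1f} ms")
```

Running the same harness bare metal and inside the container should isolate whether the ~40 ms vs. ~110 ms gap comes from the workflow execution itself or from the surrounding API layer.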