This notebook shows how to do inference with Automatic Device Selection (AUTO). To learn more about the logic of this mode, refer to the Automatic device selection article.
A basic introduction to use Auto Device Selection with OpenVINO.
This notebook demonstrates how to compile a model with the AUTO device, compares first inference latency (model compilation time + first inference time) between the GPU device and the AUTO device, and shows how the performance hints (THROUGHPUT and LATENCY) steer results toward the desired metric.
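As a sketch of the latency comparison described above, the helper below times one compile plus the first inference for a given device. The OpenVINO calls shown in the usage comment (model path, input, and hint dictionary) are illustrative assumptions, not the notebook's exact code.

```python
import time

def first_inference_latency(compile_fn, infer_fn):
    """Return (compile_s, first_infer_s, total_s) for one compile + first inference."""
    t0 = time.perf_counter()
    compiled = compile_fn()          # e.g. compile for "GPU" or "AUTO"
    t1 = time.perf_counter()
    infer_fn(compiled)               # first inference on the compiled model
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1, t2 - t0

# Hypothetical usage with the OpenVINO Python API (paths/inputs are placeholders):
#   import openvino as ov
#   core = ov.Core()
#   model = core.read_model("model.xml")
#   compile_s, infer_s, total_s = first_inference_latency(
#       lambda: core.compile_model(model, "AUTO"),
#       lambda compiled: compiled({0: dummy_input}),
#   )
#   # Performance hints are passed as a config dict, e.g.:
#   #   core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})
#   #   core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "LATENCY"})
```

Running the same measurement once with `"GPU"` and once with `"AUTO"` gives the first-inference-latency comparison the notebook performs.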
This is a self-contained example that relies solely on its own code.
We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.
For details, please refer to the Installation Guide.