Description
Describe the feature request
onnxruntime makes it difficult to determine ahead of time whether a particular execution provider is available.
Installing the onnxruntime-gpu package installs 3 different execution providers:
- TensorRT
- CUDA
- CPU
We plan to use the onnxruntime-gpu package and, depending on whether a GPU is available, switch to CPUExecutionProvider ahead of time to reduce load-time overhead.
e.g. in PyTorch one can determine this using torch.cuda.is_available().
Can we have something similar in onnxruntime to determine whether CUDA (or a particular execution provider) is actually usable for a given installation?
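As a sketch of what we do today: we filter a preferred-provider list against `onnxruntime.get_available_providers()` and fall back to CPU. The helper name `pick_provider` is our own, not an onnxruntime API, and this only checks what the installed build was compiled with, not whether the provider will actually initialize at runtime.

```python
# Hedged sketch (hypothetical helper, not an onnxruntime API):
# choose the first preferred provider that the installed build reports
# as available, falling back to CPUExecutionProvider. Note this reflects
# build-time availability only, which is exactly the limitation this
# issue is about.

def pick_provider(preferred, available):
    """Return the first provider from `preferred` present in `available`."""
    for provider in preferred:
        if provider in available:
            return provider
    return "CPUExecutionProvider"

# Usage (assuming onnxruntime is installed):
# import onnxruntime as ort
# provider = pick_provider(
#     ["CUDAExecutionProvider", "CPUExecutionProvider"],
#     ort.get_available_providers(),
# )
```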
https://onnxruntime.ai/docs/api/python/api_summary.html
onnxruntime.get_available_providers() → [list](https://docs.python.org/3/library/stdtypes.html#list)[[str](https://docs.python.org/3/library/stdtypes.html#str)]
Return list of available Execution Providers in this installed version of Onnxruntime. The order of elements represents the default priority order of Execution Providers from highest to lowest.
get_available_providers() returns the providers available in the installed package, but not which ones can actually be used at runtime.
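The only runtime check we are aware of is to attempt session creation with the desired provider and then inspect which providers the session actually initialized via `session.get_providers()`. A minimal sketch (the `provider_was_applied` helper is hypothetical; the onnxruntime calls are commented out because they require a model file):

```python
# Hedged workaround sketch: onnxruntime silently falls back to other
# providers if the requested one cannot be initialized, so comparing the
# requested provider against session.get_providers() after the fact is
# the closest thing to a runtime availability check today.

def provider_was_applied(requested, applied):
    """True if the requested provider is among the providers the
    session actually initialized (session.get_providers())."""
    return requested in applied

# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
# if not provider_was_applied("CUDAExecutionProvider", sess.get_providers()):
#     # CUDA was not usable at runtime; the session fell back to CPU.
#     sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
```

The drawback is that this check only happens after paying the session load cost, which is exactly the overhead we want to avoid.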
Describe scenario use case
Use case in https://github.com/quic/aimet