
PyPI 1.16.0 release requires specifying the execution provider during InferenceSession creation #17631

Closed
cbourjau opened this issue Sep 20, 2023 · 5 comments
@cbourjau (Contributor) commented Sep 20, 2023

Describe the issue

The latest 1.16.0 CPU release (aka onnxruntime) on PyPI appears to include AzureExecutionProvider as a default provider (introduced by #17025?). This makes the default execution provider ambiguous with respect to the other default, CPUExecutionProvider, so session creation now raises an exception:

ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are 
required to explicitly set the providers parameter when instantiating InferenceSession. For example, 
onnxruntime.InferenceSession(..., providers=['AzureExecutionProvider', 'CPUExecutionProvider'], ...)

The text in the exception is misleading: as of 1.15.1 it was not necessary to specify an explicit provider through the Python interface, and 1.15.1 emitted no deprecation warning about this change either.

This is a breaking change in the Python interface. Furthermore, making AzureExecutionProvider one of the default EPs seems a strange choice given that it appears to be undocumented.

To reproduce

Create an inference session via the Python API (using a random model from the onnx/onnx repository):

import onnxruntime as ort
ort.InferenceSession("onnx/onnx/backend/test/data/node/test_abs/model.onnx")

Urgency

This is a breaking change, with no prior deprecation warning, to possibly the most common way of initializing a session.

Platform

Mac

OS Version

12.3.1

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

Edit: Clarified that this is about the CPU-release

@tianleiwu (Contributor) commented:
The providers parameter was already required for onnxruntime-gpu 1.15.1 on Linux/Windows. Maybe this issue is specific to the CPU-only package.

@cbourjau (Contributor, Author) commented:

Yes, this is for the cpu package, or at least that is what I think. I installed using:

$ pip install onnxruntime==1.16

@pranavsharma (Contributor) commented:

Yes, this looks like a valid issue. The workaround is to explicitly supply the CPUExecutionProvider in the list. @RandySheriff - can you take a look?
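The workaround above can be sketched as a small helper that always builds an explicit provider list (choose_providers is an illustrative name, not an onnxruntime API):

```python
def choose_providers(available):
    """Return an explicit provider list so InferenceSession never has to guess.

    Prefers the CPU provider; otherwise passes through whatever the
    build advertises. Provider names follow onnxruntime's conventions.
    """
    if "CPUExecutionProvider" in available:
        return ["CPUExecutionProvider"]
    return list(available)

# With the 1.16.0 CPU wheel, which advertises two default providers:
providers = choose_providers(["AzureExecutionProvider", "CPUExecutionProvider"])
# providers == ["CPUExecutionProvider"]
```

In practice the available list would come from ort.get_available_providers(), and the result would be passed as ort.InferenceSession(model_path, providers=providers).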

@ahmedharbaoui commented:

+1 also having this issue

lu-ohai added a commit to oracle/accelerated-data-science that referenced this issue Sep 21, 2023
Gourieff added a commit to Gourieff/sd-webui-reactor that referenced this issue Sep 23, 2023
Temporary fix due to the MS Issue of 1.16.0 ORT library microsoft/onnxruntime#17631
Default 'AzureExecutionProvider' instead of CPU
Awaiting for the 1.16.1 patch
Gourieff added a commit to Gourieff/comfyui-reactor-node that referenced this issue Sep 23, 2023
Temporary fix due to the MS Issue of 1.16.0 ORT library microsoft/onnxruntime#17631
Default 'AzureExecutionProvider' instead of CPU
Awaiting for the 1.16.1 patch
mroxso added a commit to mroxso/piper-recording-studio that referenced this issue Sep 25, 2023
David-davidlxl added a commit to David-davidlxl/Lobsterpincer-Spectator-For-Win-RPi-Combo that referenced this issue Oct 1, 2023
There is currently a bug in `onnxruntime==1.16.0` (microsoft/onnxruntime#17631), so the installation instructions have been revised to reflect that. The usage of the "requirements.txt" file has also been clarified.
NeonDaniel pushed a commit to NeonGeckoCom/neon_speech that referenced this issue Oct 3, 2023
NeonDaniel added a commit to NeonGeckoCom/neon_speech that referenced this issue Oct 3, 2023
# Description
Add `get_stt` timing metric for audio input

# Issues
NeonGeckoCom/neon-minerva#3

# Other Notes
Includes patch for microsoft/onnxruntime#17631
Updates license tests for dependency with undefined MIT license

---------

Co-authored-by: Daniel McKnight <daniel@neon.ai>
@mshr-h commented Oct 13, 2023

1.16.1 was released with the fix commit 1c245e6.
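Until downstream environments pick up the patched wheel, one defensive pattern is to gate on the affected version and always pass providers explicitly there. A minimal sketch (needs_explicit_providers is a hypothetical helper, not part of onnxruntime, and assumes a plain "major.minor.patch" version string):

```python
def needs_explicit_providers(version):
    """True only for the 1.16.0 release, whose CPU wheel shipped with
    two default providers so InferenceSession refuses to pick one."""
    major, minor, patch = (int(p) for p in version.split(".")[:3])
    return (major, minor, patch) == (1, 16, 0)

needs_explicit_providers("1.16.0")  # affected release
needs_explicit_providers("1.16.1")  # fixed in the patch release
```

In practice the version string would come from onnxruntime.__version__; passing an explicit providers list is harmless on unaffected versions, so the gate is optional.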
