
PyPI 1.16.0 release requires specifying the execution provider during InferenceSession creation #279

Open
autumnwindbleak opened this issue Sep 25, 2023 · 1 comment

Comments


autumnwindbleak commented Sep 25, 2023

Installed with `pip install cnocr` and found that onnxruntime was installed at version 1.16.0. This triggers the problem described in the title: the 1.16.0 PyPI release requires explicitly specifying the execution provider when creating an InferenceSession.

Error message:

Traceback (most recent call last):
  File "C:\Users\Inch\PycharmProjects\genshin_relic_helper\test\test.py", line 3, in <module>
    ocr = CnOcr(det_model_name='naive_det')
  File "C:\Users\Inch\venv\lib\site-packages\cnocr\cn_ocr.py", line 155, in __init__
    self.rec_model = rec_cls(
  File "C:\Users\Inch\venv\lib\site-packages\cnocr\recognizer.py", line 138, in __init__
    self._model = self._get_model(context)
  File "C:\Users\Inch\venv\lib\site-packages\cnocr\recognizer.py", line 191, in _get_model
    model = onnxruntime.InferenceSession(self._model_fp)
  File "C:\Users\Inch\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
    raise e
  File "C:\Users\Inch\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\Inch\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 451, in _create_inference_session
    raise ValueError(
ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['AzureExecutionProvider', 'CPUExecutionProvider'], ...)

I rolled back onnxruntime to 1.15.1 and everything works again.
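As an alternative to pinning onnxruntime, the error message above suggests passing `providers` explicitly. A minimal sketch of a helper that builds such a list (the function name `pick_providers` is my own, not part of cnocr or onnxruntime):

```python
def pick_providers(available):
    """Choose an explicit providers list from onnxruntime.get_available_providers().

    Prefers CUDA when present and always falls back to CPU, so the
    resulting list satisfies the ORT >= 1.16 requirement that
    InferenceSession receive providers explicitly.
    """
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]
```

With a helper like this, the failing call in recognizer.py could become something along the lines of `onnxruntime.InferenceSession(self._model_fp, providers=pick_providers(onnxruntime.get_available_providers()))`.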

@breezedeus (Owner)
Thanks. This will be fixed in the next release.
