
optimum-cli export onnx fails with ImportError due to NumPy 2.x incompatibility—requires numpy<2 #239

@bconsolvo

Description

When running the command below from the Whisper example (https://github.com/amd/RyzenAI-SW/tree/main/demo/ASR/Whisper)

optimum-cli export onnx --model openai/whisper-base.en --opset 17 exported_model_directory

I receive the following error:

C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\torch\onnx\_internal\registration.py:162: OnnxExporterWarning: Symbolic function 'aten::scaled_dot_product_attention' already registered for opset 14. Replacing the existing function with new function. This is unexpected. Please report it on https://github.com/pytorch/pytorch/issues.
  warnings.warn(
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Moving the following attributes in the config to the generation config: {'max_length': 448, 'suppress_tokens': [1, 2, 7, 8, 9, 10, 14, 25, 26, 27, 28, 29, 31, 58, 59, 60, 61, 62, 63, 90, 91, 92, 93, 357, 366, 438, 532, 685, 705, 796, 930, 1058, 1220, 1267, 1279, 1303, 1343, 1377, 1391, 1635, 1782, 1875, 2162, 2361, 2488, 3467, 4008, 4211, 4600, 4808, 5299, 5855, 6329, 7203, 9609, 9959, 10563, 10786, 11420, 11709, 11907, 13163, 13697, 13700, 14808, 15306, 16410, 16791, 17992, 19203, 19510, 20724, 22305, 22935, 27007, 30109, 30420, 33409, 34949, 40283, 40493, 40549, 47282, 49146, 50257, 50357, 50358, 50359, 50360, 50361]}. You are seeing this warning because you've set generation parameters in the model config, as opposed to in the generation config.
C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\transformers\models\whisper\modeling_whisper.py:881: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if input_features.shape[-1] != expected_seq_length:
C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\transformers\models\whisper\modeling_whisper.py:551: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attn_output.size() != (bsz, self.num_heads, tgt_len, self.head_dim):

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.2.6 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

Traceback (most recent call last):
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\Scripts\optimum-cli.exe\__main__.py", line 7, in <module>
    sys.exit(main())
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\commands\optimum_cli.py", line 208, in main
    service.run()
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\commands\export\onnx.py", line 276, in run
    main_export(
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\__main__.py", line 418, in main_export        
    onnx_export_from_model(
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\convert.py", line 1186, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\convert.py", line 770, in export_models       
    export(
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\convert.py", line 903, in export
    config.fix_dynamic_axes(output, device=device, input_shapes=input_shapes, dtype=dtype)
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\base.py", line 220, in fix_dynamic_axes       
    from onnxruntime import GraphOptimizationLevel, InferenceSession, SessionOptions
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\onnxruntime\__init__.py", line 24, in <module>
    from onnxruntime.capi._pybind_state import (
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\onnxruntime\capi\_pybind_state.py", line 32, in <module>
    from .onnxruntime_pybind11_state import *  # noqa
Traceback (most recent call last):
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\numpy\core\_multiarray_umath.py", line 44, in __getattr__
    raise ImportError(msg)
ImportError:
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.2.6 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.


Traceback (most recent call last):
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\Scripts\optimum-cli.exe\__main__.py", line 7, in <module>
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\commands\optimum_cli.py", line 208, in main
    service.run()
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\commands\export\onnx.py", line 276, in run
    main_export(
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\__main__.py", line 418, in main_export        
    onnx_export_from_model(
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\convert.py", line 1186, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\convert.py", line 770, in export_models       
    export(
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\convert.py", line 903, in export
    config.fix_dynamic_axes(output, device=device, input_shapes=input_shapes, dtype=dtype)
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\optimum\exporters\onnx\base.py", line 220, in fix_dynamic_axes       
    from onnxruntime import GraphOptimizationLevel, InferenceSession, SessionOptions
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\onnxruntime\__init__.py", line 61, in <module>
    raise import_capi_exception
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\onnxruntime\__init__.py", line 24, in <module>
    from onnxruntime.capi._pybind_state import (
  File "C:\Users\bconsolv\AppData\Local\miniforge3\envs\ryzen-ai-1.5.1\lib\site-packages\onnxruntime\capi\_pybind_state.py", line 32, in <module>
    from .onnxruntime_pybind11_state import *  # noqa
ImportError

To fix this, I had to downgrade to numpy==1.26.4.

I suggest adding the following pin to the requirements.txt file:

numpy==1.26.4
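Until the pin lands, the demo could also fail fast with an actionable message instead of crashing deep inside onnxruntime's import. The sketch below is only a suggestion and is not part of the RyzenAI-SW repo; the helper names (`numpy_major`, `require_numpy_1x`) are hypothetical.

```python
def numpy_major(version: str) -> int:
    """Parse the major component of a NumPy version string, e.g. "2.2.6" -> 2."""
    return int(version.split(".")[0])


def require_numpy_1x(version: str) -> None:
    """Raise early with a clear fix when NumPy 2.x is installed.

    The onnxruntime wheel in this environment was compiled against NumPy 1.x,
    so importing it under NumPy 2.x produces the opaque ImportError above.
    """
    if numpy_major(version) >= 2:
        raise RuntimeError(
            f"NumPy {version} detected, but this environment's onnxruntime was "
            'built against NumPy 1.x. Fix: pip install "numpy==1.26.4"'
        )
```

A call like `require_numpy_1x(numpy.__version__)` at the top of the demo script would turn the long traceback into a one-line instruction.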

Device Information: Processor AMD Ryzen 7 PRO 8840U w/ Radeon 780M Graphics (3.30 GHz)
Software version: Ryzen AI 1.5.1
