the log as follows:
(igpu-example) C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started>python -m pip install -r requirements.txt
Collecting olive-ai==0.5.0 (from -r requirements.txt (line 1))
Using cached olive_ai-0.5.0-py3-none-any.whl.metadata (3.2 kB)
Requirement already satisfied: pillow in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from -r requirements.txt (line 2)) (11.1.0)
Requirement already satisfied: numpy in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from olive-ai==0.5.0->-r requirements.txt (line 1)) (1.26.4)
Requirement already satisfied: onnx in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from olive-ai==0.5.0->-r requirements.txt (line 1)) (1.16.1)
Requirement already satisfied: optuna in c:\users\m1860\appdata\roaming\python\python310\site-packages (from olive-ai==0.5.0->-r requirements.txt (line 1)) (4.2.0)
Requirement already satisfied: pandas in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from olive-ai==0.5.0->-r requirements.txt (line 1)) (2.2.3)
Collecting protobuf<4.0.0 (from olive-ai==0.5.0->-r requirements.txt (line 1))
Using cached protobuf-3.20.3-cp310-cp310-win_amd64.whl.metadata (698 bytes)
Requirement already satisfied: pydantic in c:\users\m1860\appdata\roaming\python\python310\site-packages (from olive-ai==0.5.0->-r requirements.txt (line 1)) (2.10.5)
Requirement already satisfied: pyyaml in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from olive-ai==0.5.0->-r requirements.txt (line 1)) (6.0.2)
Requirement already satisfied: torch in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from olive-ai==0.5.0->-r requirements.txt (line 1)) (2.3.1+cpu)
Requirement already satisfied: torchmetrics>=1.0.0 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from olive-ai==0.5.0->-r requirements.txt (line 1)) (1.6.1)
Requirement already satisfied: transformers in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from olive-ai==0.5.0->-r requirements.txt (line 1)) (4.48.3)
Requirement already satisfied: packaging>17.1 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from torchmetrics>=1.0.0->olive-ai==0.5.0->-r requirements.txt (line 1)) (24.2)
Requirement already satisfied: lightning-utilities>=0.8.0 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from torchmetrics>=1.0.0->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.11.9)
Requirement already satisfied: filelock in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.17.0)
Requirement already satisfied: typing-extensions>=4.8.0 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (4.12.2)
Requirement already satisfied: sympy in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (1.13.3)
Requirement already satisfied: networkx in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.4.2)
Requirement already satisfied: jinja2 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.1.5)
Requirement already satisfied: fsspec in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (2024.9.0)
Requirement already satisfied: mkl<=2021.4.0,>=2021.1.1 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (2021.4.0)
Requirement already satisfied: alembic>=1.5.0 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (1.14.1)
Requirement already satisfied: colorlog in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (6.9.0)
Requirement already satisfied: sqlalchemy>=1.4.2 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (2.0.37)
Requirement already satisfied: tqdm in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (4.67.1)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from pandas->olive-ai==0.5.0->-r requirements.txt (line 1)) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from pandas->olive-ai==0.5.0->-r requirements.txt (line 1)) (2025.1)
Requirement already satisfied: tzdata>=2022.7 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from pandas->olive-ai==0.5.0->-r requirements.txt (line 1)) (2025.1)
Requirement already satisfied: annotated-types>=0.6.0 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from pydantic->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.7.0)
Requirement already satisfied: pydantic-core==2.27.2 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from pydantic->olive-ai==0.5.0->-r requirements.txt (line 1)) (2.27.2)
Requirement already satisfied: huggingface-hub<1.0,>=0.24.0 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.28.1)
Requirement already satisfied: regex!=2019.12.17 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (2024.11.6)
Requirement already satisfied: requests in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (2.32.3)
Requirement already satisfied: tokenizers<0.22,>=0.21 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.21.0)
Requirement already satisfied: safetensors>=0.4.1 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.5.2)
Requirement already satisfied: Mako in c:\users\m1860\appdata\roaming\python\python310\site-packages (from alembic>=1.5.0->optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (1.3.8)
Requirement already satisfied: setuptools in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from lightning-utilities>=0.8.0->torchmetrics>=1.0.0->olive-ai==0.5.0->-r requirements.txt (line 1)) (69.5.1)
Requirement already satisfied: intel-openmp==2021.* in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (2021.4.0)
Requirement already satisfied: tbb==2021.* in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (2021.13.1)
Requirement already satisfied: six>=1.5 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from python-dateutil>=2.8.2->pandas->olive-ai==0.5.0->-r requirements.txt (line 1)) (1.17.0)
Requirement already satisfied: greenlet!=0.4.17 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from sqlalchemy>=1.4.2->optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.1.1)
Requirement already satisfied: colorama in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from tqdm->optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.4.6)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from jinja2->torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.0.2)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from requests->transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.4.1)
Requirement already satisfied: idna<4,>=2.5 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from requests->transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from requests->transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (2.3.0)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from requests->transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (2025.1.31)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from sympy->torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (1.3.0)
Using cached olive_ai-0.5.0-py3-none-any.whl (489 kB)
Using cached protobuf-3.20.3-cp310-cp310-win_amd64.whl (904 kB)
Installing collected packages: protobuf, olive-ai
Attempting uninstall: protobuf
Found existing installation: protobuf 5.29.3
Uninstalling protobuf-5.29.3:
Successfully uninstalled protobuf-5.29.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
quark 0.6.0 requires onnxruntime<1.20.0,>=1.17.0, which is not installed.
vai-q-onnx 1.19.0 requires onnxruntime<1.20.0,>=1.17.0, which is not installed.
Successfully installed olive-ai-0.5.0 protobuf-3.20.3
(igpu-example) C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started>python -m olive.workflows.run --config resnet50_config.json --setup
[2025-02-12 16:15:49,059] [INFO] [run.py:106:dependency_setup] The following packages are required in the local environment: ['onnxconverter-common', 'psutil', 'onnxruntime-gpu']
[2025-02-12 16:15:49,059] [INFO] [run.py:118:dependency_setup] psutil is already installed.
[2025-02-12 16:15:49,127] [WARNING] [run.py:285:check_local_ort_installation] There are one or more onnxruntime packages installed in your environment!
The setup process is stopped to avoid potential conflicts. Please run the following commands manually:
Uninstall all existing onnxruntime packages: 'C:\Users\m1860\.conda\envs\igpu-example\python.exe -m pip uninstall -y onnxruntime-genai onnxruntime_extensions onnxruntime-vitisai'
Install onnxruntime-gpu: 'C:\Users\m1860\.conda\envs\igpu-example\python.exe -m pip install onnxruntime-gpu'
You can also instead install the corresponding nightly version following the instructions at https://onnxruntime.ai/docs/install/#inference-install-table-for-all-languages
[2025-02-12 16:15:49,127] [INFO] [run.py:125:dependency_setup] Running: C:\Users\m1860\.conda\envs\igpu-example\python.exe -m pip install onnxconverter-common
Collecting onnxconverter-common
Using cached onnxconverter_common-1.14.0-py2.py3-none-any.whl.metadata (4.2 kB)
Requirement already satisfied: numpy in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from onnxconverter-common) (1.26.4)
Requirement already satisfied: onnx in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from onnxconverter-common) (1.16.1)
Requirement already satisfied: packaging in c:\users\m1860\.conda\envs\igpu-example\lib\site-packages (from onnxconverter-common) (24.2)
Collecting protobuf==3.20.2 (from onnxconverter-common)
Using cached protobuf-3.20.2-cp310-cp310-win_amd64.whl.metadata (698 bytes)
Using cached onnxconverter_common-1.14.0-py2.py3-none-any.whl (84 kB)
Using cached protobuf-3.20.2-cp310-cp310-win_amd64.whl (904 kB)
Installing collected packages: protobuf, onnxconverter-common
Attempting uninstall: protobuf
Found existing installation: protobuf 3.20.3
Uninstalling protobuf-3.20.3:
Successfully uninstalled protobuf-3.20.3
WARNING: Failed to remove contents in a temporary directory 'C:\Users\m1860\.conda\envs\igpu-example\Lib\site-packages\google\~rotobuf'.
You can safely remove it manually.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
quark 0.6.0 requires onnxruntime<1.20.0,>=1.17.0, which is not installed.
vai-q-onnx 1.19.0 requires onnxruntime<1.20.0,>=1.17.0, which is not installed.
Successfully installed onnxconverter-common-1.14.0 protobuf-3.20.2
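For context on the two protobuf downgrades above: olive-ai 0.5.0 asks for `protobuf<4.0.0` and onnxconverter-common 1.14.0 pins `protobuf==3.20.2`, so pip ends with 3.20.2, which happens to satisfy both constraints. A toy sketch of that check (this is an illustration only, not pip's real resolver; the `satisfies` helper is hypothetical):

```python
# Toy comparator for the two specifier shapes seen in the log:
# olive-ai wants protobuf<4.0.0, onnxconverter-common wants protobuf==3.20.2.
def satisfies(version: str, spec: str) -> bool:
    """Support only '==X.Y.Z' and '<X.Y.Z' specifiers (illustration only)."""
    op = "==" if spec.startswith("==") else "<"
    bound = tuple(int(p) for p in spec.lstrip("<=").split("."))
    ver = tuple(int(p) for p in version.split("."))
    return ver == bound if op == "==" else ver < bound

final = "3.20.2"                       # what pip leaves installed
assert satisfies(final, "<4.0.0")      # olive-ai's constraint still holds
assert satisfies(final, "==3.20.2")    # onnxconverter-common's pin holds
assert not satisfies("5.29.3", "<4.0.0")  # the originally installed 5.29.3 did not
```

This is why the earlier protobuf 5.29.3 had to be uninstalled even though nothing was broken before.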
[2025-02-12 16:15:56,723] [INFO] [run.py:127:dependency_setup] Successfully installed ['onnxconverter-common'].
(igpu-example) C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started>python -m olive.workflows.run --config resnet50_config.json --setup
[2025-02-12 16:16:07,104] [INFO] [run.py:106:dependency_setup] The following packages are required in the local environment: ['onnxconverter-common', 'psutil', 'onnxruntime-gpu']
[2025-02-12 16:16:07,152] [WARNING] [run.py:285:check_local_ort_installation] There are one or more onnxruntime packages installed in your environment!
The setup process is stopped to avoid potential conflicts. Please run the following commands manually:
Uninstall all existing onnxruntime packages: 'C:\Users\m1860\.conda\envs\igpu-example\python.exe -m pip uninstall -y onnxruntime-genai onnxruntime_extensions onnxruntime-vitisai'
Install onnxruntime-gpu: 'C:\Users\m1860\.conda\envs\igpu-example\python.exe -m pip install onnxruntime-gpu'
You can also instead install the corresponding nightly version following the instructions at https://onnxruntime.ai/docs/install/#inference-install-table-for-all-languages
[2025-02-12 16:16:07,152] [INFO] [run.py:118:dependency_setup] psutil is already installed.
[2025-02-12 16:16:07,152] [INFO] [run.py:118:dependency_setup] onnxconverter-common is already installed.
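Since the setup step keeps warning about conflicting onnxruntime packages, one quick sanity check before rerunning the workflow is to ask the installed onnxruntime build which execution providers it actually exposes (the same list Olive filters when it creates accelerators). A minimal probe, guarded so it also runs where onnxruntime is absent:

```python
# Probe the execution providers available in this environment.
# ort.get_available_providers() is the standard ONNX Runtime API for this.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime not installed at all

print(providers)
```

If `DmlExecutionProvider` is missing from the printed list, the `gpu-dml` accelerator spec cannot work regardless of the rest of the setup.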
(igpu-example) C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started>python -m olive.workflows.run --config resnet50_config.json
[2025-02-12 16:16:41,340] [DEBUG] [accelerator.py:155:create_accelerators] Initial execution providers: ['VitisAIExecutionProvider', 'DmlExecutionProvider', 'CPUExecutionProvider']
[2025-02-12 16:16:41,345] [DEBUG] [accelerator.py:168:create_accelerators] Initial accelerators: ['gpu']
[2025-02-12 16:16:41,345] [DEBUG] [accelerator.py:193:create_accelerators] Supported execution providers for device gpu: ['DmlExecutionProvider', 'CPUExecutionProvider']
[2025-02-12 16:16:41,345] [INFO] [accelerator.py:208:create_accelerators] Running workflow on accelerator specs: gpu-dml,gpu-cpu
[2025-02-12 16:16:41,345] [WARNING] [accelerator.py:210:create_accelerators] The following execution provider is not supported: VitisAIExecutionProvider. Please consider installing an onnxruntime build that contains the relevant execution providers.
[2025-02-12 16:16:41,347] [INFO] [engine.py:116:initialize] Using cache directory: cache
[2025-02-12 16:16:41,348] [INFO] [engine.py:272:run] Running Olive on accelerator: gpu-dml
[2025-02-12 16:16:41,348] [DEBUG] [engine.py:1071:create_system] create native OliveSystem SystemType.Local
[2025-02-12 16:16:41,349] [DEBUG] [engine.py:1071:create_system] create native OliveSystem SystemType.Local
[2025-02-12 16:16:41,373] [DEBUG] [engine.py:706:_cache_model] Cached model bd275a67285d20be7f33ff02c8681714 to cache\models\bd275a67285d20be7f33ff02c8681714.json
[2025-02-12 16:16:41,373] [DEBUG] [engine.py:344:run_accelerator] Running Olive in no-search mode ...
[2025-02-12 16:16:41,373] [DEBUG] [engine.py:428:run_no_search] Running ['torch_to_onnx', 'float16_conversion', 'perf_tuning'] with no search ...
[2025-02-12 16:16:41,373] [INFO] [engine.py:862:_run_pass] Running pass torch_to_onnx:OnnxConversion
[2025-02-12 16:16:41,375] [DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[2025-02-12 16:16:41,375] [DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[2025-02-12 16:16:41,379] [DEBUG] [resource_path.py:156:create_resource_path] Resource path C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started\user_script.py is inferred to be of type file.
Using cache found in C:\Users\m1860/.cache\torch\hub\pytorch_vision_v0.10.0
C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
[2025-02-12 16:16:43,461] [DEBUG] [dummy_inputs.py:34:get_dummy_inputs] Using io_config.input_shapes to get dummy inputs
[2025-02-12 16:16:43,461] [DEBUG] [config.py:163:fill_in_params] Missing parameter data_dir for component load_dataset with type dummy_dataset. Set to None.
[2025-02-12 16:16:43,483] [ERROR] [engine.py:947:_run_pass] Pass run failed.
Traceback (most recent call last):
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1817, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\models\bloom\modeling_bloom.py", line 38, in <module>
    from ...modeling_utils import PreTrainedModel
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\modeling_utils.py", line 51, in <module>
    from .loss.loss_utils import LOSS_MAPPING
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_utils.py", line 19, in <module>
    from .loss_deformable_detr import DeformableDetrForObjectDetectionLoss, DeformableDetrForSegmentationLoss
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_deformable_detr.py", line 4, in <module>
    from ..image_transforms import center_to_corners_format
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\image_transforms.py", line 56, in <module>
    from torchvision.transforms.v2 import functional as F
ModuleNotFoundError: No module named 'torchvision.transforms.v2'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 935, in _run_pass
    output_model_config = host.run_pass(p, input_model_config, data_root, output_model_path, pass_search_point)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\systems\local.py", line 34, in run_pass
    output_model = the_pass.run(model, data_root, output_model_path, point)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\olive_pass.py", line 377, in run
    output_model = self._run_for_config(model, data_root, config, output_model_path)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 123, in _run_for_config
    return self._convert_model_on_device(model, data_root, config, output_model_path, device, torch_dtype)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 401, in _convert_model_on_device
    converted_onnx_model = OnnxConversion.export_pytorch_model(
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 171, in export_pytorch_model
    if is_peft_model(pytorch_model):
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 55, in is_peft_model
    from peft import PeftModel
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\__init__.py", line 22, in <module>
    from .auto import (
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\auto.py", line 31, in <module>
    from .config import PeftConfig
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\config.py", line 24, in <module>
    from .utils import CONFIG_NAME, PeftType, TaskType
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\__init__.py", line 24, in <module>
    from .other import (
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\other.py", line 34, in <module>
    from .constants import (
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\constants.py", line 16, in <module>
    from transformers import BloomPreTrainedModel
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1806, in __getattr__
    value = getattr(module, name)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1805, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1819, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.bloom.modeling_bloom because of the following error (look up to see its traceback):
No module named 'torchvision.transforms.v2'
[2025-02-12 16:16:43,621] [WARNING] [engine.py:366:run_accelerator] Failed to run Olive on gpu-dml.
Traceback (most recent call last):
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1817, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\models\bloom\modeling_bloom.py", line 38, in <module>
    from ...modeling_utils import PreTrainedModel
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\modeling_utils.py", line 51, in <module>
    from .loss.loss_utils import LOSS_MAPPING
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_utils.py", line 19, in <module>
    from .loss_deformable_detr import DeformableDetrForObjectDetectionLoss, DeformableDetrForSegmentationLoss
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_deformable_detr.py", line 4, in <module>
    from ..image_transforms import center_to_corners_format
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\image_transforms.py", line 56, in <module>
    from torchvision.transforms.v2 import functional as F
ModuleNotFoundError: No module named 'torchvision.transforms.v2'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 345, in run_accelerator
    output_footprint = self.run_no_search(
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 429, in run_no_search
    should_prune, signal, model_ids = self._run_passes(
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 824, in _run_passes
    model_config, model_id = self._run_pass(
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 935, in _run_pass
    output_model_config = host.run_pass(p, input_model_config, data_root, output_model_path, pass_search_point)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\systems\local.py", line 34, in run_pass
    output_model = the_pass.run(model, data_root, output_model_path, point)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\olive_pass.py", line 377, in run
    output_model = self._run_for_config(model, data_root, config, output_model_path)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 123, in _run_for_config
    return self._convert_model_on_device(model, data_root, config, output_model_path, device, torch_dtype)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 401, in _convert_model_on_device
    converted_onnx_model = OnnxConversion.export_pytorch_model(
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 171, in export_pytorch_model
    if is_peft_model(pytorch_model):
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 55, in is_peft_model
    from peft import PeftModel
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\__init__.py", line 22, in <module>
    from .auto import (
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\auto.py", line 31, in <module>
    from .config import PeftConfig
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\config.py", line 24, in <module>
    from .utils import CONFIG_NAME, PeftType, TaskType
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\__init__.py", line 24, in <module>
    from .other import (
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\other.py", line 34, in <module>
    from .constants import (
  File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\constants.py", line 16, in <module>
    from transformers import BloomPreTrainedModel
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1806, in __getattr__
    value = getattr(module, name)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1805, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1819, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.bloom.modeling_bloom because of the following error (look up to see its traceback):
No module named 'torchvision.transforms.v2'
[2025-02-12 16:16:43,622] [INFO] [engine.py:272:run] Running Olive on accelerator: gpu-cpu
[2025-02-12 16:16:43,635] [DEBUG] [engine.py:706:_cache_model] Cached model bd275a67285d20be7f33ff02c8681714 to cache\models\bd275a67285d20be7f33ff02c8681714.json
[2025-02-12 16:16:43,635] [DEBUG] [engine.py:344:run_accelerator] Running Olive in no-search mode ...
[2025-02-12 16:16:43,635] [DEBUG] [engine.py:428:run_no_search] Running ['torch_to_onnx', 'float16_conversion', 'perf_tuning'] with no search ...
[2025-02-12 16:16:43,635] [INFO] [engine.py:862:_run_pass] Running pass torch_to_onnx:OnnxConversion
[2025-02-12 16:16:43,635] [DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[2025-02-12 16:16:43,651] [DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[2025-02-12 16:16:43,652] [DEBUG] [resource_path.py:156:create_resource_path] Resource path C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started\user_script.py is inferred to be of type file.
Using cache found in C:\Users\m1860/.cache\torch\hub\pytorch_vision_v0.10.0
[2025-02-12 16:16:43,857] [DEBUG] [dummy_inputs.py:34:get_dummy_inputs] Using io_config.input_shapes to get dummy inputs
[2025-02-12 16:16:43,857] [DEBUG] [config.py:163:fill_in_params] Missing parameter data_dir for component load_dataset with type dummy_dataset. Set to None.
[2025-02-12 16:16:43,857] [ERROR] [engine.py:947:_run_pass] Pass run failed.
Traceback (most recent call last):
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1817, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\models\bloom\modeling_bloom.py", line 38, in <module>
    from ...modeling_utils import PreTrainedModel
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\modeling_utils.py", line 51, in <module>
    from .loss.loss_utils import LOSS_MAPPING
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_utils.py", line 19, in <module>
    from .loss_deformable_detr import DeformableDetrForObjectDetectionLoss, DeformableDetrForSegmentationLoss
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_deformable_detr.py", line 4, in <module>
    from ..image_transforms import center_to_corners_format
  File "C:\Users\m1860\.conda\envs\igpu-example\lib\site-packages\transformers\image_transforms.py", line 56, in <module>
    from torchvision.transforms.v2 import functional as F
ModuleNotFoundError: No module named 'torchvision.transforms.v2'
Requirement already satisfied: pytz>=2020.1 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from pandas->olive-ai==0.5.0->-r requirements.txt (line 1)) (2025.1)
Requirement already satisfied: tzdata>=2022.7 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from pandas->olive-ai==0.5.0->-r requirements.txt (line 1)) (2025.1)
Requirement already satisfied: annotated-types>=0.6.0 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from pydantic->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.7.0)
Requirement already satisfied: pydantic-core==2.27.2 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from pydantic->olive-ai==0.5.0->-r requirements.txt (line 1)) (2.27.2)
Requirement already satisfied: huggingface-hub<1.0,>=0.24.0 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.28.1)
Requirement already satisfied: regex!=2019.12.17 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (2024.11.6)
Requirement already satisfied: requests in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (2.32.3)
Requirement already satisfied: tokenizers<0.22,>=0.21 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.21.0)
Requirement already satisfied: safetensors>=0.4.1 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.5.2)
Requirement already satisfied: Mako in c:\users\m1860\appdata\roaming\python\python310\site-packages (from alembic>=1.5.0->optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (1.3.8)
Requirement already satisfied: setuptools in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from lightning-utilities>=0.8.0->torchmetrics>=1.0.0->olive-ai==0.5.0->-r requirements.txt (line 1)) (69.5.1)
Requirement already satisfied: intel-openmp==2021.* in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (2021.4.0)
Requirement already satisfied: tbb==2021.* in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from mkl<=2021.4.0,>=2021.1.1->torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (2021.13.1)
Requirement already satisfied: six>=1.5 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from python-dateutil>=2.8.2->pandas->olive-ai==0.5.0->-r requirements.txt (line 1)) (1.17.0)
Requirement already satisfied: greenlet!=0.4.17 in c:\users\m1860\appdata\roaming\python\python310\site-packages (from sqlalchemy>=1.4.2->optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.1.1)
Requirement already satisfied: colorama in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from tqdm->optuna->olive-ai==0.5.0->-r requirements.txt (line 1)) (0.4.6)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from jinja2->torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.0.2)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from requests->transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.4.1)
Requirement already satisfied: idna<4,>=2.5 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from requests->transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from requests->transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (2.3.0)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from requests->transformers->olive-ai==0.5.0->-r requirements.txt (line 1)) (2025.1.31)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from sympy->torch->olive-ai==0.5.0->-r requirements.txt (line 1)) (1.3.0)
Using cached olive_ai-0.5.0-py3-none-any.whl (489 kB)
Using cached protobuf-3.20.3-cp310-cp310-win_amd64.whl (904 kB)
Installing collected packages: protobuf, olive-ai
Attempting uninstall: protobuf
Found existing installation: protobuf 5.29.3
Uninstalling protobuf-5.29.3:
Successfully uninstalled protobuf-5.29.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
quark 0.6.0 requires onnxruntime<1.20.0,>=1.17.0, which is not installed.
vai-q-onnx 1.19.0 requires onnxruntime<1.20.0,>=1.17.0, which is not installed.
Successfully installed olive-ai-0.5.0 protobuf-3.20.3
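The two "requires onnxruntime ... which is not installed" warnings above come from quark 0.6.0 and vai-q-onnx 1.19.0, which declare an onnxruntime dependency that is not satisfied in this environment. A stdlib-only sketch (run inside the igpu-example environment) can list which installed distributions depend on onnxruntime; output will vary by machine:

```python
# List installed distributions whose requirements mention onnxruntime,
# using only the standard library (Python 3.8+).
from importlib.metadata import distributions

def requirers_of(package):
    """Names of installed distributions that depend on `package`."""
    target = package.lower()
    hits = set()
    for dist in distributions():
        for req in dist.requires or []:
            # Requirement strings start with the dependency name, e.g.
            # "onnxruntime<1.20.0,>=1.17.0". The prefix check also matches
            # variants such as onnxruntime-gpu, which is fine for triage.
            if req.lower().startswith(target):
                hits.add(dist.metadata["Name"] or "")
                break
    return sorted(hits)

print(requirers_of("onnxruntime"))
```

In the environment shown in this log, the output would be expected to include quark and vai-q-onnx.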
(igpu-example) C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started>python -m olive.workflows.run --config resnet50_config.json --setup
[2025-02-12 16:15:49,059] [INFO] [run.py:106:dependency_setup] The following packages are required in the local environment: ['onnxconverter-common', 'psutil', 'onnxruntime-gpu']
[2025-02-12 16:15:49,059] [INFO] [run.py:118:dependency_setup] psutil is already installed.
[2025-02-12 16:15:49,127] [WARNING] [run.py:285:check_local_ort_installation] There are one or more onnxruntime packages installed in your environment!
The setup process is stopped to avoid potential conflicts. Please run the following commands manually:
Uninstall all existing onnxruntime packages: 'C:\Users\m1860.conda\envs\igpu-example\python.exe -m pip uninstall -y onnxruntime-genai onnxruntime_extensions onnxruntime-vitisai'
Install onnxruntime-gpu: 'C:\Users\m1860.conda\envs\igpu-example\python.exe -m pip install onnxruntime-gpu'
You can also instead install the corresponding nightly version following the instructions at https://onnxruntime.ai/docs/install/#inference-install-table-for-all-languages
[2025-02-12 16:15:49,127] [INFO] [run.py:125:dependency_setup] Running: C:\Users\m1860.conda\envs\igpu-example\python.exe -m pip install onnxconverter-common
Collecting onnxconverter-common
Using cached onnxconverter_common-1.14.0-py2.py3-none-any.whl.metadata (4.2 kB)
Requirement already satisfied: numpy in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from onnxconverter-common) (1.26.4)
Requirement already satisfied: onnx in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from onnxconverter-common) (1.16.1)
Requirement already satisfied: packaging in c:\users\m1860.conda\envs\igpu-example\lib\site-packages (from onnxconverter-common) (24.2)
Collecting protobuf==3.20.2 (from onnxconverter-common)
Using cached protobuf-3.20.2-cp310-cp310-win_amd64.whl.metadata (698 bytes)
Using cached onnxconverter_common-1.14.0-py2.py3-none-any.whl (84 kB)
Using cached protobuf-3.20.2-cp310-cp310-win_amd64.whl (904 kB)
Installing collected packages: protobuf, onnxconverter-common
Attempting uninstall: protobuf
Found existing installation: protobuf 3.20.3
Uninstalling protobuf-3.20.3:
Successfully uninstalled protobuf-3.20.3
WARNING: Failed to remove contents in a temporary directory 'C:\Users\m1860.conda\envs\igpu-example\Lib\site-packages\google\~rotobuf'.
You can safely remove it manually.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
quark 0.6.0 requires onnxruntime<1.20.0,>=1.17.0, which is not installed.
vai-q-onnx 1.19.0 requires onnxruntime<1.20.0,>=1.17.0, which is not installed.
Successfully installed onnxconverter-common-1.14.0 protobuf-3.20.2
[2025-02-12 16:15:56,723] [INFO] [run.py:127:dependency_setup] Successfully installed ['onnxconverter-common'].
(igpu-example) C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started>python -m olive.workflows.run --config resnet50_config.json --setup
[2025-02-12 16:16:07,104] [INFO] [run.py:106:dependency_setup] The following packages are required in the local environment: ['onnxconverter-common', 'psutil', 'onnxruntime-gpu']
[2025-02-12 16:16:07,152] [WARNING] [run.py:285:check_local_ort_installation] There are one or more onnxruntime packages installed in your environment!
The setup process is stopped to avoid potential conflicts. Please run the following commands manually:
Uninstall all existing onnxruntime packages: 'C:\Users\m1860.conda\envs\igpu-example\python.exe -m pip uninstall -y onnxruntime-genai onnxruntime_extensions onnxruntime-vitisai'
Install onnxruntime-gpu: 'C:\Users\m1860.conda\envs\igpu-example\python.exe -m pip install onnxruntime-gpu'
You can also instead install the corresponding nightly version following the instructions at https://onnxruntime.ai/docs/install/#inference-install-table-for-all-languages
[2025-02-12 16:16:07,152] [INFO] [run.py:118:dependency_setup] psutil is already installed.
[2025-02-12 16:16:07,152] [INFO] [run.py:118:dependency_setup] onnxconverter-common is already installed.
(igpu-example) C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started>python -m olive.workflows.run --config resnet50_config.json
[2025-02-12 16:16:41,340] [DEBUG] [accelerator.py:155:create_accelerators] Initial execution providers: ['VitisAIExecutionProvider', 'DmlExecutionProvider', 'CPUExecutionProvider']
[2025-02-12 16:16:41,345] [DEBUG] [accelerator.py:168:create_accelerators] Initial accelerators: ['gpu']
[2025-02-12 16:16:41,345] [DEBUG] [accelerator.py:193:create_accelerators] Supported execution providers for device gpu: ['DmlExecutionProvider', 'CPUExecutionProvider']
[2025-02-12 16:16:41,345] [INFO] [accelerator.py:208:create_accelerators] Running workflow on accelerator specs: gpu-dml,gpu-cpu
[2025-02-12 16:16:41,345] [WARNING] [accelerator.py:210:create_accelerators] The following execution provider is not supported: VitisAIExecutionProvider. Please consider installing an onnxruntime build that contains the relevant execution providers.
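Olive drops VitisAIExecutionProvider here because the onnxruntime build installed in this environment does not ship it, leaving only the DML and CPU providers. As a quick check, the providers compiled into the installed build can be listed directly with the standard `onnxruntime.get_available_providers()` API:

```python
# Print the execution providers available in the installed onnxruntime build.
# The result depends on which onnxruntime package/build is installed.
try:
    import onnxruntime as ort
    print(ort.get_available_providers())
except ImportError:
    print("onnxruntime is not installed in this environment.")
```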
[2025-02-12 16:16:41,347] [INFO] [engine.py:116:initialize] Using cache directory: cache
[2025-02-12 16:16:41,348] [INFO] [engine.py:272:run] Running Olive on accelerator: gpu-dml
[2025-02-12 16:16:41,348] [DEBUG] [engine.py:1071:create_system] create native OliveSystem SystemType.Local
[2025-02-12 16:16:41,349] [DEBUG] [engine.py:1071:create_system] create native OliveSystem SystemType.Local
[2025-02-12 16:16:41,373] [DEBUG] [engine.py:706:_cache_model] Cached model bd275a67285d20be7f33ff02c8681714 to cache\models\bd275a67285d20be7f33ff02c8681714.json
[2025-02-12 16:16:41,373] [DEBUG] [engine.py:344:run_accelerator] Running Olive in no-search mode ...
[2025-02-12 16:16:41,373] [DEBUG] [engine.py:428:run_no_search] Running ['torch_to_onnx', 'float16_conversion', 'perf_tuning'] with no search ...
[2025-02-12 16:16:41,373] [INFO] [engine.py:862:_run_pass] Running pass torch_to_onnx:OnnxConversion
[2025-02-12 16:16:41,375] [DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[2025-02-12 16:16:41,375] [DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[2025-02-12 16:16:41,379] [DEBUG] [resource_path.py:156:create_resource_path] Resource path C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started\user_script.py is inferred to be of type file.
Using cache found in C:\Users\m1860/.cache\torch\hub\pytorch_vision_v0.10.0
C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\onnxscript\converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
[2025-02-12 16:16:43,461] [DEBUG] [dummy_inputs.py:34:get_dummy_inputs] Using io_config.input_shapes to get dummy inputs
[2025-02-12 16:16:43,461] [DEBUG] [config.py:163:fill_in_params] Missing parameter data_dir for component load_dataset with type dummy_dataset. Set to None.
[2025-02-12 16:16:43,483] [ERROR] [engine.py:947:_run_pass] Pass run failed.
Traceback (most recent call last):
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1817, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "C:\Users\m1860.conda\envs\igpu-example\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\models\bloom\modeling_bloom.py", line 38, in <module>
from ...modeling_utils import PreTrainedModel
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\modeling_utils.py", line 51, in <module>
from .loss.loss_utils import LOSS_MAPPING
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_utils.py", line 19, in <module>
from .loss_deformable_detr import DeformableDetrForObjectDetectionLoss, DeformableDetrForSegmentationLoss
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_deformable_detr.py", line 4, in <module>
from ..image_transforms import center_to_corners_format
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\image_transforms.py", line 56, in <module>
from torchvision.transforms.v2 import functional as F
ModuleNotFoundError: No module named 'torchvision.transforms.v2'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 935, in _run_pass
output_model_config = host.run_pass(p, input_model_config, data_root, output_model_path, pass_search_point)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\systems\local.py", line 34, in run_pass
output_model = the_pass.run(model, data_root, output_model_path, point)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\olive_pass.py", line 377, in run
output_model = self._run_for_config(model, data_root, config, output_model_path)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 123, in _run_for_config
return self._convert_model_on_device(model, data_root, config, output_model_path, device, torch_dtype)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 401, in _convert_model_on_device
converted_onnx_model = OnnxConversion.export_pytorch_model(
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 171, in export_pytorch_model
if is_peft_model(pytorch_model):
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 55, in is_peft_model
from peft import PeftModel
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\__init__.py", line 22, in <module>
from .auto import (
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\auto.py", line 31, in <module>
from .config import PeftConfig
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\config.py", line 24, in <module>
from .utils import CONFIG_NAME, PeftType, TaskType
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\__init__.py", line 24, in <module>
from .other import (
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\other.py", line 34, in <module>
from .constants import (
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\constants.py", line 16, in <module>
from transformers import BloomPreTrainedModel
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1806, in __getattr__
value = getattr(module, name)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1805, in __getattr__
module = self._get_module(self._class_to_module[name])
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1819, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.bloom.modeling_bloom because of the following error (look up to see its traceback):
No module named 'torchvision.transforms.v2'
[2025-02-12 16:16:43,621] [WARNING] [engine.py:366:run_accelerator] Failed to run Olive on gpu-dml.
Traceback (most recent call last):
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1817, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "C:\Users\m1860.conda\envs\igpu-example\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\models\bloom\modeling_bloom.py", line 38, in <module>
from ...modeling_utils import PreTrainedModel
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\modeling_utils.py", line 51, in <module>
from .loss.loss_utils import LOSS_MAPPING
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_utils.py", line 19, in <module>
from .loss_deformable_detr import DeformableDetrForObjectDetectionLoss, DeformableDetrForSegmentationLoss
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_deformable_detr.py", line 4, in <module>
from ..image_transforms import center_to_corners_format
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\image_transforms.py", line 56, in <module>
from torchvision.transforms.v2 import functional as F
ModuleNotFoundError: No module named 'torchvision.transforms.v2'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 345, in run_accelerator
output_footprint = self.run_no_search(
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 429, in run_no_search
should_prune, signal, model_ids = self._run_passes(
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 824, in _run_passes
model_config, model_id = self._run_pass(
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\engine\engine.py", line 935, in _run_pass
output_model_config = host.run_pass(p, input_model_config, data_root, output_model_path, pass_search_point)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\systems\local.py", line 34, in run_pass
output_model = the_pass.run(model, data_root, output_model_path, point)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\olive_pass.py", line 377, in run
output_model = self._run_for_config(model, data_root, config, output_model_path)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 123, in _run_for_config
return self._convert_model_on_device(model, data_root, config, output_model_path, device, torch_dtype)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 401, in _convert_model_on_device
converted_onnx_model = OnnxConversion.export_pytorch_model(
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 171, in export_pytorch_model
if is_peft_model(pytorch_model):
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\olive\passes\onnx\conversion.py", line 55, in is_peft_model
from peft import PeftModel
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\__init__.py", line 22, in <module>
from .auto import (
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\auto.py", line 31, in <module>
from .config import PeftConfig
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\config.py", line 24, in <module>
from .utils import CONFIG_NAME, PeftType, TaskType
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\__init__.py", line 24, in <module>
from .other import (
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\other.py", line 34, in <module>
from .constants import (
File "C:\Users\m1860\AppData\Roaming\Python\Python310\site-packages\peft\utils\constants.py", line 16, in <module>
from transformers import BloomPreTrainedModel
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1806, in __getattr__
value = getattr(module, name)
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1805, in __getattr__
module = self._get_module(self._class_to_module[name])
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1819, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.bloom.modeling_bloom because of the following error (look up to see its traceback):
No module named 'torchvision.transforms.v2'
[2025-02-12 16:16:43,622] [INFO] [engine.py:272:run] Running Olive on accelerator: gpu-cpu
[2025-02-12 16:16:43,635] [DEBUG] [engine.py:706:_cache_model] Cached model bd275a67285d20be7f33ff02c8681714 to cache\models\bd275a67285d20be7f33ff02c8681714.json
[2025-02-12 16:16:43,635] [DEBUG] [engine.py:344:run_accelerator] Running Olive in no-search mode ...
[2025-02-12 16:16:43,635] [DEBUG] [engine.py:428:run_no_search] Running ['torch_to_onnx', 'float16_conversion', 'perf_tuning'] with no search ...
[2025-02-12 16:16:43,635] [INFO] [engine.py:862:_run_pass] Running pass torch_to_onnx:OnnxConversion
[2025-02-12 16:16:43,635] [DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[2025-02-12 16:16:43,651] [DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[2025-02-12 16:16:43,652] [DEBUG] [resource_path.py:156:create_resource_path] Resource path C:\Users\m1860\tmp\RyzenAI-SW\example\iGPU\getting_started\user_script.py is inferred to be of type file.
Using cache found in C:\Users\m1860/.cache\torch\hub\pytorch_vision_v0.10.0
[2025-02-12 16:16:43,857] [DEBUG] [dummy_inputs.py:34:get_dummy_inputs] Using io_config.input_shapes to get dummy inputs
[2025-02-12 16:16:43,857] [DEBUG] [config.py:163:fill_in_params] Missing parameter data_dir for component load_dataset with type dummy_dataset. Set to None.
[2025-02-12 16:16:43,857] [ERROR] [engine.py:947:_run_pass] Pass run failed.
Traceback (most recent call last):
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\utils\import_utils.py", line 1817, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "C:\Users\m1860.conda\envs\igpu-example\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\models\bloom\modeling_bloom.py", line 38, in <module>
from ...modeling_utils import PreTrainedModel
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\modeling_utils.py", line 51, in <module>
from .loss.loss_utils import LOSS_MAPPING
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_utils.py", line 19, in <module>
from .loss_deformable_detr import DeformableDetrForObjectDetectionLoss, DeformableDetrForSegmentationLoss
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\loss\loss_deformable_detr.py", line 4, in <module>
from ..image_transforms import center_to_corners_format
File "C:\Users\m1860.conda\envs\igpu-example\lib\site-packages\transformers\image_transforms.py", line 56, in <module>
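Both accelerator runs (gpu-dml and gpu-cpu) fail with the same root cause: transformers 4.48.3 imports `torchvision.transforms.v2`, a module introduced in torchvision 0.15, so the torchvision installed in this environment is evidently older. A minimal check follows; the torch 2.3.1 / torchvision 0.18.1 pairing mentioned in the fallback is an assumption to verify against the official compatibility matrix before installing:

```python
# Check whether the installed torchvision is new enough to provide
# transforms.v2 (introduced in torchvision 0.15).
from importlib.metadata import version, PackageNotFoundError

def has_transforms_v2(tv_version):
    """True if a torchvision version string is >= 0.15 (ignores local tags like +cpu)."""
    major, minor = (int(p) for p in tv_version.split("+")[0].split(".")[:2])
    return (major, minor) >= (0, 15)

try:
    tv = version("torchvision")
    state = "provides" if has_transforms_v2(tv) else "predates"
    print(f"torchvision {tv} {state} transforms.v2")
except PackageNotFoundError:
    # Assumed pairing for the torch 2.3.1+cpu seen in the log; verify first.
    print("torchvision not installed; consider: pip install torchvision==0.18.1")
```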