
[SOLVED] ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled #108

Closed
3 tasks done
cguykim opened this issue Sep 21, 2023 · 6 comments

Comments


cguykim commented Sep 21, 2023

First, confirm

  • I have read the instructions carefully
  • I have searched the existing issues
  • I have updated the extension to the latest version

What happened?

Generation always fails with this one specific error.

Steps to reproduce the problem

  1. Generate any image with ReActor enabled
  2. Read the error

Sysinfo

RTX 3080 12G
automatic1111 (1.6.0)

Relevant console log

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
current transparent-background 1.2.4
Launching Web UI with arguments: --xformers --no-half-vae --autolaunch
Stop Motion CN - Running Preload
Set Gradio Queue: True
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: C:\sd.webui\webui\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 23.9.3, num models: 9
2023-09-21 15:29:09,191 - ControlNet - INFO - ControlNet v1.1.410
ControlNet preprocessor location: C:\sd.webui\webui\extensions\sd-webui-controlnet\annotator\downloads
2023-09-21 15:29:09,378 - ControlNet - INFO - ControlNet v1.1.410
[Vec. CC] Style Sheet Loaded...
Loading weights [327c0c5702] from C:\sd.webui\webui\models\Stable-diffusion\3d core model.safetensors
Creating model from config: C:\sd.webui\webui\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 23.2s (prepare environment: 6.1s, import torch: 4.4s, import gradio: 1.8s, setup paths: 1.0s, initialize shared: 0.3s, other imports: 1.0s, setup codeformer: 0.3s, load scripts: 5.8s, create ui: 1.9s, gradio launch: 0.6s).
Loading VAE weights specified in settings: C:\sd.webui\webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: xformers... done.
Model loaded in 9.8s (load weights from disk: 1.8s, create model: 0.7s, apply weights to model: 4.2s, apply half(): 1.3s, load VAE: 0.6s, calculate empty prompt: 1.0s).
Checking ReActor requirements... Ok
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
current transparent-background 1.2.4
Checking ReActor requirements... Ok
Launching Web UI with arguments: --xformers --no-half-vae --autolaunch
Stop Motion CN - Running Preload
Set Gradio Queue: True
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: C:\sd.webui\webui\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 23.9.3, num models: 9
2023-09-21 15:31:36,007 - ControlNet - INFO - ControlNet v1.1.410
ControlNet preprocessor location: C:\sd.webui\webui\extensions\sd-webui-controlnet\annotator\downloads
2023-09-21 15:31:36,141 - ControlNet - INFO - ControlNet v1.1.410
15:31:36 - ReActor - STATUS - Running v0.4.2-b2
[Vec. CC] Style Sheet Loaded...
Loading weights [327c0c5702] from C:\sd.webui\webui\models\Stable-diffusion\3d core model.safetensors
Creating model from config: C:\sd.webui\webui\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 23.2s (prepare environment: 6.4s, import torch: 4.2s, import gradio: 1.6s, setup paths: 0.9s, initialize shared: 0.2s, other imports: 0.9s, setup codeformer: 0.3s, load scripts: 6.3s, create ui: 2.0s, gradio launch: 0.4s).
Loading VAE weights specified in settings: C:\sd.webui\webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying attention optimization: xformers... done.
Model loaded in 8.9s (load weights from disk: 1.9s, create model: 0.7s, apply weights to model: 3.7s, apply half(): 1.2s, load VAE: 0.3s, calculate empty prompt: 1.0s).
100%|██████████████████████████████████████████████████████████████████████████████████| 45/45 [00:07<00:00,  6.34it/s]
15:32:24 - ReActor - STATUS - Working: source face index [0], target face index [0]████| 45/45 [00:06<00:00,  7.04it/s]
15:32:24 - ReActor - STATUS - Detecting Source Face, Index = 0
*** Error running postprocess_image: C:\sd.webui\webui\extensions\sd-webui-reactor\scripts\reactor_faceswap.py
    Traceback (most recent call last):
      File "C:\sd.webui\webui\modules\scripts.py", line 675, in postprocess_image
        script.postprocess_image(p, pp, *script_args)
      File "C:\sd.webui\webui\extensions\sd-webui-reactor\scripts\reactor_faceswap.py", line 362, in postprocess_image
        result, output, swapped = swap_face(
      File "C:\sd.webui\webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 279, in swap_face
        source_face, wrong_gender, source_age, source_gender = get_face_single(source_img, face_index=source_faces_index[0], gender_source=gender_source)
      File "C:\sd.webui\webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 198, in get_face_single
        face_analyser = copy.deepcopy(getAnalysisModel())
      File "copy.py", line 172, in deepcopy
      File "copy.py", line 271, in _reconstruct
      File "copy.py", line 146, in deepcopy
      File "copy.py", line 231, in _deepcopy_dict
      File "copy.py", line 146, in deepcopy
      File "copy.py", line 231, in _deepcopy_dict
      File "copy.py", line 172, in deepcopy
      File "copy.py", line 271, in _reconstruct
      File "copy.py", line 146, in deepcopy
      File "copy.py", line 231, in _deepcopy_dict
      File "copy.py", line 172, in deepcopy
      File "copy.py", line 273, in _reconstruct
      File "C:\sd.webui\system\python\lib\site-packages\insightface\model_zoo\model_zoo.py", line 33, in __setstate__
        self.__init__(model_path)
      File "C:\sd.webui\system\python\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
        super().__init__(model_path, **kwargs)
      File "C:\sd.webui\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in __init__
        raise e
      File "C:\sd.webui\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "C:\sd.webui\system\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 451, in _create_inference_session
        raise ValueError(
    ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['AzureExecutionProvider', 'CPUExecutionProvider'], ...)

---
Total progress: 100%|██████████████████████████████████████████████████████████████████| 45/45 [00:08<00:00,  5.18it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 45/45 [00:08<00:00,  7.04it/s]

Additional information

This error is slightly different from the one in another user's post below.
(If you look closely, you will see the difference: the traceback goes through deepcopy and _reconstruct.)
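
For context, the ValueError above spells out its own fix: since onnxruntime 1.9 (and strictly enforced in newer builds such as 1.16.0), an InferenceSession must be given an explicit providers list. A minimal sketch of what the message is asking for, with a placeholder model path that is not a file from this setup:

import onnxruntime as ort

# Placeholder path; any valid .onnx model works for the illustration.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CPUExecutionProvider"],  # must now be passed explicitly
)
print(session.get_providers())  # providers actually selected for this session

The traceback shows the call that omits this argument lives inside insightface's model_zoo (copy.deepcopy re-runs __init__ with only model_path), which is why fixing the installed packages, rather than user code, is the practical way out.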

cguykim added the labels "bug (Something isn't working)" and "new" on Sep 21, 2023

cguykim commented Sep 21, 2023

I followed your answer from the posts below, and it works!

python -m pip install -U pip
pip uninstall -y onnx onnxruntime onnxruntime-gpu onnxruntime-silicon
pip uninstall -y onnxruntime-extensions   <- I didn't have this one installed, but I tried it anyway
pip install onnx==1.14.0 onnxruntime==1.15.0

Thanks!
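
A quick sanity check after reinstalling (an optional extra, not part of the original reply) is to query the versions and providers from Python:

import onnx
import onnxruntime as ort

print(onnx.__version__)               # expected: 1.14.0
print(ort.__version__)                # expected: 1.15.0
print(ort.get_available_providers())  # e.g. ['CPUExecutionProvider'] on a CPU-only build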

Gourieff added the labels "✔ solved" and "⛔ dependencies conflict", and removed "bug (Something isn't working)" and "new", on Sep 21, 2023
Gourieff changed the title from "Value Error ['AzureExecutionProvider', 'CPUExecutionProvider'] deepcopy, reconstruct" to "[SOLVED] Value Error ['AzureExecutionProvider', 'CPUExecutionProvider'] deepcopy, reconstruct" on Sep 21, 2023
Gourieff (Owner) commented:

> I followed your answer from the posts below, and it works!
>
> python -m pip install -U pip
> pip uninstall -y onnx onnxruntime onnxruntime-gpu onnxruntime-silicon
> pip uninstall -y onnxruntime-extensions   <- I didn't have this one installed, but I tried it anyway
> pip install onnx==1.14.0 onnxruntime==1.15.0
>
> Thanks!

You're welcome!

Gourieff pinned this issue on Sep 21, 2023
Gourieff changed the title from "[SOLVED] Value Error ['AzureExecutionProvider', 'CPUExecutionProvider'] deepcopy, reconstruct" to "[SOLVED] ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled" on Sep 21, 2023
Gourieff (Owner) commented:

So, this is the source of the 'AzureExecutionProvider' issue: microsoft/onnxruntime#17631
Fixed in the latest commit 41d5b1e: the ORT requirement is now strictly pinned to 1.15.1, until Microsoft fixes this in the 1.16.1 patch.
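
For anyone pinning manually instead of waiting for the extension update, a strict pin is just an exact-version install. Illustrative only; the python.exe path assumes the default sd.webui layout visible in the logs above:

C:\sd.webui\system\python\python.exe -m pip install onnxruntime==1.15.1

or, as a line in a requirements file:

onnxruntime==1.15.1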

alenknight commented:

> I followed your answer from the posts below, and it works!
>
> python -m pip install -U pip
> pip uninstall -y onnx onnxruntime onnxruntime-gpu onnxruntime-silicon
> pip uninstall -y onnxruntime-extensions   <- I didn't have this one installed, but I tried it anyway
> pip install onnx==1.14.0 onnxruntime==1.15.0
>
> Thanks!

I tried this and now ComfyUI won't load up... any idea what broke?

C:\AI\ComfyUI>.\python_embeded\python.exe -s ComfyUI\main.py --listen --windows-standalone-build
** ComfyUI start up time: 2023-09-24 16:12:17.313024

Prestartup times for custom nodes:
   0.0 seconds: C:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 24576 MB, total RAM 32704 MB
xformers version: 0.0.21
Traceback (most recent call last):
  File "C:\AI\ComfyUI\ComfyUI\comfy\model_management.py", line 211, in <module>
    import accelerate
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\accelerate\__init__.py", line 3, in <module>
    from .accelerator import Accelerator
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\accelerate\accelerator.py", line 35, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\accelerate\checkpointing.py", line 24, in <module>
    from .utils import (
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\accelerate\utils\__init__.py", line 142, in <module>
    from .megatron_lm import (
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\accelerate\utils\megatron_lm.py", line 32, in <module>
    from transformers.modeling_outputs import (
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\__init__.py", line 30, in <module>
    from . import dependency_versions_check
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dependency_versions_check.py", line 17, in <module>
    from .utils.versions import require_version, require_version_core
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\utils\__init__.py", line 34, in <module>
    from .generic import (
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\utils\generic.py", line 33, in <module>
    import tensorflow as tf
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\python\__init__.py", line 37, in <module>
    from tensorflow.python.eager import context
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\python\eager\context.py", line 29, in <module>
    from tensorflow.core.framework import function_pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\function_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py", line 36, in <module>
    _descriptor.FieldDescriptor(
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\google\protobuf\descriptor.py", line 561, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

ERROR: LOW VRAM MODE NEEDS accelerate.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using xformers cross attention
Traceback (most recent call last):
  File "C:\AI\ComfyUI\ComfyUI\main.py", line 72, in <module>
    import execution
  File "C:\AI\ComfyUI\ComfyUI\execution.py", line 11, in <module>
    import nodes
  File "C:\AI\ComfyUI\ComfyUI\nodes.py", line 20, in <module>
    import comfy.diffusers_load
  File "C:\AI\ComfyUI\ComfyUI\comfy\diffusers_load.py", line 4, in <module>
    import comfy.sd
  File "C:\AI\ComfyUI\ComfyUI\comfy\sd.py", line 12, in <module>
    from . import clip_vision
  File "C:\AI\ComfyUI\ComfyUI\comfy\clip_vision.py", line 1, in <module>
    from transformers import CLIPVisionModelWithProjection, CLIPVisionConfig, CLIPImageProcessor, modeling_utils
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\__init__.py", line 30, in <module>
    from . import dependency_versions_check
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\dependency_versions_check.py", line 36, in <module>
    from .utils import is_tokenizers_available
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\utils\__init__.py", line 34, in <module>
    from .generic import (
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\transformers\utils\generic.py", line 33, in <module>
    import tensorflow as tf
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\python\__init__.py", line 37, in <module>
    from tensorflow.python.eager import context
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\python\eager\context.py", line 29, in <module>
    from tensorflow.core.framework import function_pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\function_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py", line 36, in <module>
    _descriptor.FieldDescriptor(
  File "C:\AI\ComfyUI\python_embeded\lib\site-packages\google\protobuf\descriptor.py", line 561, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

C:\AI\ComfyUI>pause
Press any key to continue . . .


Gourieff commented Oct 7, 2023

@alenknight

> 1. Downgrade the protobuf package to 3.20.x or lower.

So try that: change protobuf's version to 3.20.3 with python_embeded\python.exe -m pip install protobuf==3.20.3 (and don't forget to close ComfyUI before running it).
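
If the downgrade can't be applied right away, the error message's second workaround is an environment variable. Both options, sketched for the embedded Python (the pure-Python parser is noticeably slower, as the message itself warns):

python_embeded\python.exe -m pip install protobuf==3.20.3
python_embeded\python.exe -c "import google.protobuf; print(google.protobuf.__version__)"

rem Alternative, per the error message (slower, pure-Python parsing):
set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python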

Gourieff closed this as completed on Oct 7, 2023
Gourieff unpinned this issue on Oct 24, 2023
ruslawik commented:

I have the same error, but I can't just remove onnxruntime-gpu==1.16.0, because other custom_nodes depend on it. For reference, I have:
onnx==1.14.0
onnxruntime==1.15.1
protobuf==3.20.3
Please help.
