Intel Arc Graphics Thread #476
-
UPDATE: Until Intel fixes this upstream, the code needs to be modified like this to avoid errors (most likely because `match`/`case` requires Python 3.10+, which the Intel-provided Python predates):

```diff
- match operation:
-     case "multiply":
-         output[top:bottom, left:right] = destination_portion * source_portion
-     case "add":
-         output[top:bottom, left:right] = destination_portion + source_portion
-     case "subtract":
-         output[top:bottom, left:right] = destination_portion - source_portion
+ if operation == "multiply":
+     output[top:bottom, left:right] = destination_portion * source_portion
+ elif operation == "add":
+     output[top:bottom, left:right] = destination_portion + source_portion
+ elif operation == "subtract":
+     output[top:bottom, left:right] = destination_portion - source_portion
```

That said, not modifying it doesn't seem to affect normal usage.
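For what it's worth, a third option that avoids `match`/`case` entirely is to dispatch through a dict of operators. A minimal sketch, reusing the variable names from the diff above:

```python
import operator

# Maps the operation name to the corresponding binary operator;
# works on any Python 3.x, unlike match/case (3.10+ only).
blend_ops = {
    "multiply": operator.mul,
    "add": operator.add,
    "subtract": operator.sub,
}
output[top:bottom, left:right] = blend_ops[operation](destination_portion, source_portion)
```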
-
About the Karras scheduler: currently, using it causes a -996 "uses-fp64-math" error, which may be fixed in the next IPEX release (intel/intel-extension-for-pytorch#285). Until then, you can specify the CPU as the sigma device:

```diff
# comfy/samplers.py
- sigmas = k_diffusion_sampling.get_sigmas_karras(n=steps, sigma_min=self.sigma_min, sigma_max=self.sigma_max, device=self.device)
+ sigmas = k_diffusion_sampling.get_sigmas_karras(n=steps, sigma_min=self.sigma_min, sigma_max=self.sigma_max, device="cpu")
```
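My reading of why this works (not confirmed by Intel): the schedule construction involves math that lowers to fp64 kernels, which Arc doesn't support, so building the tensor on the CPU sidesteps the XPU entirely. If downstream code expects the sigmas on the GPU, a sketch along these lines should also work:

```python
# Build the Karras noise schedule on the CPU (avoids fp64 kernels on the XPU),
# then move the finished float32 tensor to the device afterwards.
sigmas = k_diffusion_sampling.get_sigmas_karras(
    n=steps, sigma_min=self.sigma_min, sigma_max=self.sigma_max, device="cpu"
).to(self.device)
```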
-
Is it Linux-only, or can you use this on a Win10 machine?
-
So again, sharing the good news: an XPU version of their PyTorch extension with PyTorch 2.0 support. Hats off to ComfyUI for being the only Stable Diffusion UI able to use it at the moment, but there are a bunch of caveats with running Arc and Stable Diffusion right now, from the research I have done. As of the time of posting:

1.) Setup can still be complicated in some respects and can randomly not work, because Intel only really verifies against enterprise Linux distributions and Ubuntu. My base Fedora Linux 38 install has the RPMs installed, but I get a cryptic error regarding the runtime.

2.) From what I have read at intel/intel-extension-for-pytorch#398, there is no native Windows version of the wheels.

3.) Intel Arc right now has a huge problem with allocating more than 4GB of VRAM in IPEX, even though the card has more VRAM in the case of the A750/A770, according to intel/intel-extension-for-pytorch#325. This seems to have mitigations elsewhere in Intel's oneAPI stack, like their OpenCL compute runtime, where you can work your code around the restriction via some flags and code changes according to intel/compute-runtime#627. No workarounds here, it seems, until someone gets Intel to fix it.

4.) Because of that, experimenting with other Stable Diffusion UIs, it seems like Arc will occasionally run out of VRAM and throw an error if you decide not to use lower-VRAM flags.

5.) Intel has an equivalent to

I'm sure Intel GPU support won't be a big priority for the project at the moment, given the big blocker at this point is the 4GB VRAM limit, which needs to be fixed by Intel. That being said, things should be working a lot better than they are at the moment: maybe not average-user ready, but ready for any mildly technical person. I really want to play with ComfyUI more, but I really don't want to restart the application server after every few images I generate, or roll the dice on whether SDXL will actually finish a workflow. But the Arc cards are strong: I managed to equal the Nvidia RTX 3070 Ti in certain Stable Diffusion workflows with my 16 GB Intel Arc A770, so I look forward to the future when things are more mature and all the stars are aligned. Also, I'm not sure what issues need to be opened here, but there are potentially 2-3 of them that could be made from my report.

Edit: Added a caveat I forgot to mention, filled in some information, and fixed some typos.
-
So this took me a bit of time, but I have the Docker image I used published here, so hopefully someone can find it useful. Some things to note that I've found while poking and experimenting:

1.) A lot of the issues mentioned before by @kwaa months ago are gone, like the Karras scheduler not working (it works now, even with the new dpmpp_3 samplers), or the noise issues when not using split attention, which are gone for the most part unless your graphics driver has crashed too many times (a restart of the computer fixes that).

Anyways, hope people have good success with it like I did. I might try and see why ipexrun is failing, but for now I am going to take a break.

Edit: Added a caveat I forgot to mention and fixed some typos.
-
In the event anyone else missed it: I seem to be missing the file that gets sourced from /opt/intel, too... I may edit this post once I figure it out.
-
Well, short story: I spent a whole day installing Arch Linux on Windows 10 WSL (WSL2 in my case). After all the procedures, I got stuck:

The installer reaches 87% and starts rolling back all changes, because intel-oneapi-basekit does not contain the library libtbb.so.12 that is needed to install the oneAPI AI Analytics Toolkit v2023. The package that contains the needed library is intel-oneapi-compiler-shared-runtime, but it conflicts with intel-oneapi-basekit. Because the install procedure can't finish with intel-oneapi-basekit, I installed intel-oneapi-compiler-shared-runtime. ComfyUI runs: [{user}@pc ArchLinux]$

But if I try to generate an image, I get the following error:

As you can see, I tried params like --force-fp16 --bf16-vae --lowvram --use-split-cross-attention --highvram, but they have no effect. So the models load, but KSampler crashes. And that's my dead end, because I have no idea what's going on, neither in ComfyUI itself nor, even less, in Arch Linux. But I suppose this can serve as a kind of report.
-
I don't use Windows, but all the pieces should now be together to run ComfyUI on Windows without any horrible downsides like no AOT compilation or missing packages, since a new unofficial Intel Extension for PyTorch package has been released that bundles everything together without needing any installation of external dependencies. It remains to be seen whether there will ever be an official package that does this, so this package is the best chance anyone has at actually using Arc on Windows natively. According to reports, it is a bit faster than WSL2 but slower than native Linux. The rough steps should be roughly the same as the process outlined in the opening post for Linux, minus platform-specific things.

1.) Make sure you install an Intel driver that is 4952 or newer. The latest driver can be found here.

2.) Install Python (the wheels in step 5 are cp310, i.e. Python 3.10 builds) and make sure pip is available:

```
python -m ensurepip --upgrade
```

3.) Install git from here, using the GUI installer or other means.

4.) Clone ComfyUI:

```
cd <location where to place ComfyUI>
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
```

5.) Download the non-official Intel Extension for PyTorch packages here and install them with pip. Assuming they are all put in the root of the ComfyUI repository, run this command line:

```
pip install intel_extension_for_pytorch-2.0.110+git0f2597b-cp310-cp310-win_amd64.whl torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl torchvision-0.15.2a0+fa99a53-cp310-cp310-win_amd64.whl
```

Again, I will need to remind you this is unofficial and does carry a degree of risk, but it should be mostly safe.

6.) Finish the rest of the installation:

```
pip install -r requirements.txt
```

At this point, you should be done with the installation. To run ComfyUI each time from scratch, open a terminal/command prompt, then run the following command lines, replacing <> with your own input:

```
cd <location of ComfyUI>
python main.py <Any extra ComfyUI arguments you want to use>
```
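After step 5, a quick way to confirm the GPU is visible before launching ComfyUI is a sketch like this (the `torch.xpu` calls are the ones IPEX adds; treat the exact output strings as illustrative):

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with torch

print(torch.__version__)               # should report the 2.0.0a0 XPU build
print(torch.xpu.is_available())        # True if the Arc GPU was picked up
print(torch.xpu.get_device_name(0))    # e.g. an "Intel(R) Arc(TM) ..." string
```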
-
I reinstalled in forced mode, and the needed DLL is in place, but I still get this error. Could it be related to folder permissions, or to upper/lower case letters in "user"?
-
There really needs to be a better, more up-to-date tutorial for this. Much of the information is stretched out over the thread, with Linux and Windows information mixed together.
-
To install on Ubuntu:

1.) Install the Linux drivers using the instructions provided by Intel here.

2.) Install pip and git with this terminal command:

```
sudo apt install python3-pip git
```

3.) Install ComfyUI with the following terminal commands, replacing the <> portion with a selection of your choice:

```
cd <Location to put ComfyUI>
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
```

4.) Install the Intel Extension for PyTorch pip packages first with this terminal command:

```
python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel-extension-for-pytorch==2.0.110+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```

5.) Finish the rest of the dependency installation with this terminal command:

```
pip install -r requirements.txt
```

Installation should be done at this point. To run ComfyUI, type the following terminal commands, replacing <> with your own input:

```
cd <location of ComfyUI>
python main.py <Any extra ComfyUI arguments you want to use>
```

For other Linux distros outside of Arch Linux or Ubuntu, you will need to use your own distribution's package manager or install scripts to manually install the Intel compute runtime, git, pip, a Python version supported by Intel Extension for PyTorch (3.10 as of this writing; Intel's Python from the AI Kit will also provide that), and any other required dependencies. After that, you should be able to follow step 3 onwards without any issue.
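Once the install finishes, a minimal smoke test (my own sketch, not from the official docs) to confirm tensors actually run on the Arc card:

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the xpu device)

# One matmul on the XPU; if this prints a 4x4 tensor without errors,
# the driver, compute runtime, and wheels are all wired up correctly.
x = torch.randn(4, 4, device="xpu")
print((x @ x).cpu())
```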
-
I've updated the Arch Linux setup guide for the latest IPEX and ComfyUI, without the oneAPI AI Kit. If anyone wants to uninstall the previously installed AI Kit:

```
cd /opt/intel/oneapi/installer
sudo ./installer --action remove --product-id intel.oneapi.lin.aikit.product --product-ver 2023.1.0-31760
# if you don't need libxcrypt-compat anymore
paru -Rsc libxcrypt-compat
```
-
New Patch! Thanks to the vladmandic/automatic contributors for the code. Basically, you just need to copy their attention patch into ComfyUI as `attention.py`, then modify `comfy/model_management.py`:

```diff
 try:
     import intel_extension_for_pytorch as ipex
     if torch.xpu.is_available():
         xpu_available = True
+        from attention import attention_init
+        ok, e = attention_init()
 except:
     pass
```

It could partially fix intel/intel-extension-for-pytorch#325. (so f**k you, intel)
-
Cool... Got it working with WSL2 using Arch. Followed the instructions at the top, and also installed jemalloc via paru (root/native environment) and openmp via pip (venv). There are issues with the env vars, as Arch is pulling my Windows env vars through, so I sometimes have to re-run setvars.sh. I get warnings about libpng when I run via python or ipexrun, but everything still works. I generally run through ipexrun but add xpu to the command, i.e. `ipexrun xpu main.py`.

Apart from some memory overruns, I've had no issues the last few days.
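If the setvars.sh step keeps getting lost between sessions, one option (assuming the default oneAPI install path, and that sourcing it at shell startup is acceptable in your setup) is to do it from your shell profile so every new WSL session picks it up:

```bash
# ~/.bashrc - source the oneAPI environment if it's installed at the default path
if [ -f /opt/intel/oneapi/setvars.sh ]; then
    source /opt/intel/oneapi/setvars.sh > /dev/null
fi
```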
-
Can anybody tell me where Intel stores their patches for PyTorch?
-
New URL for downloading the extension. Also, I guess a new release is coming soon ^_^ I've managed to build PyTorch with the torch patches on Arch Linux, but intel-extension-for-pytorch failed a lot with different errors ^_^
-
Getting this error when installing with `pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu`, using WSL on Windows with Ubuntu:

```
ERROR: Could not find a version that satisfies the requirement torch==2.0.1a0 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1)
```
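A guess at the cause, based on the working commands earlier in this thread: the `a0` XPU builds only live on Intel's wheel index, and the version list in that error is plain PyPI, which suggests pip never reached Intel's index. The Ubuntu instructions above use this form instead:

```bash
python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 \
    intel-extension-for-pytorch==2.0.110+xpu \
    --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```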
-
Intel® Extension for PyTorch* v2.1.10+xpu Release Notes (latest): intel_extension_for_pytorch-2.1.10+xpu
-
Howdy all! Trying to install PyTorch to use my Arc A770 with ComfyUI, but no matter what version I download/install and what args I use for main.py, I come up against a "Torch not compiled with CUDA enabled" error. My setup is Fedora 39 with Python 3.10.3 used in the venv. Installed torch with the command:

Both the python and ipex commands listed above:

or

produce the following outcome:

The assertion is correct. I checked, and I don't have CUDA support in place:

Comfy works in CPU-only mode, but it's stupid slow. There doesn't seem to be an "ignore CUDA" option for Comfy. What am I missing here?
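For anyone diagnosing this: judging by the model_management.py snippet quoted elsewhere in this thread, ComfyUI only avoids the CUDA path when the IPEX import succeeds, so a minimal check (my sketch) separates "IPEX missing" from "IPEX present but GPU not found":

```python
import torch

try:
    import intel_extension_for_pytorch as ipex  # this import registers the xpu backend
    print("IPEX loaded, xpu available:", torch.xpu.is_available())
except ImportError as err:
    # If this prints, ComfyUI falls through to CUDA and fails exactly as above.
    print("IPEX not importable:", err)
```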
-
Here is an alternative approach for trying to get Arc working: turn your Arc into fake CUDA.

This time we rip off basically the whole CUDA emulation layer from SD.next. First clone the SD.next repo and switch to the appropriate branch. Apply this patch:

```diff
diff --git a/modules/errors.py b/modules/errors.py.bak
similarity index 100%
rename from modules/errors.py
rename to modules/errors.py.bak
diff --git a/modules/intel/ipex/hijacks.py b/modules/intel/ipex/hijacks.py
index 0dbec37393c..fdd18ff6d90 100644
--- a/modules/intel/ipex/hijacks.py
+++ b/modules/intel/ipex/hijacks.py
@@ -1,7 +1,11 @@
import contextlib
import torch
import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
-from modules import devices
+#from modules import devices
+
+class devices:
+ device = torch.device('xpu')
+ dtype = torch.bfloat16
# pylint: disable=protected-access, missing-function-docstring, line-too-long, unnecessary-lambda, no-else-return
@@ -67,6 +71,7 @@ def from_numpy(ndarray):
except Exception: # pylint: disable=broad-exception-caught
original_torch_bmm = torch.bmm
original_scaled_dot_product_attention = torch.nn.functional.scaled_dot_product_attention
+ raise
 # Data Type Errors:
```

Alternatively, you can clone my fork with the required hacks: https://github.com/KerfuffleV2/sdnext/tree/comfyhacks — note: I probably won't keep this updated, so patching it yourself is probably going to be more reliable.

Symlink the SD.next `modules` directory into the ComfyUI directory, then apply this patch to ComfyUI:

```diff
diff --git a/comfy/model_management.py b/comfy/model_management.py
index fefd3c8..831ce49 100644
--- a/comfy/model_management.py
+++ b/comfy/model_management.py
@@ -45,12 +45,9 @@ if args.directml is not None:
# torch_directml.disable_tiled_resources(True)
lowvram_available = False #TODO: need to find a way to get free memory in directml before this can be enabled by default.
-try:
- import intel_extension_for_pytorch as ipex
- if torch.xpu.is_available():
- xpu_available = True
-except:
- pass
+import modules.intel.ipex as _ipex
+print('** IPEX init',_ipex.ipex_init())
+# xpu_available = True
try:
if torch.backends.mps.is_available():
@@ -318,7 +315,7 @@ class LoadedModel:
self.model_accelerated = True
- if is_intel_xpu() and not args.disable_ipex_optimize:
+ if True and not args.disable_ipex_optimize:
self.real_model = torch.xpu.optimize(self.real_model.eval(), inplace=True, auto_kernel_selection=True, graph_mode=True)
return self.real_model
@@ -639,6 +636,7 @@ def pytorch_attention_enabled():
return ENABLE_PYTORCH_ATTENTION
def pytorch_attention_flash_attention():
+ return False
global ENABLE_PYTORCH_ATTENTION
if ENABLE_PYTORCH_ATTENTION:
#TODO: more reliable way of checking for flash attention?
diff --git a/comfy/utils.py b/comfy/utils.py
index f8026dd..05bdd06 100644
--- a/comfy/utils.py
+++ b/comfy/utils.py
@@ -13,7 +13,7 @@ def load_torch_file(ckpt, safe_load=False, device=None):
sd = safetensors.torch.load_file(ckpt, device=device.type)
else:
if safe_load:
- if not 'weights_only' in torch.load.__code__.co_varnames:
+ if not 'weights_only' in torch.load.__code__.co_varnames and False:
print("Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.")
safe_load = False
         if safe_load:
```

(If you knew how long it took me to track down the flash attention thing... Without that part, you'll get random corruption when sampling, but only under some conditions.)

With these changes it shouldn't be necessary to do stuff like pull in the Intel/IPEX variables or set the MKL stuff; I was able to remove those parts from my startup script.

The sliced attention from SD.next has two tuneables you can access via environment variables:

```python
sdpa_slice_trigger_rate = float(os.environ.get('IPEX_SDPA_SLICE_TRIGGER_RATE', 6))
attention_slice_rate = float(os.environ.get('IPEX_ATTENTION_SLICE_RATE', 4))
```

I had to set `IPEX_ATTENTION_SLICE_RATE` lower (3, as in the startup script below). So far this seems to be working for me at least as well as the previous way with the latest release of the Intel Torch stuff. I set the fallback types in the hijacks patch above to bfloat16.

For reference, here's my startup script:

```bash
#!/bin/bash
source .venv/bin/activate
export IPEX_ATTENTION_SLICE_RATE=3
nice -n 10 ipexrun xpu main.py --preview-method taesd --use-pytorch-cross-attention --disable-xformers --bf16-vae --bf16-unet "$@"
```
-
I'm trying to set up ComfyUI on a Debian box with an Intel Arc A770 16GB, and I found something that I believe could be clarified better in the first post. One of the prerequisites is installing the Intel oneAPI Base Kit.

Second thing: that package is absurdly huge. Over 13GB after install... Jesus Christ. Do we actually know which parts of the stack SD and ComfyUI use? Maybe the install size could be a little smaller. At least on Debian we have the ability to install/remove individual packages from that basekit.
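For anyone who wants to try trimming it, a hedged sketch (the exact package names are my assumption; check what your configured Intel apt repo actually offers first):

```bash
# List the individual oneAPI packages available from Intel's apt repository
apt list 'intel-oneapi-*' 2>/dev/null

# Then install only the pieces you need, e.g. the MKL runtime
# (package name may differ between repo versions):
sudo apt install intel-oneapi-mkl
```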
-
Commenting here because this thread and other available sources were somewhat difficult to follow. On a fresh Ubuntu 23 install, I did the following: git clone comfy, then run comfy. This makes comfy run. At least I was able to use SDXL and SDXL Turbo, ControlNet, and some depth map and canny nodes. Then also look at the top of this page and do the 4G patch.
-
Okay, so I am running in WSL. Comfy is installed and runs as intended, and seems to be loading models into VRAM. But the issue is as follows:
-
Just wanted to let everyone know: there is a new version of IPEX for XPU, v2.1.20+xpu, released yesterday.
-
I skimmed this entire thread, but the information here feels very disjointed and outdated. I have Windows 10. Will Intel Arc work for me with ComfyUI? I used to use A1111 when I had my GTX 1080, then I switched to Forge, but when I got the Intel Arc I ended up switching to Vlad's fork. The problem is his fork is a mess of a UI, and it doesn't support Clip Skip 2 for Arc (I have no idea why), so I'm debating trying out Comfy even though I know nothing about it. So what's the verdict? Can Comfy work well on Windows 10 with an A770?
-
In an unusual but welcome move, Intel released a version of Intel Extension for PyTorch for XPU, v2.1.30+xpu, that has the fixes needed to run ComfyUI on Linux and Windows without code modifications, judging from the wheels I could see in the repository. I was able to verify SD 1.5 is working, but SDXL is being a bit wonky on my end, so I need some further time to test it and see what is up. Do note as well that Linux and the Intel Compute Runtime, as of writing this, have some outstanding issues with the latest versions that are still being worked through, which may be related to my issue, so tread carefully. If anyone can verify that this can run ComfyUI without any changes, that would be great, so I can move forward with creating a pull request for some code changes I have been meaning to merge into the project.

Edit: I verified that SDXL is working for the most part, but it is pretty unstable without built-in VRAM reduction for higher-resolution images, and I am getting hit by slower Intel generation speeds from the bug I mentioned above, so it's best to stay on kernel 6.6.25 LTS/6.8.4 or lower if possible to avoid slowdowns at this time. This might change in the future.
-
This seems to be the last working recipe for me (Arch Linux): #476 (comment)

Since that posting, the situation has devolved significantly from running ComfyUI out of the box:

Am I holding it wrong, or is that just how it is at the moment?
-
ComfyUI now supports Intel Arc Graphics. (#409)
Since the installation tutorial for Intel Arc Graphics is quite long, I'll write it here first.
Intel Extension for PyTorch is currently only available for Linux, so you will need to have a Linux or WSL environment.
Arch Linux (with `paru`) is used here as the example operating system.

Install Python and PIP:
Install Intel Compute Runtime and Intel oneAPI Base Kit:
Install ComfyUI:
Install Dependencies (via `venv`):

```
python -m venv venv
source venv/bin/activate
pip install torch==2.0.1a0 torchvision==0.15.2a0 intel-extension-for-pytorch==2.0.120+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl-aitools/
pip install -r requirements.txt
```
For the second start and beyond, venv needs to be reactivated:

```
source venv/bin/activate
```

Set oneAPI vars:

```
source /opt/intel/oneapi/setvars.sh
```
Running ComfyUI (via `python`):

```
python main.py
```

Running ComfyUI (via `ipexrun`):

```
ipexrun xpu main.py
```