D:\Focus>.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
Found existing installation: torch 2.0.0
Uninstalling torch-2.0.0:
  Successfully uninstalled torch-2.0.0
Found existing installation: torchvision 0.15.1
Uninstalling torchvision-0.15.1:
  Successfully uninstalled torchvision-0.15.1
WARNING: Skipping torchaudio as it is not installed.
WARNING: Skipping torchtext as it is not installed.
WARNING: Skipping functorch as it is not installed.
WARNING: Skipping xformers as it is not installed.

D:\Focus>.\python_embeded\python.exe -m pip install torch-directml
Requirement already satisfied: torch-directml in d:\focus\python_embeded\lib\site-packages (0.2.0.dev230426)
Collecting torch==2.0.0 (from torch-directml)
  Using cached torch-2.0.0-cp310-cp310-win_amd64.whl (172.3 MB)
Collecting torchvision==0.15.1 (from torch-directml)
  Using cached torchvision-0.15.1-cp310-cp310-win_amd64.whl (1.2 MB)
Requirement already satisfied: filelock in d:\focus\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (3.12.2)
Requirement already satisfied: typing-extensions in d:\focus\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (4.7.1)
Requirement already satisfied: sympy in d:\focus\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (1.12)
Requirement already satisfied: networkx in d:\focus\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (3.1)
Requirement already satisfied: jinja2 in d:\focus\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (3.1.2)
Requirement already satisfied: numpy in d:\focus\python_embeded\lib\site-packages (from torchvision==0.15.1->torch-directml) (1.23.5)
Requirement already satisfied: requests in d:\focus\python_embeded\lib\site-packages (from torchvision==0.15.1->torch-directml) (2.31.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\focus\python_embeded\lib\site-packages (from torchvision==0.15.1->torch-directml) (9.2.0)
Requirement already satisfied: MarkupSafe>=2.0 in d:\focus\python_embeded\lib\site-packages (from jinja2->torch==2.0.0->torch-directml) (2.1.3)
Requirement already satisfied: charset-normalizer<4,>=2 in d:\focus\python_embeded\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in d:\focus\python_embeded\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in d:\focus\python_embeded\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (2.0.3)
Requirement already satisfied: certifi>=2017.4.17 in d:\focus\python_embeded\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (2023.5.7)
Requirement already satisfied: mpmath>=0.19 in d:\focus\python_embeded\lib\site-packages (from sympy->torch==2.0.0->torch-directml) (1.3.0)
DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063
Installing collected packages: torch, torchvision
WARNING: The scripts convert-caffe2-to-onnx.exe, convert-onnx-to-caffe2.exe and torchrun.exe are installed in 'D:\Focus\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed torch-2.0.0 torchvision-0.15.1

D:\Focus>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
Already up-to-date
Update succeeded.
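pip's warning above notes that D:\Focus\python_embeded\Scripts is not on PATH. That is harmless for Fooocus itself, but if torchrun.exe or the ONNX converter scripts from this embedded install are ever needed, the directory can be added for the current cmd session (a one-line sketch; the path assumes the D:\Focus layout shown above):

```shell
rem Windows cmd, current session only; adjust if the install lives elsewhere
set PATH=%PATH%;D:\Focus\python_embeded\Scripts
```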
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.37
Inference Engine exists and URL is correct.
Inference Engine checkout finished for d1a0abd40b86f3f079b0cc71e49f9f4604831457.
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Using directml with device:
Total VRAM 1024 MB, total RAM 32711 MB
Set vram state to: NORMAL_VRAM
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
model_type EPS
adm 2560
Refiner model loaded: D:\Focus\Fooocus\models\checkpoints\sd_xl_refiner_1.0_0.9vae.safetensors
model_type EPS
adm 2816
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: D:\Focus\Fooocus\models\checkpoints\sd_xl_base_1.0_0.9vae.safetensors
LoRAs loaded: [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)]
Fooocus Expansion engine loaded for privateuseone:0, use_fp16 = False.
loading new
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
D:\Focus\python_embeded\lib\site-packages\transformers\generation\utils.py:723: UserWarning: The operator 'aten::repeat_interleave.Tensor' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at D:\a\_work\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
  input_ids = input_ids.repeat_interleave(expand_size, dim=0)
[Prompt Expansion] New suffix: intricate, elegant, highly detailed, digital painting, artstation, concept art, matte, sharp focus, illustration, art by Artgerm and Greg Rutkowski and Alphonse Mucha
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] New suffix: extremely detailed, sharp focus wore
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
Preparation time: 3.22 seconds
loading new
Moving model to GPU: 16.36 seconds
Traceback (most recent call last):
  File "D:\Focus\Fooocus\modules\async_worker.py", line 565, in worker
    handler(task)
  File "D:\Focus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Focus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Focus\Fooocus\modules\async_worker.py", line 499, in handler
    imgs = pipeline.process_diffusion(
  File "D:\Focus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Focus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Focus\Fooocus\modules\default_pipeline.py", line 245, in process_diffusion
    sampled_latent = core.ksampler(
  File "D:\Focus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Focus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Focus\Fooocus\modules\core.py", line 270, in ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\Focus\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\sample.py", line 97, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\Focus\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 785, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\Focus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Focus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Focus\Fooocus\modules\sample_hijack.py", line 105, in sample_hacked
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback_wrap, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\Focus\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\samplers.py", line 630, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
  File "D:\Focus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Focus\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\sampling.py", line 700, in sample_dpmpp_2m_sde_gpu
    noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=extra_args.get("seed", None), cpu=False) if noise_sampler is None else noise_sampler
  File "D:\Focus\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\sampling.py", line 119, in __init__
    self.tree = BatchedBrownianTree(x, t0, t1, seed, cpu=cpu)
  File "D:\Focus\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\sampling.py", line 85, in __init__
    self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
  File "D:\Focus\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\k_diffusion\sampling.py", line 85, in <listcomp>
    self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
  File "D:\Focus\python_embeded\lib\site-packages\torchsde\_brownian\derived.py", line 155, in __init__
    self._interval = brownian_interval.BrownianInterval(t0=t0,
  File "D:\Focus\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 540, in __init__
    W = self._randn(initial_W_seed) * math.sqrt(t1 - t0)
  File "D:\Focus\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 234, in _randn
    return _randn(size, self._top._dtype, self._top._device, seed)
  File "D:\Focus\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 32, in _randn
    generator = torch.Generator(device).manual_seed(int(seed))
RuntimeError: Device type privateuseone is not supported for torch.Generator() api.
Total time: 265.69 seconds