D:\goodjob\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
xformers version: 0.0.19
Set vram state to: LOW_VRAM
Using xformers cross attention
Failed to auto update `Quality of Life Suit`
QualityOfLifeSuit_Omar92_DIR: D:\goodjob\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-QualityOfLifeSuit_Omar92-dev\src\..
Starting server

To see the GUI go to: http://127.0.0.1:8188
Error: OpenAI API key is invalid OpenAI features wont work for you
QualityOfLifeSuit_Omar92::NSP ready
got prompt
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
LatentDiffusion: Running in eps-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
DiffusionWrapper has 859.52 M params.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
torch.Size([1, 3, 768, 512])
  0%|          | 0/20 [00:00<?, ?it/s]
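
About the "Set vram state to: LOW_VRAM" lines near the top: ComfyUI detected a GPU with 4GB of VRAM or less and switched to its low-VRAM offloading mode automatically, and the log itself names the opt-out flag. A minimal relaunch with the override, assuming the same portable layout as in the prompt above (if you normally start through one of the bundled .bat files instead, append the flag to the command inside that file):

D:\goodjob\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram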
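
The "Error: OpenAI API key is invalid" line is printed by the QualityOfLifeSuit_Omar92 custom node, not by ComfyUI itself, and the suite still finishes loading afterwards (note "QualityOfLifeSuit_Omar92::NSP ready"), so only its OpenAI-backed nodes are disabled; image generation is unaffected. A hedged sketch of the usual remedy, assuming the suite picks the key up from the conventional OPENAI_API_KEY environment variable (that variable name is an assumption here, so check the node's README for the mechanism it actually uses):

rem Assumed variable name OPENAI_API_KEY; applies to the current console only
D:\goodjob\ComfyUI_windows_portable>set OPENAI_API_KEY=sk-your-key-here

rem Persist across sessions (takes effect in newly opened consoles)
D:\goodjob\ComfyUI_windows_portable>setx OPENAI_API_KEY "sk-your-key-here"

Then restart ComfyUI from a console where the variable is visible.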