
[bug] 01-faceswap.json - not sure, could be bug, but something (data) is missing. #40

Closed
eTQMA opened this issue Jul 30, 2023 · 3 comments
Labels
type: 🐛 bug Something isn't working

Comments

@eTQMA

eTQMA commented Jul 30, 2023

Describe the bug

Hi dev, I admire your work. I have a question about the faceswap workflow and some missing data, with screenshots:

Everything was cleanly installed, and I opened your 01-faceswap.json, as you can see in screenshot (1).

Then I tested it for the first time by clicking Queue Prompt, and it was not successful, apparently because two nodes (shapes) are missing something? See screenshot (2).

First shape: I do have the file inswapper_128.onnx, but I can't find a folder to put it in.

Second shape: I also have the file GFPGANv1.4.pth, but I have no idea which folder it belongs in.

Please check the console output below, thanks.

I hope you can help me figure out where to move these 2 important files so that the red nodes no longer appear and your 01-faceswap.json can work properly. I've been searching for 4 days but couldn't resolve it. Thank you very much again, and greetings!

TQMA

Reproduction

No response

Expected behavior

No response

Platform and versions

Windows 10
ComfyUI + ComfyUI-Manager

Console output

C:\AI\ComfyUI\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build

Prestartup times for custom nodes:
   0.0 seconds: C:\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 8192 MB, total RAM 32604 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync
Using xformers cross attention
Adding extra search path checkpoints C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/Stable-diffusion
Adding extra search path configs C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/Stable-diffusion
Adding extra search path vae C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/VAE
Adding extra search path loras C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/Lora
Adding extra search path loras C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/LyCORIS
Adding extra search path upscale_models C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/ESRGAN
Adding extra search path upscale_models C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/RealESRGAN
Adding extra search path upscale_models C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/SwinIR
Adding extra search path embeddings C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\embeddings
Adding extra search path hypernetworks C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/hypernetworks
Adding extra search path controlnet C:\AI\stable-diffusion\stable-diffusi0n-webui\stable-diffusion-webui\models/ControlNet
### Loading: ComfyUI-Manager (V0.17.1)
### ComfyUI Revision: 1221 [95d796fc]
Log level: 20
[comfy_mtb] | WARNING -> Web extensions folder at C:\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\web\extensions\mtb is not a symlink, if updating please delete it before
C:\AI\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
ImageRemoveBackgroundRembg
[comfy_mtb] | INFO -> Some nodes failed to load:
        Failed to import module mask because ModuleNotFoundError: No module named 'rembg'

Check that you properly installed the dependencies.
If you think this is a bug, please report it on the github page (https://github.com/melMass/comfy_mtb/issues)
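(Side note: this import failure just means the rembg package is not installed in the portable build's embedded Python. A minimal sketch of one way to fix it, assuming it is run with python_embeded\python.exe as in the command at the top of this log:)

```python
# Hedged sketch: install the missing 'rembg' package into the portable
# build's embedded Python so comfy_mtb's mask module can import it.
# When launched via python_embeded\python.exe, sys.executable points there;
# otherwise substitute that path explicitly.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "rembg"])
```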
[comfy_mtb] | INFO -> Loaded the following nodes:
        Animation Builder (mtb): Convenient way to manage basic animation maths at the core of many of my workflows
        Smart Step (mtb): Utils to control the steps start/stop of the KAdvancedSampler in percentage
        Text To Image (mtb): Utils to convert text to image using a font
        Styles Loader (mtb): Load csv files and populate a dropdown from the rows (à la A111)
        Bbox From Mask (mtb): From a mask extract the bounding box
        Bbox (mtb): The bounding box (BBOX) custom type used by other nodes
        Crop (mtb): Crops an image and an optional mask to a given bounding box
        Uncrop (mtb): Uncrops an image to a given bounding box
        Debug (mtb): Experimental node to debug any Comfy values, support for more types and widgets is planned
        Save Tensors (mtb): Save torch tensors (image, mask or latent) to disk, useful to debug things outside comfy
        Deep Bump (mtb): Normal & height maps generation from single pictures
        Restore Face (mtb): Uses GFPGan to restore faces
        Load Face Enhance Model (mtb): Loads a GFPGan or RestoreFormer model for face enhancement.
        Face Swap (mtb): Face swap using deepinsight/insightface models
        Load Face Swap Model (mtb): Loads a faceswap model
        Load Face Analysis Model (mtb): Loads a face analysis model
        Qr Code (mtb): Basic QR Code generator
        Unsplash Image (mtb): Unsplash Image given a keyword and a size
        String Replace (mtb): Basic string replacement
        Fit Number (mtb): Fit the input float using a source and target range
        Load Film Model (mtb): Loads a FILM model
        Film Interpolation (mtb): Google Research FILM frame interpolation for large motion
        Concat Images (mtb): Add images to batch
        Get Batch From History (mtb): Very experimental node to load images from the history of the server.
        Color Correct (mtb): Various color correction methods
        Image Compare (mtb): Compare two images and return a difference image
        Blur (mtb): Blur an image using a Gaussian filter.
        Mask To Image (mtb): Converts a mask (alpha) to an RGB image with a color and background
        Colored Image (mtb): Constant color image of given size
        Image Premultiply (mtb): Premultiply image with mask
        Image Resize Factor (mtb): Extracted mostly from WAS Node Suite, with a few edits (most notably multiple image support) and less features.
        Save Image Grid (mtb): Save all the images in the input batch as a grid of images.
        Load Image From Url (mtb): Load an image from the given URL
        Save Gif (mtb): Save the images from the batch as a GIF
        Export To Prores (mtb): Export to ProRes 4444 (Experimental)
        Latent Lerp (mtb): Linear interpolation (blend) between two latent vectors
        Float To Number (mtb): Node addon for the WAS Suite. Converts a "comfy" FLOAT to a NUMBER.
        Int To Bool (mtb): Basic int to bool conversion
        Int To Number (mtb): Node addon for the WAS Suite. Converts a "comfy" INT to a NUMBER.
        Transform Image (mtb): Save torch tensors (image, mask or latent) to disk, useful to debug things outside comfy
        Load Image Sequence (mtb): Load an image sequence from a folder. The current frame is used to determine which image to load.
        Save Image Sequence (mtb): Save an image sequence to a folder. The current frame is used to determine which image to save.

Import times for custom nodes:
   0.0 seconds: C:\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
   0.4 seconds: C:\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
   1.0 seconds: C:\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfy_mtb

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
Failed to validate prompt for output 75:
* Load Face Swap Model (mtb) 69:
  - Value not in list: faceswap_model: 'inswapper_128.onnx' not in []
Output will be ignored
Failed to validate prompt for output 25:
* Load Face Enhance Model (mtb) 71:
  - Value not in list: model_name: 'GFPGANv1.4.pth' not in []
Output will be ignored
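(These two validation failures are the heart of this report: each loader node builds its dropdown from the files it finds in its model folder, so an empty list [] means no folder or no matching file was found. A minimal sketch of that kind of check, with a hypothetical helper name and the install path taken from this log; this is not ComfyUI's exact code:)

```python
# Hedged sketch of the check behind "Value not in list": a widget value
# is only valid if it matches a filename discovered in the model folder.
import os

def list_models(folder, extensions):
    """Build a dropdown list roughly the way a loader node would."""
    if not os.path.isdir(folder):
        return []  # missing folder -> empty choices -> "not in []"
    return sorted(f for f in os.listdir(folder) if f.lower().endswith(extensions))

choices = list_models(r"C:\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\models\insightface", (".onnx",))
value = "inswapper_128.onnx"
if value not in choices:
    print(f"Value not in list: faceswap_model: '{value}' not in {choices}")
```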
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 2048 and using 20 heads.
[... ~140 near-identical MemoryEfficientCrossAttention setup lines elided: the 640-dim/10-head and 1280-dim/20-head pairs above repeat once per cross-attention block ...]
model_type EPS
adm 2816
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
WARNING: shape mismatch when trying to apply embedding, embedding will be ignored 768 1280
[... the warning above repeats 24 times ...]
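(These warnings usually mean a textual-inversion embedding trained for one text-encoder width is being applied to another; 768 is the SD1.x CLIP width, while 1280 is SDXL's second text encoder. A sketch of the guard that would emit them, not ComfyUI's exact code:)

```python
# Hedged sketch: an embedding can only be spliced into a text encoder
# whose token-embedding width matches; otherwise it is skipped with a warning.
import torch

def try_apply_embedding(embedding, token_dim):
    if embedding.shape[-1] != token_dim:
        print("WARNING: shape mismatch when trying to apply embedding, "
              f"embedding will be ignored {embedding.shape[-1]} {token_dim}")
        return None
    return embedding

try_apply_embedding(torch.zeros(4, 768), 1280)  # reproduces the log line
```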
  0%|                                          | 0/45 [00:00<?, ?it/s]C:\AI\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py:594: UserWarning: Should have tb<=t1 but got tb=14.614644050598145 and t1=14.614642.
  warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")
100%|█████████████████████████████████| 45/45 [00:22<00:00,  2.02it/s]
Prompt executed in 34.34 seconds

Additional context

No response

@eTQMA added the status: 🧹 needs triage and type: 🐛 bug labels on Jul 30, 2023
@melMass
Owner

melMass commented Jul 30, 2023

Thanks for the great report! For the face restore ones, it's actually an error I made and forgot to fix. I will do so later today or early next week, but in the meantime you can put the 3 face_restore models in models/upscale_models.

About the faceswap one, it's strange that it did not go into models/insightface directly. I will check why, but likewise, you can put it there in the meantime.
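
If you want to sanity-check the placement afterwards, here is a minimal sketch (the models root is taken from your console output; adjust it to your install):

```python
# Hedged sketch: confirm the two files landed where the loader nodes look.
from pathlib import Path

root = Path(r"C:\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\models")
for sub, name in [("insightface", "inswapper_128.onnx"),
                  ("upscale_models", "GFPGANv1.4.pth")]:
    p = root / sub / name
    print(p, "OK" if p.is_file() else "MISSING")
```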

@melMass removed the status: 🧹 needs triage label on Jul 30, 2023
@eTQMA
Author

eTQMA commented Jul 30, 2023

you can put the 3 face_restore models in models/upscale_models & About the faceswap one, it's strange that it did not go into models/insightface directly

It works!! (models/upscale_models & models/insightface) Thanks!!

I ran into a new issue with the "Weight value" of 0.5 (your default); see the yellow circles.
EDIT: I believe the problem (the yellow circles) comes from the weight value? I may be mistaken.

Thanks for the great report! For the face restore ones, it's actually an error I made and forgot to fix.

Good, no problem (to fix) thanks ❤️🙏

@melMass
Owner

melMass commented Jul 30, 2023

It works!! (models/upscale_models & models/insightface) Thanks!!

Great! I will fix it in the next release so that step is no longer needed, without breaking the old behavior for now.

I ran into a new issue with the "Weight value" of 0.5 (your default); see the yellow circles. EDIT: I believe the problem (the yellow circles) comes from the weight value? I may be mistaken.

Yep, I do have those too unfortunately; they show up more in some compositions than others. Check my test on video here: #19 (comment)

The weight is not related, though. That's a property that is supposed to be used by GFPGAN, but when I tracked down its use in GFPGAN's source code I saw that it was never actually used (I might need to recheck; that was a few weeks back, so it's not too fresh).
It should control how much "face restoration" is applied overall. I think A1111 exposes it and it works there, so I may be wrong; I will check.
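
For reference, the kind of blend such a weight would typically control is a simple linear interpolation between the input face and the restored face. A minimal sketch of that general technique (not mtb's or GFPGAN's actual implementation):

```python
# Hedged sketch of a restoration-strength blend (general technique only):
# weight=0.0 keeps the original face, weight=1.0 keeps the full restoration.
import numpy as np

def apply_restore_weight(original, restored, weight=0.5):
    w = float(np.clip(weight, 0.0, 1.0))
    blended = original.astype(np.float32) * (1.0 - w) + restored.astype(np.float32) * w
    return blended.astype(original.dtype)
```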

That said, if you look at the "official" example, you can see the issue present there too:

https://replicate.com/okaris/roop
