
DualCLIPLoader Allocation on device / hunyuan #6222

Closed

steve84wien opened this issue Dec 26, 2024 · 4 comments
Labels
Potential Bug: User is reporting a bug. This should be tested.

Comments


Expected Behavior

The two text encoders load without errors (this worked a few days ago; maybe some update broke it).

Actual Behavior

OOM: torch.OutOfMemoryError ("Allocation on device")

Steps to Reproduce

I am using the standard workflow for Hunyuan.

Debug Logs

# ComfyUI Error Report
## Error Details
- **Node ID:** 11
- **Node Type:** DualCLIPLoader
- **Exception Type:** torch.OutOfMemoryError
- **Exception Message:** Allocation on device 
## Stack Trace

  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 966, in load_clip
    clip = comfy.sd.load_clip(ckpt_paths=[clip_path1, clip_path2], embedding_directory=folder_paths.get_folder_paths("embeddings"), clip_type=clip_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 633, in load_clip
    return load_text_encoder_state_dicts(clip_data, embedding_directory=embedding_directory, clip_type=clip_type, model_options=model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 766, in load_text_encoder_state_dicts
    clip = CLIP(clip_target, embedding_directory=embedding_directory, parameters=parameters, tokenizer_data=tokenizer_data, model_options=model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 96, in __init__
    self.cond_stage_model = clip(**(params))
                            ^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\hunyuan_video.py", line 111, in __init__
    super().__init__(dtype_llama=dtype_llama, device=device, dtype=dtype, model_options=model_options)

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\hunyuan_video.py", line 65, in __init__
    self.llama = LLAMAModel(device=device, dtype=dtype_llama, model_options=model_options)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\hunyuan_video.py", line 34, in __init__
    super().__init__(device=device, layer=layer, layer_idx=layer_idx, textmodel_json_config={}, dtype=dtype, special_tokens={"start": 128000, "pad": 128258}, layer_norm_hidden_state=False, model_class=comfy.text_encoders.llama.Llama2, enable_attention_masks=attention_mask, return_attention_masks=attention_mask, model_options=model_options)

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 114, in __init__
    self.transformer = model_class(config, dtype, device, self.operations)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 216, in __init__
    self.model = Llama2_(config, device=device, dtype=dtype, ops=operations)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 162, in __init__
    TransformerBlock(config, device=device, dtype=dtype, ops=ops)

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 119, in __init__
    self.mlp = MLP(config, device=device, dtype=dtype, ops=ops)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 108, in __init__
    self.gate_proj = ops.Linear(config.hidden_size, config.intermediate_size, bias=False, device=device, dtype=dtype)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 106, in __init__
    torch.empty((out_features, in_features), **factory_kwargs)
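
The allocation that fails is `torch.empty((out_features, in_features))` for the llama encoder's `gate_proj` weight, i.e. the crash happens while the text encoder's weights are being materialized, before any inference runs. A back-of-envelope check of the sizes involved (a sketch, assuming the llava_llama3 encoder matches the published Llama-3-8B geometry; none of these numbers come from this report):

```python
# Hypothetical Llama-3-8B-shaped text encoder: hidden 4096, FFN 14336,
# 32 layers, grouped-query attention with 8 KV heads of head_dim 128.
hidden, inter, layers = 4096, 14336, 32
kv_dim, vocab = 8 * 128, 128256

attn = 2 * hidden * hidden + 2 * hidden * kv_dim   # q/o full-size, k/v grouped-query
mlp = 3 * hidden * inter                           # gate_proj, up_proj, down_proj
params = layers * (attn + mlp) + vocab * hidden    # plus token embeddings

print(f"{params / 1e9:.2f}B params")               # -> 7.50B
print(f"~{params * 2 / 2**30:.1f} GiB at fp16, ~{params / 2**30:.1f} GiB at fp8")
# -> ~14.0 GiB at fp16, ~7.0 GiB at fp8
```

So an fp16 copy of the encoder wants roughly twice the VRAM of the fp8 one, consistent with the 15 GB vs 8 GB files mentioned in the resolution below; whether it still fits depends on what else is resident on the card at load time.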

System Information

  • ComfyUI Version: v0.3.9-18-gbc6dac4
  • Arguments: ComfyUI\main.py --windows-standalone-build
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3090 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 25756696576
    • VRAM Free: 24405606400
    • Torch VRAM Total: 0
    • Torch VRAM Free: 0
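
Side note on these numbers: VRAM Free shows ~22.7 GiB even though the load just OOMed, most likely because the report is generated after the "Got an OOM, unloading all loaded models" step visible in the log below. The raw byte counts are the kind of thing `torch.cuda.mem_get_info()` returns; a minimal sketch for reading them yourself (standard PyTorch calls, though ComfyUI's own accounting may differ):

```python
import torch

# Free vs. total device memory in bytes, as seen by the CUDA driver.
free, total = torch.cuda.mem_get_info()
# Memory the PyTorch caching allocator has reserved / actually handed out.
reserved = torch.cuda.memory_reserved()
allocated = torch.cuda.memory_allocated()

gib = 2**30
print(f"VRAM Free/Total: {free / gib:.2f} / {total / gib:.2f} GiB")
print(f"Torch reserved/allocated: {reserved / gib:.2f} / {allocated / gib:.2f} GiB")
# The 25756696576 bytes above is 23.99 GiB total; 24405606400 free is 22.73 GiB.
```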

Logs

2024-12-26T01:24:13.251885 - [START] Security scan
2024-12-26T01:24:14.651888 - [DONE] Security scan
2024-12-26T01:24:14.797182 - ## ComfyUI-Manager: installing dependencies done.
2024-12-26T01:24:14.797182 - ** ComfyUI startup time: 2024-12-26 01:24:14.797182
2024-12-26T01:24:14.818525 - ** Platform: Windows
2024-12-26T01:24:14.818525 - ** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct  1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
2024-12-26T01:24:14.818525 - ** Python executable: F:\ComfyUI_windows_portable\python_embeded\python.exe
2024-12-26T01:24:14.818525 - ** ComfyUI Path: F:\ComfyUI_windows_portable\ComfyUI
2024-12-26T01:24:14.818525 - ** Log path: F:\ComfyUI_windows_portable\comfyui.log
2024-12-26T01:24:16.150839 - 
Prestartup times for custom nodes:
2024-12-26T01:24:16.150839 -    0.0 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2024-12-26T01:24:16.150839 -    2.9 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-26T01:24:16.150839 - 
2024-12-26T01:24:20.869024 - Total VRAM 24564 MB, total RAM 32706 MB
2024-12-26T01:24:20.869024 - pytorch version: 2.5.1+cu124
2024-12-26T01:24:20.870021 - Set vram state to: NORMAL_VRAM
2024-12-26T01:24:20.870021 - Device: cuda:0 NVIDIA GeForce RTX 3090 Ti : cudaMallocAsync
2024-12-26T01:24:22.337016 - Using pytorch attention
2024-12-26T01:24:24.426588 - [Prompt Server] web root: F:\ComfyUI_windows_portable\ComfyUI\web
2024-12-26T01:24:25.204622 - ### Loading: ComfyUI-Manager (V2.55.5)
2024-12-26T01:24:25.404475 - ### ComfyUI Version: v0.3.9-18-gbc6dac4 | Released on '2024-12-23'
2024-12-26T01:24:25.493667 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2024-12-26T01:24:25.504637 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2024-12-26T01:24:25.539240 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2024-12-26T01:24:25.557621 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2024-12-26T01:24:25.597594 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2024-12-26T01:24:25.802489 - Total VRAM 24564 MB, total RAM 32706 MB
2024-12-26T01:24:25.802489 - pytorch version: 2.5.1+cu124
2024-12-26T01:24:25.803486 - Set vram state to: NORMAL_VRAM
2024-12-26T01:24:25.803486 - Device: cuda:0 NVIDIA GeForce RTX 3090 Ti : cudaMallocAsync
2024-12-26T01:24:25.886828 - [rgthree-comfy] Loaded 42 epic nodes. 🎉
2024-12-26T01:24:26.373606 - 
Import times for custom nodes:
2024-12-26T01:24:26.373606 -    0.0 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-12-26T01:24:26.373606 -    0.0 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-various
2024-12-26T01:24:26.373606 -    0.0 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-12-26T01:24:26.374603 -    0.0 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-FFmpeg
2024-12-26T01:24:26.374603 -    0.0 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF
2024-12-26T01:24:26.374603 -    0.0 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2024-12-26T01:24:26.374603 -    0.0 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials
2024-12-26T01:24:26.374603 -    0.0 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-12-26T01:24:26.375601 -    0.1 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SeqImageLoader
2024-12-26T01:24:26.375601 -    0.2 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-26T01:24:26.375601 -    0.4 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2024-12-26T01:24:26.375601 -    0.4 seconds: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper
2024-12-26T01:24:26.375601 - 
2024-12-26T01:24:26.415494 - Starting server

2024-12-26T01:24:26.415494 - To see the GUI go to: http://127.0.0.1:8188
2024-12-26T01:24:27.278704 - FETCH DATA from: F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
2024-12-26T01:24:31.881562 - got prompt
2024-12-26T01:24:31.902954 - Failed to validate prompt for output 80:
2024-12-26T01:24:31.902954 - * UNETLoader 12:
2024-12-26T01:24:31.902954 -   - Value not in list: unet_name: 'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors' not in ['flux_WS39_distilled\\Flux_WS39_distilled-step00001200.safetensors', 'flux_WS39_distilled\\Flux_WS39_distilled-step00001200_fp8.safetensors', 'hunyuan\\hunyuan_official_base_fp8.pt', 'hunyuan\\hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors', 'hunyuan\\hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors', 'hunyuan\\hunyuan_video_t2v_720p_bf16.safetensors']
2024-12-26T01:24:31.902954 - * LoraLoaderModelOnly 79:
2024-12-26T01:24:31.902954 -   - Value not in list: lora_name: 'hunyuan\hunyuanvideo_WS40_337steps_rank32.pt' not in ['flux\\NEW_amateurphoto-v6-forcu.safetensors', 'flux\\UltraRealPhoto.safetensors', 'flux\\boFLUX Double Exposure Magic v2.safetensors', 'flux\\flux1-canny-dev-lora.safetensors', 'hunyuan\\HunyuanVideo - Arnold Schwarzenegger LoRA - Trigger is Ohwx-Person.safetensors', 'hunyuan\\hunyuan_Step2000_WS40.safetensors', 'hunyuan\\hunyuan_step1000_WS40.safetensors', 'hunyuan\\hunyuan_step1200_WS40.safetensors', 'hunyuan\\hunyuan_step1400_WS40.safetensors', 'hunyuan\\hunyuan_step1600_WS40.safetensors', 'hunyuan\\hunyuan_step1800_WS40.safetensors', 'hunyuan\\hunyuan_step200_WS40.safetensors', 'hunyuan\\hunyuan_step400_WS40.safetensors', 'hunyuan\\hunyuan_step600_WS40.safetensors', 'hunyuan\\hunyuan_step800_WS40.safetensors', 'hunyuan\\hyvideo_FastVideo_LoRA-fp8.safetensors']
2024-12-26T01:24:31.902954 - * DualCLIPLoader 11:
2024-12-26T01:24:31.902954 -   - Value not in list: clip_name2: 'llava_llama3_fp8_scaled.safetensors' not in ['clip_g.safetensors', 'clip_l.safetensors', 'hunyuan\\clip_l_hunyuan.safetensors', 'hunyuan\\llava_llama3_fp8_scaled.safetensors']
2024-12-26T01:24:31.902954 - Output will be ignored
2024-12-26T01:24:31.902954 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-12-26T01:24:51.087109 - got prompt
2024-12-26T01:24:51.164418 - Using pytorch attention in VAE
2024-12-26T01:24:51.166413 - Using pytorch attention in VAE
2024-12-26T01:24:53.192914 - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-12-26T01:24:53.207874 - model_type FLOW
2024-12-26T01:25:31.386955 - !!! Exception during processing !!! Allocation on device 
2024-12-26T01:25:31.412895 - Traceback (most recent call last):
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 966, in load_clip
    clip = comfy.sd.load_clip(ckpt_paths=[clip_path1, clip_path2], embedding_directory=folder_paths.get_folder_paths("embeddings"), clip_type=clip_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 633, in load_clip
    return load_text_encoder_state_dicts(clip_data, embedding_directory=embedding_directory, clip_type=clip_type, model_options=model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 766, in load_text_encoder_state_dicts
    clip = CLIP(clip_target, embedding_directory=embedding_directory, parameters=parameters, tokenizer_data=tokenizer_data, model_options=model_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 96, in __init__
    self.cond_stage_model = clip(**(params))
                            ^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\hunyuan_video.py", line 111, in __init__
    super().__init__(dtype_llama=dtype_llama, device=device, dtype=dtype, model_options=model_options)
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\hunyuan_video.py", line 65, in __init__
    self.llama = LLAMAModel(device=device, dtype=dtype_llama, model_options=model_options)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\hunyuan_video.py", line 34, in __init__
    super().__init__(device=device, layer=layer, layer_idx=layer_idx, textmodel_json_config={}, dtype=dtype, special_tokens={"start": 128000, "pad": 128258}, layer_norm_hidden_state=False, model_class=comfy.text_encoders.llama.Llama2, enable_attention_masks=attention_mask, return_attention_masks=attention_mask, model_options=model_options)
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 114, in __init__
    self.transformer = model_class(config, dtype, device, self.operations)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 216, in __init__
    self.model = Llama2_(config, device=device, dtype=dtype, ops=operations)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 162, in __init__
    TransformerBlock(config, device=device, dtype=dtype, ops=ops)
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 119, in __init__
    self.mlp = MLP(config, device=device, dtype=dtype, ops=ops)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\llama.py", line 108, in __init__
    self.gate_proj = ops.Linear(config.hidden_size, config.intermediate_size, bias=False, device=device, dtype=dtype)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 106, in __init__
    torch.empty((out_features, in_features), **factory_kwargs)
torch.OutOfMemoryError: Allocation on device 

2024-12-26T01:25:31.414890 - Got an OOM, unloading all loaded models.
2024-12-26T01:25:31.503587 - Prompt executed in 40.40 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":91,"last_link_id":248,"nodes":[{"id":16,"type":"KSamplerSelect","pos":[302.9712829589844,765.45703125],"size":[315,58],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"SAMPLER","type":"SAMPLER","links":[19],"shape":3}],"properties":{"Node name for S&R":"KSamplerSelect"},"widgets_values":["euler"]},{"id":26,"type":"FluxGuidance","pos":[306.8459777832031,286.63427734375],"size":[317.4000244140625,58],"flags":{},"order":10,"mode":0,"inputs":[{"name":"conditioning","type":"CONDITIONING","link":175}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[129],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"FluxGuidance"},"widgets_values":[6],"color":"#233","bgcolor":"#355"},{"id":22,"type":"BasicGuider","pos":[314.36517333984375,191.50857543945312],"size":[222.3482666015625,46],"flags":{},"order":14,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":195,"slot_index":0},{"name":"conditioning","type":"CONDITIONING","link":129,"slot_index":1}],"outputs":[{"name":"GUIDER","type":"GUIDER","links":[30],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"BasicGuider"},"widgets_values":[]},{"id":8,"type":"VAEDecode","pos":[669.7714233398438,103.82862091064453],"size":[210,46],"flags":{},"order":16,"mode":2,"inputs":[{"name":"samples","type":"LATENT","link":181},{"name":"vae","type":"VAE","link":206}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":67,"type":"ModelSamplingSD3","pos":[317.5610046386719,85.38690185546875],"size":[210,58],"flags":{},"order":12,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":222}],"outputs":[{"name":"MODEL","type":"MODEL","links":[195],"slot_index":0}],"properties":{"Node name for S&R":"ModelSamplingSD3"},"widgets_values":[9]},{"id":73,"type":"VAEDecodeTiled","pos":[672.2857666015625,210.68569946289062],"size":[210,150],"flags":{},"order":17,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":210},{"name":"vae","type":"VAE","link":211}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[224],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecodeTiled"},"widgets_values":[192,64,64,8]},{"id":74,"type":"Note","pos":[664.1143188476562,363.7716064453125],"size":[272.8570251464844,117.82855224609375],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["848x480\n768x432\n"],"color":"#432","bgcolor":"#653"},{"id":10,"type":"VAELoader","pos":[-143.77676391601562,513.773681640625],"size":[350,60],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[206,211],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["hunyuan_video_vae_bf16.safetensors"]},{"id":13,"type":"SamplerCustomAdvanced","pos":[680.1055297851562,538.4039916992188],"size":[272.3617858886719,124.53733825683594],"flags":{},"order":15,"mode":0,"inputs":[{"name":"noise","type":"NOISE","link":37,"slot_index":0},{"name":"guider","type":"GUIDER","link":30,"slot_index":1},{"name":"sampler","type":"SAMPLER","link":19,"slot_index":2},{"name":"sigmas","type":"SIGMAS","link":20,"slot_index":3},{"name":"latent_image","type":"LATENT","link":180,"slot_index":4}],"outputs":[{"name":"output","type":"LATENT","links":[181,210],"slot_index":0,"shape":3},{"name":"denoised_output","type":"LATENT","links":null,"shape":3}],"properties":{"Node name for 
S&R":"SamplerCustomAdvanced"},"widgets_values":[]},{"id":80,"type":"VHS_VideoCombine","pos":[1020.8887939453125,-31.14798355102539],"size":[669.0445556640625,334],"flags":{},"order":18,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":224},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"AnimateDiff","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"AnimateDiff_00009.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"AnimateDiff_00009.png","nodeId":"47","mediaType":"gifs","fullpath":"D:\\AI\\ComfyUI_windows_portable\\ComfyUI\\output\\AnimateDiff_00009.mp4"},"muted":true}}},{"id":77,"type":"Note","pos":[0.9221817851066589,625.2506103515625],"size":[210,76.12000274658203],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["Select a fp8 weight_dtype if you are running out of  memory."],"color":"#432","bgcolor":"#653"},{"id":82,"type":"LoraLoaderModelOnly","pos":[-310.6776123046875,-49.19013214111328],"size":[371.0444641113281,82],"flags":{},"order":9,"mode":4,"inputs":[{"name":"model","type":"MODEL","link":228}],"outputs":[{"name":"MODEL","type":"MODEL","links":[229],"slot_index":0}],"properties":{"Node name for S&R":"LoraLoaderModelOnly"},"widgets_values":["Hunyuan-Dreamyvibes-4 - 200img - full captions - 4000stpes - 15epocs - Dreamyvibes Style in all captions - PXR also included.safetensors",0.4]},{"id":45,"type":"EmptyHunyuanLatentVideo","pos":[307.0835876464844,419.4736022949219],"size":[315,130],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[180],"slot_index":0}],"properties":{"Node name for S&R":"EmptyHunyuanLatentVideo"},"widgets_values":[896,1152,1,1]},{"id":17,"type":"BasicScheduler","pos":[649.6002807617188,718.5712280273438],"size":[315,106],"flags":{},"order":13,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":223,"slot_index":0}],"outputs":[{"name":"SIGMAS","type":"SIGMAS","links":[20],"shape":3}],"properties":{"Node name for S&R":"BasicScheduler"},"widgets_values":["simple",20,1]},{"id":44,"type":"CLIPTextEncode","pos":[-143.21823120117188,121.41477966308594],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":8,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":205}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[175],"slot_index":0}],"title":"CLIP Text Encode (Positive Prompt)","properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["male focus, a handsome young topless caucasian guy standing at the construction site, buzzcut, "],"color":"#232","bgcolor":"#353"},{"id":25,"type":"RandomNoise","pos":[305.5144348144531,606.0571899414062],"size":[315,82],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"NOISE","type":"NOISE","links":[37],"shape":3}],"properties":{"Node name for 
S&R":"RandomNoise"},"widgets_values":[17500127041669,"fixed"],"color":"#2a363b","bgcolor":"#3f5159"},{"id":79,"type":"LoraLoaderModelOnly","pos":[133.752197265625,-83.94286346435547],"size":[687.0947265625,83.88575744628906],"flags":{},"order":11,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":229}],"outputs":[{"name":"MODEL","type":"MODEL","links":[222,223],"slot_index":0}],"properties":{"Node name for S&R":"LoraLoaderModelOnly"},"widgets_values":["hunyuan\\hunyuan_Step2000_WS40.safetensors",1]},{"id":11,"type":"DualCLIPLoader","pos":[-132.63111877441406,352.2718505859375],"size":[350,106],"flags":{},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[205],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["clip_l.safetensors","hunyuan\\llava_llama3_fp8_scaled.safetensors","hunyuan_video"]},{"id":12,"type":"UNETLoader","pos":[-791.26318359375,135.50929260253906],"size":[567.6742553710938,82],"flags":{},"order":7,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[228],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["hunyuan\\hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors","fp8_e4m3fn"],"color":"#223","bgcolor":"#335"}],"links":[[19,16,0,13,2,"SAMPLER"],[20,17,0,13,3,"SIGMAS"],[30,22,0,13,1,"GUIDER"],[37,25,0,13,0,"NOISE"],[129,26,0,22,1,"CONDITIONING"],[175,44,0,26,0,"CONDITIONING"],[180,45,0,13,4,"LATENT"],[181,13,0,8,0,"LATENT"],[195,67,0,22,0,"MODEL"],[205,11,0,44,0,"CLIP"],[206,10,0,8,1,"VAE"],[210,13,0,73,0,"LATENT"],[211,10,0,73,1,"VAE"],[222,79,0,67,0,"MODEL"],[223,79,0,17,0,"MODEL"],[224,73,0,80,0,"IMAGE"],[228,12,0,82,0,"MODEL"],[229,82,0,79,0,"MODEL"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.7247295000000045,"offset":[894.1505395115322,220.39049973590755]},"groupNodes":{},"ue_links":[]},"version":0.4}


### Other

_No response_
steve84wien added the Potential Bug label on Dec 26, 2024
steve84wien (Author) commented Dec 26, 2024

The problem has been solved. I accidentally picked the wrong text model: FP16 (15 GB) instead of FP8 (8 GB). Somehow both files had the same filename, model.safetensors. There should be a rule on Hugging Face about naming model files more accurately.
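
For anyone hitting the same filename collision: the dtype of every tensor is recorded in the safetensors header, so an fp16 and an fp8 `model.safetensors` can be told apart without loading anything. A minimal sketch using only the stdlib and the documented safetensors layout (an 8-byte little-endian header size followed by a JSON header; the `inspect` helper and script name are made up for illustration):

```python
# Print per-dtype parameter counts and total weight size for a .safetensors file.
import json
import struct
import sys
from collections import Counter

BYTES_PER = {"F64": 8, "F32": 4, "F16": 2, "BF16": 2,
             "F8_E4M3": 1, "F8_E5M2": 1, "I64": 8, "I32": 4, "I8": 1, "U8": 1}

def inspect(path):
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))  # u64 header size
        header = json.loads(f.read(header_len))
    header.pop("__metadata__", None)                    # optional metadata block
    counts, total_bytes = Counter(), 0
    for info in header.values():
        n = 1
        for dim in info["shape"]:
            n *= dim
        counts[info["dtype"]] += n
        total_bytes += n * BYTES_PER.get(info["dtype"], 0)
    print(f"{path}: {dict(counts)} -> ~{total_bytes / 2**30:.1f} GiB of weights")

if __name__ == "__main__":
    inspect(sys.argv[1])   # e.g. python check_dtype.py model.safetensors
```

Running this over both downloads would have flagged F16 vs F8_E4M3 immediately; naming files after their dtype (as in `llava_llama3_fp8_scaled.safetensors`) avoids the mix-up.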

bluenyx commented Dec 27, 2024

Same problem.

ComfyUI Error Report

Error Details

  • Node ID: 76
  • Node Type: CLIPTextEncode
  • Exception Type: torch.OutOfMemoryError
  • Exception Message: Allocation on device

Stack Trace

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\sd.py", line 146, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\sd.py", line 207, in encode_from_tokens
    self.load_model()

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\sd.py", line 240, in load_model
    model_management.load_model_gpu(self.patcher)

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_management.py", line 555, in load_model_gpu
    return load_models_gpu([model])
           ^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_management.py", line 550, in load_models_gpu
    loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_management.py", line 366, in model_load
    self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_management.py", line 395, in model_use_more_vram
    return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_patcher.py", line 759, in partially_load
    raise e

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_patcher.py", line 756, in partially_load
    self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 78, in load
    super().load(*args, force_patch_weights=True, **kwargs)

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_patcher.py", line 607, in load
    x[2].to(device_to)

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1340, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 927, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^

  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1326, in convert
    return t.to(
           ^^^^^

System Information

  • ComfyUI Version: v0.3.10-2-g160ca08
  • Arguments: ComfyUI\main.py --windows-standalone-build --bf16-vae --cuda-malloc --port 8189 --output-directory G:\AI-Output\ComfyUI-output --preview-method auto --disable-auto-launch --enable-cors-header --fast --cache-classic --user-directory C:\Users\bluen\Documents\AI\ComfyUI_User
  • OS: nt
  • Python Version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.5.1+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12884377600
    • VRAM Free: 11287793754
    • Torch VRAM Total: 503316480
    • Torch VRAM Free: 10358874

Logs

2024-12-27T13:37:06.078224 - ## ComfyUI-Manager: EXECUTE => ['C:\\Users\\bluen\\Documents\\AI\\ComfyUI_Portable\\python_embeded\\python.exe', '-m', 'pip', 'install', 'diffusers >= 0.31.0']
2024-12-27T13:37:06.078224 - ## Execute install/(de)activation script for 'C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper'
2024-12-27T13:37:07.666092 - ## ComfyUI-Manager: EXECUTE => ['C:\\Users\\bluen\\Documents\\AI\\ComfyUI_Portable\\python_embeded\\python.exe', '-m', 'pip', 'install', 'transformers >= 4.47.0']
2024-12-27T13:37:07.666092 - ## Execute install/(de)activation script for 'C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper'
2024-12-27T13:37:09.274039 - ## ComfyUI-Manager: EXECUTE => ['C:\\Users\\bluen\\Documents\\AI\\ComfyUI_Portable\\python_embeded\\python.exe', '-m', 'pip', 'install', 'jax >= 0.4.28']
2024-12-27T13:37:09.274039 - ## Execute install/(de)activation script for 'C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper'
2024-12-27T13:37:10.797435 - [ComfyUI-Manager] Startup script completed.
2024-12-27T13:37:10.797435 - #######################################################################
2024-12-27T13:37:11.513848 - 
Prestartup times for custom nodes:
2024-12-27T13:37:11.514852 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\rgthree-comfy
2024-12-27T13:37:11.514852 -    9.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-27T13:37:11.514852 - 
2024-12-27T13:37:13.198323 - Total VRAM 12288 MB, total RAM 32581 MB
2024-12-27T13:37:13.199323 - pytorch version: 2.5.1+cu124
2024-12-27T13:37:14.665245 - xformers version: 0.0.28.post3
2024-12-27T13:37:14.666246 - Set vram state to: NORMAL_VRAM
2024-12-27T13:37:14.666246 - Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
2024-12-27T13:37:14.860263 - Using xformers attention
2024-12-27T13:37:16.014024 - [Prompt Server] web root: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\web
2024-12-27T13:37:17.059127 - [Crystools INFO] Crystools version: 1.21.0
2024-12-27T13:37:17.086039 - [Crystools INFO] CPU: 12th Gen Intel(R) Core(TM) i5-12400F - Arch: AMD64 - OS: Windows 11
2024-12-27T13:37:17.096036 - [Crystools INFO] Pynvml (Nvidia) initialized.
2024-12-27T13:37:17.096036 - [Crystools INFO] GPU/s:
2024-12-27T13:37:17.108094 - [Crystools INFO] 0) NVIDIA GeForce RTX 3060
2024-12-27T13:37:17.108094 - [Crystools INFO] NVIDIA Driver: 566.36
2024-12-27T13:37:18.497218 - ### Loading: ComfyUI-Impact-Pack (V8.1.5)
2024-12-27T13:37:18.594155 - [Impact Pack] Wildcards loading done.
2024-12-27T13:37:18.596157 - ### Loading: ComfyUI-Impact-Subpack (V1.1)
2024-12-27T13:37:18.676161 - [Impact Subpack] ultralytics_bbox: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\models\ultralytics\bbox
2024-12-27T13:37:18.676161 - [Impact Subpack] ultralytics_segm: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\models\ultralytics\segm
2024-12-27T13:37:18.778198 - ### Loading: ComfyUI-Manager (V2.55.5)
2024-12-27T13:37:19.039146 - ### ComfyUI Version: v0.3.10-2-g160ca081 | Released on '2024-12-26'
2024-12-27T13:37:19.690233 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2024-12-27T13:37:19.722239 - ------------------------------------------
2024-12-27T13:37:19.722239 - Comfyroll Studio v1.76 : 175 Nodes Loaded
2024-12-27T13:37:19.722239 - ------------------------------------------
2024-12-27T13:37:19.722239 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
2024-12-27T13:37:19.722239 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
2024-12-27T13:37:19.722239 - ------------------------------------------
2024-12-27T13:37:19.760918 - [rgthree-comfy] Loaded 42 fantastic nodes. 🎉
2024-12-27T13:37:19.831007 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2024-12-27T13:37:19.861015 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2024-12-27T13:37:20.023270 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2024-12-27T13:37:20.130927 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2024-12-27T13:37:20.547113 - WAS Node Suite: OpenCV Python FFMPEG support is enabled
2024-12-27T13:37:20.548116 - WAS Node Suite: `ffmpeg_bin_path` is set to: D:\Portable\ffmpeg_7\bin\ffmpeg.exe
2024-12-27T13:37:21.146993 - WAS Node Suite: Finished. Loaded 220 nodes successfully.
2024-12-27T13:37:21.146993 - 	"Every artist was first an amateur." - Ralph Waldo Emerson
2024-12-27T13:37:21.149995 - 
Import times for custom nodes:
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI_bnb_nf4_fp4_Loaders
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\cg-use-everywhere
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-mem-safe-wrapper
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-MochiWrapper
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-GGUF
2024-12-27T13:37:21.149995 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\rgthree-comfy
2024-12-27T13:37:21.150993 -    0.0 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2024-12-27T13:37:21.150993 -    0.1 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Impact-Subpack
2024-12-27T13:37:21.150993 -    0.1 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Long-CLIP
2024-12-27T13:37:21.150993 -    0.1 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2024-12-27T13:37:21.150993 -    0.4 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Crystools
2024-12-27T13:37:21.150993 -    0.4 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2024-12-27T13:37:21.150993 -    0.5 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Florence-2
2024-12-27T13:37:21.150993 -    0.5 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Manager
2024-12-27T13:37:21.150993 -    0.8 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper
2024-12-27T13:37:21.150993 -    1.4 seconds: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\was-node-suite-comfyui
2024-12-27T13:37:21.150993 - 
2024-12-27T13:37:21.160993 - Starting server

2024-12-27T13:37:21.161995 - To see the GUI go to: http://127.0.0.1:8189
2024-12-27T13:37:27.561189 - FETCH DATA from: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
2024-12-27T13:53:09.075944 - FETCH DATA from: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
2024-12-27T13:57:37.149204 - FETCH DATA from: C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
2024-12-27T14:03:54.847758 - got prompt
2024-12-27T14:03:57.702845 - C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-GGUF\loader.py:65: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_numpy.cpp:212.)
  torch_tensor = torch.from_numpy(tensor.data) # mmap
2024-12-27T14:03:57.710843 - ggml_sd_loader:
2024-12-27T14:03:57.710843 -  0                             513
2024-12-27T14:03:57.710843 -  12                            280
2024-12-27T14:03:57.711845 -  13                             40
2024-12-27T14:03:57.711845 -  30                             23
2024-12-27T14:03:57.739859 - model weight dtype torch.bfloat16, manual cast: None
2024-12-27T14:03:57.747860 - model_type FLOW
2024-12-27T14:04:01.609964 - ggml_sd_loader:
2024-12-27T14:04:01.610964 -  13                            144
2024-12-27T14:04:01.610964 -  0                              50
2024-12-27T14:04:01.610964 -  14                             25
2024-12-27T14:04:02.359088 - CLIP model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2024-12-27T14:04:14.086880 - clip missing: ['model.embed_tokens.weight', 'model.layers.0.self_attn.q_proj.weight', 'model.layers.0.self_attn.k_proj.weight', 'model.layers.0.self_attn.v_proj.weight', 'model.layers.0.self_attn.o_proj.weight', 'model.layers.0.mlp.gate_proj.weight', 'model.layers.0.mlp.up_proj.weight', 'model.layers.0.mlp.down_proj.weight', 'model.layers.0.input_layernorm.weight', 'model.layers.0.post_attention_layernorm.weight', 'model.layers.1.self_attn.q_proj.weight', 'model.layers.1.self_attn.k_proj.weight', 'model.layers.1.self_attn.v_proj.weight', 'model.layers.1.self_attn.o_proj.weight', 'model.layers.1.mlp.gate_proj.weight', 'model.layers.1.mlp.up_proj.weight', 'model.layers.1.mlp.down_proj.weight', 'model.layers.1.input_layernorm.weight', 'model.layers.1.post_attention_layernorm.weight', 'model.layers.2.self_attn.q_proj.weight', 'model.layers.2.self_attn.k_proj.weight', 'model.layers.2.self_attn.v_proj.weight', 'model.layers.2.self_attn.o_proj.weight', 'model.layers.2.mlp.gate_proj.weight', 'model.layers.2.mlp.up_proj.weight', 'model.layers.2.mlp.down_proj.weight', 'model.layers.2.input_layernorm.weight', 'model.layers.2.post_attention_layernorm.weight', 'model.layers.3.self_attn.q_proj.weight', 'model.layers.3.self_attn.k_proj.weight', 'model.layers.3.self_attn.v_proj.weight', 'model.layers.3.self_attn.o_proj.weight', 'model.layers.3.mlp.gate_proj.weight', 'model.layers.3.mlp.up_proj.weight', 'model.layers.3.mlp.down_proj.weight', 'model.layers.3.input_layernorm.weight', 'model.layers.3.post_attention_layernorm.weight', 'model.layers.4.self_attn.q_proj.weight', 'model.layers.4.self_attn.k_proj.weight', 'model.layers.4.self_attn.v_proj.weight', 'model.layers.4.self_attn.o_proj.weight', 'model.layers.4.mlp.gate_proj.weight', 'model.layers.4.mlp.up_proj.weight', 'model.layers.4.mlp.down_proj.weight', 'model.layers.4.input_layernorm.weight', 'model.layers.4.post_attention_layernorm.weight', 'model.layers.5.self_attn.q_proj.weight', 'model.layers.5.self_attn.k_proj.weight', 'model.layers.5.self_attn.v_proj.weight', 'model.layers.5.self_attn.o_proj.weight', 'model.layers.5.mlp.gate_proj.weight', 'model.layers.5.mlp.up_proj.weight', 'model.layers.5.mlp.down_proj.weight', 'model.layers.5.input_layernorm.weight', 'model.layers.5.post_attention_layernorm.weight', 'model.layers.6.self_attn.q_proj.weight', 'model.layers.6.self_attn.k_proj.weight', 'model.layers.6.self_attn.v_proj.weight', 'model.layers.6.self_attn.o_proj.weight', 'model.layers.6.mlp.gate_proj.weight', 'model.layers.6.mlp.up_proj.weight', 'model.layers.6.mlp.down_proj.weight', 'model.layers.6.input_layernorm.weight', 'model.layers.6.post_attention_layernorm.weight', 'model.layers.7.self_attn.q_proj.weight', 'model.layers.7.self_attn.k_proj.weight', 'model.layers.7.self_attn.v_proj.weight', 'model.layers.7.self_attn.o_proj.weight', 'model.layers.7.mlp.gate_proj.weight', 'model.layers.7.mlp.up_proj.weight', 'model.layers.7.mlp.down_proj.weight', 'model.layers.7.input_layernorm.weight', 'model.layers.7.post_attention_layernorm.weight', 'model.layers.8.self_attn.q_proj.weight', 'model.layers.8.self_attn.k_proj.weight', 'model.layers.8.self_attn.v_proj.weight', 'model.layers.8.self_attn.o_proj.weight', 'model.layers.8.mlp.gate_proj.weight', 'model.layers.8.mlp.up_proj.weight', 'model.layers.8.mlp.down_proj.weight', 'model.layers.8.input_layernorm.weight', 'model.layers.8.post_attention_layernorm.weight', 'model.layers.9.self_attn.q_proj.weight', 'model.layers.9.self_attn.k_proj.weight', 
'model.layers.9.self_attn.v_proj.weight', 'model.layers.9.self_attn.o_proj.weight', 'model.layers.9.mlp.gate_proj.weight', 'model.layers.9.mlp.up_proj.weight', 'model.layers.9.mlp.down_proj.weight', 'model.layers.9.input_layernorm.weight', 'model.layers.9.post_attention_layernorm.weight', 'model.layers.10.self_attn.q_proj.weight', 'model.layers.10.self_attn.k_proj.weight', 'model.layers.10.self_attn.v_proj.weight', 'model.layers.10.self_attn.o_proj.weight', 'model.layers.10.mlp.gate_proj.weight', 'model.layers.10.mlp.up_proj.weight', 'model.layers.10.mlp.down_proj.weight', 'model.layers.10.input_layernorm.weight', 'model.layers.10.post_attention_layernorm.weight', 'model.layers.11.self_attn.q_proj.weight', 'model.layers.11.self_attn.k_proj.weight', 'model.layers.11.self_attn.v_proj.weight', 'model.layers.11.self_attn.o_proj.weight', 'model.layers.11.mlp.gate_proj.weight', 'model.layers.11.mlp.up_proj.weight', 'model.layers.11.mlp.down_proj.weight', 'model.layers.11.input_layernorm.weight', 'model.layers.11.post_attention_layernorm.weight', 'model.layers.12.self_attn.q_proj.weight', 'model.layers.12.self_attn.k_proj.weight', 'model.layers.12.self_attn.v_proj.weight', 'model.layers.12.self_attn.o_proj.weight', 'model.layers.12.mlp.gate_proj.weight', 'model.layers.12.mlp.up_proj.weight', 'model.layers.12.mlp.down_proj.weight', 'model.layers.12.input_layernorm.weight', 'model.layers.12.post_attention_layernorm.weight', 'model.layers.13.self_attn.q_proj.weight', 'model.layers.13.self_attn.k_proj.weight', 'model.layers.13.self_attn.v_proj.weight', 'model.layers.13.self_attn.o_proj.weight', 'model.layers.13.mlp.gate_proj.weight', 'model.layers.13.mlp.up_proj.weight', 'model.layers.13.mlp.down_proj.weight', 'model.layers.13.input_layernorm.weight', 'model.layers.13.post_attention_layernorm.weight', 'model.layers.14.self_attn.q_proj.weight', 'model.layers.14.self_attn.k_proj.weight', 'model.layers.14.self_attn.v_proj.weight', 'model.layers.14.self_attn.o_proj.weight', 'model.layers.14.mlp.gate_proj.weight', 'model.layers.14.mlp.up_proj.weight', 'model.layers.14.mlp.down_proj.weight', 'model.layers.14.input_layernorm.weight', 'model.layers.14.post_attention_layernorm.weight', 'model.layers.15.self_attn.q_proj.weight', 'model.layers.15.self_attn.k_proj.weight', 'model.layers.15.self_attn.v_proj.weight', 'model.layers.15.self_attn.o_proj.weight', 'model.layers.15.mlp.gate_proj.weight', 'model.layers.15.mlp.up_proj.weight', 'model.layers.15.mlp.down_proj.weight', 'model.layers.15.input_layernorm.weight', 'model.layers.15.post_attention_layernorm.weight', 'model.layers.16.self_attn.q_proj.weight', 'model.layers.16.self_attn.k_proj.weight', 'model.layers.16.self_attn.v_proj.weight', 'model.layers.16.self_attn.o_proj.weight', 'model.layers.16.mlp.gate_proj.weight', 'model.layers.16.mlp.up_proj.weight', 'model.layers.16.mlp.down_proj.weight', 'model.layers.16.input_layernorm.weight', 'model.layers.16.post_attention_layernorm.weight', 'model.layers.17.self_attn.q_proj.weight', 'model.layers.17.self_attn.k_proj.weight', 'model.layers.17.self_attn.v_proj.weight', 'model.layers.17.self_attn.o_proj.weight', 'model.layers.17.mlp.gate_proj.weight', 'model.layers.17.mlp.up_proj.weight', 'model.layers.17.mlp.down_proj.weight', 'model.layers.17.input_layernorm.weight', 'model.layers.17.post_attention_layernorm.weight', 'model.layers.18.self_attn.q_proj.weight', 'model.layers.18.self_attn.k_proj.weight', 'model.layers.18.self_attn.v_proj.weight', 'model.layers.18.self_attn.o_proj.weight', 
'model.layers.18.mlp.gate_proj.weight', 'model.layers.18.mlp.up_proj.weight', 'model.layers.18.mlp.down_proj.weight', 'model.layers.18.input_layernorm.weight', 'model.layers.18.post_attention_layernorm.weight', 'model.layers.19.self_attn.q_proj.weight', 'model.layers.19.self_attn.k_proj.weight', 'model.layers.19.self_attn.v_proj.weight', 'model.layers.19.self_attn.o_proj.weight', 'model.layers.19.mlp.gate_proj.weight', 'model.layers.19.mlp.up_proj.weight', 'model.layers.19.mlp.down_proj.weight', 'model.layers.19.input_layernorm.weight', 'model.layers.19.post_attention_layernorm.weight', 'model.layers.20.self_attn.q_proj.weight', 'model.layers.20.self_attn.k_proj.weight', 'model.layers.20.self_attn.v_proj.weight', 'model.layers.20.self_attn.o_proj.weight', 'model.layers.20.mlp.gate_proj.weight', 'model.layers.20.mlp.up_proj.weight', 'model.layers.20.mlp.down_proj.weight', 'model.layers.20.input_layernorm.weight', 'model.layers.20.post_attention_layernorm.weight', 'model.layers.21.self_attn.q_proj.weight', 'model.layers.21.self_attn.k_proj.weight', 'model.layers.21.self_attn.v_proj.weight', 'model.layers.21.self_attn.o_proj.weight', 'model.layers.21.mlp.gate_proj.weight', 'model.layers.21.mlp.up_proj.weight', 'model.layers.21.mlp.down_proj.weight', 'model.layers.21.input_layernorm.weight', 'model.layers.21.post_attention_layernorm.weight', 'model.layers.22.self_attn.q_proj.weight', 'model.layers.22.self_attn.k_proj.weight', 'model.layers.22.self_attn.v_proj.weight', 'model.layers.22.self_attn.o_proj.weight', 'model.layers.22.mlp.gate_proj.weight', 'model.layers.22.mlp.up_proj.weight', 'model.layers.22.mlp.down_proj.weight', 'model.layers.22.input_layernorm.weight', 'model.layers.22.post_attention_layernorm.weight', 'model.layers.23.self_attn.q_proj.weight', 'model.layers.23.self_attn.k_proj.weight', 'model.layers.23.self_attn.v_proj.weight', 'model.layers.23.self_attn.o_proj.weight', 'model.layers.23.mlp.gate_proj.weight', 'model.layers.23.mlp.up_proj.weight', 'model.layers.23.mlp.down_proj.weight', 'model.layers.23.input_layernorm.weight', 'model.layers.23.post_attention_layernorm.weight', 'model.layers.24.self_attn.q_proj.weight', 'model.layers.24.self_attn.k_proj.weight', 'model.layers.24.self_attn.v_proj.weight', 'model.layers.24.self_attn.o_proj.weight', 'model.layers.24.mlp.gate_proj.weight', 'model.layers.24.mlp.up_proj.weight', 'model.layers.24.mlp.down_proj.weight', 'model.layers.24.input_layernorm.weight', 'model.layers.24.post_attention_layernorm.weight', 'model.layers.25.self_attn.q_proj.weight', 'model.layers.25.self_attn.k_proj.weight', 'model.layers.25.self_attn.v_proj.weight', 'model.layers.25.self_attn.o_proj.weight', 'model.layers.25.mlp.gate_proj.weight', 'model.layers.25.mlp.up_proj.weight', 'model.layers.25.mlp.down_proj.weight', 'model.layers.25.input_layernorm.weight', 'model.layers.25.post_attention_layernorm.weight', 'model.layers.26.self_attn.q_proj.weight', 'model.layers.26.self_attn.k_proj.weight', 'model.layers.26.self_attn.v_proj.weight', 'model.layers.26.self_attn.o_proj.weight', 'model.layers.26.mlp.gate_proj.weight', 'model.layers.26.mlp.up_proj.weight', 'model.layers.26.mlp.down_proj.weight', 'model.layers.26.input_layernorm.weight', 'model.layers.26.post_attention_layernorm.weight', 'model.layers.27.self_attn.q_proj.weight', 'model.layers.27.self_attn.k_proj.weight', 'model.layers.27.self_attn.v_proj.weight', 'model.layers.27.self_attn.o_proj.weight', 'model.layers.27.mlp.gate_proj.weight', 'model.layers.27.mlp.up_proj.weight', 
'model.layers.27.mlp.down_proj.weight', 'model.layers.27.input_layernorm.weight', 'model.layers.27.post_attention_layernorm.weight', 'model.layers.28.self_attn.q_proj.weight', 'model.layers.28.self_attn.k_proj.weight', 'model.layers.28.self_attn.v_proj.weight', 'model.layers.28.self_attn.o_proj.weight', 'model.layers.28.mlp.gate_proj.weight', 'model.layers.28.mlp.up_proj.weight', 'model.layers.28.mlp.down_proj.weight', 'model.layers.28.input_layernorm.weight', 'model.layers.28.post_attention_layernorm.weight', 'model.layers.29.self_attn.q_proj.weight', 'model.layers.29.self_attn.k_proj.weight', 'model.layers.29.self_attn.v_proj.weight', 'model.layers.29.self_attn.o_proj.weight', 'model.layers.29.mlp.gate_proj.weight', 'model.layers.29.mlp.up_proj.weight', 'model.layers.29.mlp.down_proj.weight', 'model.layers.29.input_layernorm.weight', 'model.layers.29.post_attention_layernorm.weight', 'model.layers.30.self_attn.q_proj.weight', 'model.layers.30.self_attn.k_proj.weight', 'model.layers.30.self_attn.v_proj.weight', 'model.layers.30.self_attn.o_proj.weight', 'model.layers.30.mlp.gate_proj.weight', 'model.layers.30.mlp.up_proj.weight', 'model.layers.30.mlp.down_proj.weight', 'model.layers.30.input_layernorm.weight', 'model.layers.30.post_attention_layernorm.weight', 'model.layers.31.self_attn.q_proj.weight', 'model.layers.31.self_attn.k_proj.weight', 'model.layers.31.self_attn.v_proj.weight', 'model.layers.31.self_attn.o_proj.weight', 'model.layers.31.mlp.gate_proj.weight', 'model.layers.31.mlp.up_proj.weight', 'model.layers.31.mlp.down_proj.weight', 'model.layers.31.input_layernorm.weight', 'model.layers.31.post_attention_layernorm.weight', 'model.norm.weight']
2024-12-27T14:04:14.326900 - Requested to load HunyuanVideoClipModel_
2024-12-27T14:04:43.624110 - !!! Exception during processing !!! Allocation on device 
2024-12-27T14:04:43.976028 - Traceback (most recent call last):
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\execution.py", line 328, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\execution.py", line 203, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\nodes.py", line 67, in encode
    return (clip.encode_from_tokens_scheduled(tokens), )
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\sd.py", line 146, in encode_from_tokens_scheduled
    pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\sd.py", line 207, in encode_from_tokens
    self.load_model()
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\sd.py", line 240, in load_model
    model_management.load_model_gpu(self.patcher)
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_management.py", line 555, in load_model_gpu
    return load_models_gpu([model])
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_management.py", line 550, in load_models_gpu
    loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_management.py", line 366, in model_load
    self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_management.py", line 395, in model_use_more_vram
    return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_patcher.py", line 759, in partially_load
    raise e
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_patcher.py", line 756, in partially_load
    self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 78, in load
    super().load(*args, force_patch_weights=True, **kwargs)
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\ComfyUI\comfy\model_patcher.py", line 607, in load
    x[2].to(device_to)
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1340, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 927, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "C:\Users\bluen\Documents\AI\ComfyUI_Portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1326, in convert
    return t.to(
           ^^^^^
torch.OutOfMemoryError: Allocation on device 

2024-12-27T14:04:43.982028 - Got an OOM, unloading all loaded models.
2024-12-27T14:04:46.420871 - Prompt executed in 51.54 seconds
[The same "clip missing" list and OOM traceback repeat verbatim for two further prompt executions (2024-12-27T14:05:22 and 2024-12-27T14:06:16); each attempt ends with "Got an OOM, unloading all loaded models." and the prompt finishing in roughly 40 seconds.]
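Reading the traceback: the failure point is `comfy/model_patcher.py` moving text-encoder weights to the GPU (`x[2].to(device_to)`), reached through the ComfyUI-GGUF custom node, which forces a full weight patch (`force_patch_weights=True`) and therefore a full transfer. A quick sanity check before queueing a prompt is to ask PyTorch how much VRAM is actually free — a minimal sketch, assuming a CUDA build and device `cuda:0` as reported in the log:

```python
import torch

# (free, total) in bytes for CUDA device 0 -- the "load device" in the log.
free_b, total_b = torch.cuda.mem_get_info(0)
print(f"free: {free_b / 1024**3:.2f} GiB of {total_b / 1024**3:.2f} GiB")

# Rule of thumb: an fp16 model needs ~2 bytes per parameter, so an 8B-parameter
# text encoder alone wants ~16 GiB fully loaded. If that exceeds the free figure
# above, the .to(device) call in the traceback raises torch.OutOfMemoryError
# ("Allocation on device") exactly as logged.
```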

## Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":90,"last_link_id":119,"nodes":[{"id":77,"type":"UnetLoaderGGUF","pos":[-370,-190],"size":[315,58],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[103,104],"slot_index":0}],"properties":{"Node name for S&R":"UnetLoaderGGUF"},"widgets_values":["hyunyuan\\hunyuan-video-t2v-720p-Q4_K_M.gguf"]},{"id":76,"type":"CLIPTextEncode","pos":[-400,100],"size":[400,200],"flags":{},"order":8,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":102}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[106],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["A curious racoon peers through a virant field of yellow sunflowers, its eyes with interest. The playful yet serene atmosphere is complemented by soft natural lioght filtering through the petals. Mid-shot, warm and cheerful tones."]},{"id":82,"type":"FluxGuidance","pos":[80,100],"size":[315,58],"flags":{},"order":9,"mode":0,"inputs":[{"name":"conditioning","type":"CONDITIONING","link":106}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[112],"slot_index":0}],"properties":{"Node name for S&R":"FluxGuidance"},"widgets_values":[10]},{"id":80,"type":"BasicScheduler","pos":[80,-70],"size":[315,106],"flags":{},"order":7,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":104}],"outputs":[{"name":"SIGMAS","type":"SIGMAS","links":[115],"slot_index":0}],"properties":{"Node name for S&R":"BasicScheduler"},"widgets_values":["simple",10,1]},{"id":78,"type":"ModelSamplingSD3","pos":[70,-190],"size":[315,58],"flags":{},"order":6,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":103}],"outputs":[{"name":"MODEL","type":"MODEL","links":[109],"slot_index":0}],"properties":{"Node name for S&R":"ModelSamplingSD3"},"widgets_values":[7]},{"id":87,"type":"RandomNoise","pos":[450,-210],"size":[315,82],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"NOISE","type":"NOISE","links":[114]}],"properties":{"Node name for S&R":"RandomNoise"},"widgets_values":[224744969770628,"randomize"]},{"id":85,"type":"BasicGuider","pos":[460,-80],"size":[210,46],"flags":{},"order":10,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":109},{"name":"conditioning","type":"CONDITIONING","link":112}],"outputs":[{"name":"GUIDER","type":"GUIDER","links":[110],"slot_index":0}],"properties":{"Node name for S&R":"BasicGuider"}},{"id":84,"type":"KSamplerSelect","pos":[460,10],"size":[315,58],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"SAMPLER","type":"SAMPLER","links":[111],"slot_index":0}],"properties":{"Node name for S&R":"KSamplerSelect"},"widgets_values":["euler"]},{"id":83,"type":"EmptyHunyuanLatentVideo","pos":[460,120],"size":[315,130],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[113],"slot_index":0}],"properties":{"Node name for S&R":"EmptyHunyuanLatentVideo"},"widgets_values":[848,480,25,1]},{"id":86,"type":"SamplerCustomAdvanced","pos":[810,-40],"size":[216.59999084472656,106],"flags":{},"order":11,"mode":0,"inputs":[{"name":"noise","type":"NOISE","link":114},{"name":"guider","type":"GUIDER","link":110},{"name":"sampler","type":"SAMPLER","link":111},{"name":"sigmas","type":"SIGMAS","link":115},{"name":"latent_image","type":"LATENT","link":113}],"outputs":[{"name":"output","type":"LATENT","links":[117],"slot_index":0},{"name":"denoised_output","type":"LATENT","links":null}],"properties":{"Node name for 
S&R":"SamplerCustomAdvanced"}},{"id":89,"type":"VAEDecodeTiled","pos":[1070,-40],"size":[315,150],"flags":{},"order":12,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":117},{"name":"vae","type":"VAE","link":119}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[118],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecodeTiled"},"widgets_values":[512,64,64,8]},{"id":66,"type":"VHS_VideoCombine","pos":[1430,-60],"size":[330,334],"flags":{},"order":13,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":118},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":16,"loop_count":0,"filename_prefix":"HY/HunyuanVideo","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_00041.mp4","subfolder":"HY","type":"output","format":"video/h264-mp4","frame_rate":16,"workflow":"HunyuanVideo_00041.png","fullpath":"C:\\Users\\dseditor\\CUI\\ComfyUI\\output\\HY\\HunyuanVideo_00041.mp4"},"muted":false}}},{"id":90,"type":"HyVideoVAELoader","pos":[450,310],"size":[315,82],"flags":{},"order":4,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7}],"outputs":[{"name":"vae","type":"VAE","links":[119],"slot_index":0}],"properties":{"Node name for S&R":"HyVideoVAELoader"},"widgets_values":["hunyuan\\hunyuan_video_vae_bf16.safetensors","bf16"]},{"id":75,"type":"DualCLIPLoaderGGUF","pos":[-380,-70],"size":[315,106],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[102],"slot_index":0}],"properties":{"Node name for S&R":"DualCLIPLoaderGGUF"},"widgets_values":["long_clip\\ViT-L-14-BEST-smooth-GmP-ft.safetensors","t5\\t5xxl_fp8_e4m3fn_scaled.safetensors","hunyuan_video"]}],"links":[[102,75,0,76,0,"CLIP"],[103,77,0,78,0,"MODEL"],[104,77,0,80,0,"MODEL"],[106,76,0,82,0,"CONDITIONING"],[109,78,0,85,0,"MODEL"],[110,85,0,86,1,"GUIDER"],[111,84,0,86,2,"SAMPLER"],[112,82,0,85,1,"CONDITIONING"],[113,83,0,86,4,"LATENT"],[114,87,0,86,0,"NOISE"],[115,80,0,86,3,"SIGMAS"],[117,86,0,89,0,"LATENT"],[118,89,0,66,0,"IMAGE"],[119,90,0,89,1,"VAE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.2839025177495056,"offset":[423.97445101909227,319.3569779917318]},"ue_links":[]},"version":0.4}

## Additional Context

(Please add any additional context or steps to reproduce the error here)

@Slavich86

Here's how I solved the problem: python3 main.py --lowvram

@gambikules

> Here's how I solved the problem: python3 main.py --lowvram

Where? How?
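For the Windows portable build that the traceback paths point to (note the `python_embeded` folder), the flag goes on the launch line inside `run_nvidia_gpu.bat` rather than on a bare `python3` call — a sketch assuming the default portable layout:

```bat
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram
pause
```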
