DualCLIPLoader Allocation on device / hunyuan #6222
Comments
The problem has been solved. I accidentally picked the wrong text model (FP16, 15 GB, instead of FP8, 8 GB). Somehow both files had the same filename, model.safetensors. There should be a rule for naming model files more accurately on Hugging Face.
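When two downloads share the generic model.safetensors name, the precision can be confirmed without re-downloading. Below is a minimal sketch, assuming the safetensors package is installed; the path is hypothetical and should point at the file in your local ComfyUI models folder, and the ~15 GB / ~8 GB sizes are the ones quoted above.

```python
import os
from safetensors import safe_open  # pip install safetensors

# Hypothetical path; adjust to the text encoder file in your ComfyUI install.
path = "models/text_encoders/model.safetensors"

# Quick size check: per the comment above, FP16 is ~15 GB and FP8 is ~8 GB.
print(f"file size: {os.path.getsize(path) / 1e9:.1f} GB")

# Inspect the stored dtype of one tensor without loading the whole checkpoint.
# Reading FP8 tensors requires a recent PyTorch build with float8 dtypes.
with safe_open(path, framework="pt", device="cpu") as f:
    key = next(iter(f.keys()))
    print(key, f.get_tensor(key).dtype)  # e.g. torch.float16 vs torch.float8_e4m3fn
```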
Same problem.

(A ComfyUI error report was attached: Error Details, Stack Trace, System Information, Devices, Logs, Attached Workflow, Additional Context; the report body is not reproduced here.)
Here's how I solved the problem: `python3 main.py --lowvram`
Where? How?
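For context, --lowvram is one of ComfyUI's standard launch flags; it is added to the same python3 main.py command used to start the UI, run from the ComfyUI directory. Whether it is needed depends on how much VRAM is actually free. A quick, ComfyUI-independent way to check is a plain PyTorch sketch like the one below.

```python
import torch

# Report free vs. total VRAM on the default CUDA device.
if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()  # both values in bytes
    print(f"free VRAM: {free / 1e9:.1f} GB of {total / 1e9:.1f} GB")
else:
    print("No CUDA device available")
```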
Expected Behavior
Loading the two text encoders should succeed (it worked a few days ago; maybe some update broke it).
Actual Behavior
OOM: out-of-memory error ("Allocation on device").
Steps to Reproduce
I am using the standard workflow for Hunyuan.
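For reference, the part of the standard workflow this issue concerns is the DualCLIPLoader node. A rough sketch of how that node appears in an API-format workflow export follows; the filenames and the "hunyuan_video" type value are assumptions based on the commonly shared Hunyuan Video example workflow, not taken from this report.

```python
# Hypothetical excerpt of an API-format workflow; filenames and type value are assumed.
dual_clip_loader = {
    "class_type": "DualCLIPLoader",
    "inputs": {
        "clip_name1": "clip_l.safetensors",
        "clip_name2": "llava_llama3_fp8_scaled.safetensors",  # FP8 (~8 GB); the FP16 variant is ~15 GB
        "type": "hunyuan_video",
    },
}
```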
Debug Logs

(System Information, Devices, Logs, and the Attached Workflow were included in the report; contents not reproduced here.)