```python
elif device == "cuda-fp16":
    qwen_device = "cuda"
    qwen_model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).half().cuda()
```
Can we also separate the device and the tensor type (dtype)?
This is a screenshot from the SUPIR node:
Otherwise you will have to make a huge list like:
cuda-fp32
cuda-bf16
cuda-fp16
cuda-fp8
mps-fp32
mps-bf16
mps-fp16
and so on, probably for other hardware accelerators (like xpu) too.
Selecting the device and dtype separately would be the best option, imho.
Also, the node should usually use the same device as ComfyUI - maybe we can add an "auto" option for the device and make it the default.
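To illustrate the idea, here is a minimal sketch of what separate device/dtype choices plus an "auto" default could look like. All names (`DEVICE_CHOICES`, `resolve`, the `available` parameter) are hypothetical, not the node's actual API; real code would query torch for availability instead of taking an `available` argument.

```python
# Hypothetical sketch: two independent dropdown lists instead of one
# combined "cuda-fp16"-style list, plus an "auto" device default.
DEVICE_CHOICES = ["auto", "cuda", "mps", "xpu", "cpu"]
DTYPE_CHOICES = ["fp32", "bf16", "fp16", "fp8"]

def resolve(device: str, dtype: str, available=("cpu",)):
    """Resolve 'auto' to the first available accelerator, falling back to cpu.

    `available` stands in for runtime checks like torch.cuda.is_available().
    """
    if device == "auto":
        for candidate in ("cuda", "mps", "xpu"):
            if candidate in available:
                device = candidate
                break
        else:
            device = "cpu"
    return device, dtype
```

With this split, loading the model needs no per-combination `elif` branches: the resolved device and a dtype lookup are passed straight to the loader.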
That makes sense, it should indeed be separated. I’ll make the change when I have time. Thank you for your suggestion!
Solved!