Issue: Inconsistency in CUDA Device Setting (Device 1 vs. Device 0)
Technical Environment as per ComfyUI:
- Set cuda device to: 1 (?)
- Total VRAM: 12187 MB
- Total RAM: 128490 MB
- xformers version: 0.0.22
- VRAM State: NORMAL_VRAM
- Current Device: cuda:0 (?) NVIDIA TITAN Xp COLLECTORS EDITION (using cudaMallocAsync)

Originally posted by @Luxcium in Comfy-Org/ComfyUI#2396 (comment)
Issue Description:
I am experiencing an inconsistency when attempting to assign operations to a specific CUDA device. Despite explicitly setting the CUDA device to 1, the system reports that it is utilizing device 0, as indicated in the following output:
Set cuda device to: 1
...
Device: cuda:0 NVIDIA TITAN Xp COLLECTORS EDITION : cudaMallocAsync
This discrepancy is concerning, particularly because it has led to a complete system shutdown, similar to what one would experience during a power outage. Initially this made me question whether dual-GPU operation was feasible at all, but the core problem appears to be that processing is misdirected onto a single GPU.
The exact location in the code where this device assignment discrepancy occurs is unclear to me. My hypothesis was that it could be addressed in comfy/model_management.py#L73, but my attempts to resolve it have not been successful.
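One possible explanation (an assumption on my part, not confirmed against the ComfyUI source): if the `--cuda-device 1` flag is implemented by setting the `CUDA_VISIBLE_DEVICES` environment variable before PyTorch is imported, then the driver exposes only physical GPU 1 and renumbers it as logical device 0. In that case `cuda:0` in the log would actually refer to the second card, and the output above would be expected rather than a bug. The hypothetical helper below sketches that renumbering; `logical_index` is my own illustrative name, not a ComfyUI function.

```python
import os

# Assumption: ComfyUI's --cuda-device flag sets CUDA_VISIBLE_DEVICES="1"
# before torch is imported, hiding all other GPUs from the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

def logical_index(physical_index: int) -> int:
    """Map a physical GPU index to the logical index PyTorch would report,
    given the current CUDA_VISIBLE_DEVICES setting (hypothetical helper
    for illustration only)."""
    visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    order = [int(x) for x in visible.split(",") if x.strip() != ""]
    if not order:
        # Unset or empty: logical and physical indices coincide.
        return physical_index
    # The visible devices are renumbered 0..N-1 in listed order.
    return order.index(physical_index)

print(logical_index(1))  # physical GPU 1 becomes logical cuda:0 here
```

If this is what is happening, `nvidia-smi` should show the second card's memory and utilization rising while the log still prints `cuda:0`, which would distinguish a labeling quirk from a genuine misdirection of work to the wrong GPU.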
This issue has left me at an impasse, unable to determine a solution, either for a local fix or for a pull request. I am hoping that this description will bring to light an easily correctable configuration error for those more familiar with the intricacies of CUDA device management.
My assistant is not always perfect, but he helped me write this issue, delving into the realm of tapestry where a symphony of... well, you get the idea...