Conversation
📝 Walkthrough

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed
❌ Failed checks: 1 warning, 1 inconclusive
✅ Passed checks: 1 passed
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@comfy/model_management.py`:
- Around line 1735-1748: supports_fp64 currently accepts device=None but never
resolves None to the active device, causing MPS to be misdetected; update
supports_fp64 to resolve a None device to the current/active device before
calling is_device_mps (e.g., obtain torch.cuda.current_device()/torch.device or
use the existing project helper that returns the active device), then run the
existing checks (is_device_mps, is_intel_xpu, is_directml_enabled, is_ixuca)
against that resolved device so MPS is detected correctly and FP64 support is
reported accurately.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 104a9aad-a218-4140-b887-3d8e550656aa
📒 Files selected for processing (2)
- comfy/ldm/flux/math.py
- comfy/model_management.py
def supports_fp64(device=None):
    if is_device_mps(device):
        return False

    if is_intel_xpu():
        return False

    if is_directml_enabled():
        return False

    if is_ixuca():
        return False

    return True
device=None currently reports FP64 support incorrectly on MPS.
Line 1735 introduces device=None, but the function never resolves None to the active device. In MPS mode, supports_fp64() returns True, which can route FP64 ops to an unsupported backend.
Proposed fix

 def supports_fp64(device=None):
+    if device is None:
+        device = get_torch_device()
+
     if is_device_mps(device):
         return False
     if is_intel_xpu():
         return False

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def supports_fp64(device=None):
    if device is None:
        device = get_torch_device()
    if is_device_mps(device):
        return False
    if is_intel_xpu():
        return False
    if is_directml_enabled():
        return False
    if is_ixuca():
        return False
    return True
No description provided.