Update distorch_2.py: Fixed AttributeError: 'GGUFModelPatcher' object has no attribute 'model_patches_models' #124
max-solo23 wants to merge 1 commit into pollockjj:main
Conversation
Added a check before iterating over model_patches_to() to prevent TypeError when the function returns None for certain GGUF patchers. This improves compatibility with single-GPU setups and GGUF models.
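The guard described above can be sketched as follows. This is an illustrative stand-in, not the actual distorch_2.py code; `add_patch_models` and `FakeGGUFPatcher` are hypothetical names:

```python
def add_patch_models(m, models_temp):
    """Collect a patcher's extra patch models, tolerating a None return."""
    # model_patches_to() may return None for certain GGUF patchers,
    # so check before iterating to avoid "TypeError: 'NoneType' object is not iterable".
    patches = m.model_patches_to(getattr(m, "load_device", "cuda"))
    if patches:
        for patch in patches:
            models_temp.add(patch)

# Minimal stand-in mimicking a GGUF patcher whose model_patches_to returns None.
class FakeGGUFPatcher:
    load_device = "cpu"
    def model_patches_to(self, device):
        return None

models_temp = set()
add_patch_models(FakeGGUFPatcher(), models_temp)  # no TypeError raised
```

Without the `if patches:` guard, the loop would raise a TypeError whenever the patcher returns None instead of an iterable.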
Reviewer's Guide

The PR updates distorch_2.py to handle the missing `model_patches_models` by invoking `model_patches_to` with a fallback `load_device`, ensuring GGUF text encoders work under MultiGPU without raising an AttributeError.

Sequence diagram for the patched model-loading process in MultiGPU:

```mermaid
sequenceDiagram
    participant PatchedLoader as "patched_load_models_gpu()"
    participant Model as "Model/GGUFModelPatcher"
    PatchedLoader->>Model: get load_device (default: "cuda")
    PatchedLoader->>Model: call model_patches_to(load_device)
    alt patches exist
        Model-->>PatchedLoader: return patches
        PatchedLoader->>PatchedLoader: add patches to models_temp
    else no patches
        Model-->>PatchedLoader: return None
    end
```
Class diagram for the updated model patching in distorch_2.py:

```mermaid
classDiagram
    class GGUFModelPatcher {
        +load_device
        +model_patches_to(device)
    }
    class Model {
        +model_patches_to(device)
    }
    GGUFModelPatcher <|-- Model
```
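As a rough illustration of the interface the diagram implies, here is a minimal Python sketch. These stubs are assumptions for demonstration, not the real ComfyUI or GGUF classes; the point is that a GGUF patcher exposes `model_patches_to` but has no `model_patches_models` method, which is exactly why the old call raised an AttributeError:

```python
class StubModelPatcher:
    """Illustrative base patcher; not the real ComfyUI ModelPatcher."""
    load_device = "cuda"
    def model_patches_to(self, device):
        return []

class StubGGUFModelPatcher(StubModelPatcher):
    """Illustrative GGUF patcher: no model_patches_models method exists."""
    def model_patches_to(self, device):
        return None  # may legitimately return None when no patches apply

m = StubGGUFModelPatcher()
assert not hasattr(m, "model_patches_models")  # the source of the AttributeError
assert m.model_patches_to(m.load_device) is None
```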
Pull Request Overview

This pull request fixes an AttributeError in distorch_2.py where GGUFModelPatcher objects don't have the model_patches_models method, an error that prevented GGUF text encoders from working with MultiGPU enabled.

- Replaces the direct call to `model_patches_models()` with the `model_patches_to()` method using a device fallback
- Adds a null check before iterating over patches to prevent errors
```diff
         models_temp.add(m)
-        for mm_patch in m.model_patches_models():
-            models_temp.add(mm_patch)
+        patches = m.model_patches_to(getattr(m, "load_device", "cuda"))
```
[nitpick] The hardcoded default device "cuda" may not be appropriate for all environments. Consider using a more robust device detection mechanism or making this configurable.

```diff
-patches = m.model_patches_to(getattr(m, "load_device", "cuda"))
+default_device = "cuda" if torch.cuda.is_available() else "cpu"
+patches = m.model_patches_to(getattr(m, "load_device", default_device))
```
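The reviewer's suggestion can be hardened one step further for environments where torch itself may be absent. This is a sketch under that assumption; `pick_default_device` is a hypothetical helper, not part of the PR:

```python
def pick_default_device():
    """Best-effort default device; falls back to CPU if torch is unavailable."""
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"

# Used as the fallback when a patcher lacks a load_device attribute:
# patches = m.model_patches_to(getattr(m, "load_device", pick_default_device()))
```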
Thanks for submitting this PR, @max-solo23. It is a good PR and it addresses a bug that needs to get fixed. I am going to reject it for the following reasons:

Expect to see a fix pushed today. I won't close this until I do.
Also, I can't get the old method to fail with my current testing workflows. Care to share the workflow (or at least the model) that is giving the current code issues? The good news is that, at least with the models I am testing, I see no difference or failures, so I am inclined to release. Cheers!
…t has no attribute ‘model_patches_models’. Co-authored-by: Maksym Solomyanov <162438990+max-solo23@users.noreply.github.com>
…t has no attribute ‘model_patches_models’. Co-authored-by: max-solo23
Added a hotfix. I tried to give you Co-Author credit, but it went to some other user and I am struggling to fix it. My apologies. It is in the commit log as max-solo23. If you can give me an email address that will register as you, I'm happy to amend the commit message. Cheers!
Thanks for the detailed feedback. I understand the logic behind keeping the "fail loud" approach and limiting defensive checks. I'll align future contributions with that philosophy.

For attribution, you can use this email for the commit: maksym.solomyanov@gmail.com

Regarding reproducibility, I'll share the workflow and a short video below.

vace_v2v_example_workflow_MULTIGPU.json
error.mp4
This pull request fixes the CLIPTextEncode error:
‘GGUFModelPatcher’ object has no attribute ‘model_patches_models’.
The issue occurred when using GGUF text encoders with MultiGPU enabled.
The patch adds compatibility handling to prevent this AttributeError.
Please review and accept this fix; it restores GGUF CLIPTextEncode functionality in ComfyUI with MultiGPU.
Screenshot of the error is attached below.