
Update distorch_2.py: Fixed AttributeError: 'GGUFModelPatcher' object has no attribute 'model_patches_models' #124

Closed
max-solo23 wants to merge 1 commit into pollockjj:main from max-solo23:patch-1

Conversation

@max-solo23

@max-solo23 max-solo23 commented Oct 11, 2025

This pull request fixes the CLIPTextEncode error:
‘GGUFModelPatcher’ object has no attribute ‘model_patches_models’.

The issue occurred when using GGUF text encoders with MultiGPU enabled.
The patch adds compatibility handling to prevent this AttributeError.

Please review and accept this fix; it restores GGUF CLIPTextEncode functionality in ComfyUI with MultiGPU.
Screenshot of the error is attached below.

[Screenshot: AttributeError: 'GGUFModelPatcher' object has no attribute 'model_patches_models']

Summary by Sourcery

Bug Fixes:

  • Prevent AttributeError in GGUFModelPatcher by using model_patches_to with getattr(m, 'load_device', 'cuda') instead of model_patches_models

Added a check before iterating over model_patches_to() to prevent TypeError 
when the function returns None for certain GGUF patchers. 
This improves compatibility with single-GPU setups and GGUF models.
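The fix described above can be sketched as follows. This is a minimal, self-contained illustration of the pattern (getattr fallback for the device plus a None guard before iterating); the stub classes here are hypothetical stand-ins, not ComfyUI's actual patcher classes.

```python
class StubModel:
    """Stand-in for a normal model patcher that exposes load_device."""
    load_device = "cuda"

    def model_patches_to(self, device):
        return ["patch_on_" + device]


class StubGGUFPatcher:
    """Stand-in for a GGUF patcher: no load_device, and it may return None."""

    def model_patches_to(self, device):
        return None


def collect_patches(m, models_temp):
    # Fall back to "cuda" when the patcher exposes no load_device attribute,
    # avoiding the AttributeError from calling a method that does not exist.
    patches = m.model_patches_to(getattr(m, "load_device", "cuda"))
    # Guard against None so iteration does not raise TypeError.
    if patches:
        for p in patches:
            models_temp.add(p)


models_temp = set()
collect_patches(StubModel(), models_temp)
collect_patches(StubGGUFPatcher(), models_temp)  # no AttributeError, no TypeError
print(models_temp)  # {'patch_on_cuda'}
```

The same call path that previously raised on GGUF patchers now degrades to a no-op when no patches are returned.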
@sourcery-ai

sourcery-ai Bot commented Oct 11, 2025


Reviewer's Guide

The PR updates distorch_2.py to handle missing model_patches_models by invoking model_patches_to with a fallback load_device, ensuring GGUF text encoders work under MultiGPU without raising AttributeError.

Sequence diagram for patched model patching process in MultiGPU

```mermaid
sequenceDiagram
    participant PatchedLoader as "patched_load_models_gpu()"
    participant Model as "Model/GGUFModelPatcher"
    PatchedLoader->>Model: get load_device (default: "cuda")
    PatchedLoader->>Model: call model_patches_to(load_device)
    alt patches exist
        Model-->>PatchedLoader: return patches
        PatchedLoader->>PatchedLoader: add patches to models_temp
    else no patches
        Model-->>PatchedLoader: return None
    end
```

Class diagram for updated model patching in distorch_2.py

```mermaid
classDiagram
    class Model {
        +model_patches_to(device)
    }
    class GGUFModelPatcher {
        +load_device
        +model_patches_to(device)
    }
    Model <|-- GGUFModelPatcher
```

File-Level Changes

| Change | Details | Files |
| --- | --- | --- |
| Add compatibility handling for model patch retrieval to avoid AttributeError when model_patches_models is missing | Replaced the call to m.model_patches_models() with patches = m.model_patches_to(...); used getattr(m, "load_device", "cuda") to determine the target device; iterated over patches only when non-empty to populate models_temp | distorch_2.py |


@sourcery-ai sourcery-ai Bot left a comment


Hey there - I've reviewed your changes and they look great!



@pollockjj pollockjj requested a review from Copilot October 13, 2025 16:03
Contributor

Copilot AI left a comment


Pull Request Overview

This pull request fixes an AttributeError in the distorch_2.py file where GGUFModelPatcher objects don't have the model_patches_models method, preventing GGUF text encoders from working with MultiGPU enabled.

  • Replaces direct call to model_patches_models() with model_patches_to() method using device fallback
  • Adds null check before iterating over patches to prevent errors


Comment thread: distorch_2.py

```diff
             models_temp.add(m)
-            for mm_patch in m.model_patches_models():
-                models_temp.add(mm_patch)
+            patches = m.model_patches_to(getattr(m, "load_device", "cuda"))
```

Copilot AI Oct 13, 2025


[nitpick] The hardcoded default device 'cuda' may not be appropriate for all environments. Consider using a more robust device detection mechanism or making this configurable.

Suggested change:

```diff
-patches = m.model_patches_to(getattr(m, "load_device", "cuda"))
+default_device = "cuda" if torch.cuda.is_available() else "cpu"
+patches = m.model_patches_to(getattr(m, "load_device", default_device))
```

@pollockjj
Owner

Thanks for submitting this PR, @max-solo23.

It is a good PR and it addresses a bug that needs to be fixed. I am going to reject it for the following reasons:

  1. Your PR implements an incomplete solution: it falls back only to "cuda", while we support all torch device types.
  2. Your PR implements what are good defensive practices, but in a place where they are not aligned with the repo's "fail loud" philosophy. That compute_device comes directly from a function I construct, so I consider it known-good. If ComfyUI-MultiGPU were somehow to populate a device that Comfy, say, stopped supporting, I'd want to know immediately so I can get back into alignment with Comfy Core.
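The trade-off the owner describes can be made concrete with a small sketch. The classes below are hypothetical stand-ins: `fail_loud` trusts that the loader always sets `load_device` and lets any drift surface as an exception, while `defensive` silently masks the same drift with a hardcoded default.

```python
class KnownGoodModel:
    # load_device is populated by the loader and considered known-good.
    load_device = "cuda:1"


class BrokenModel:
    """Simulates a patcher that somehow lost its device attribute."""


def fail_loud(m):
    # No fallback: a missing load_device raises AttributeError immediately,
    # making any misalignment with the loader visible right away.
    return m.load_device


def defensive(m):
    # Silent fallback: a missing load_device is papered over with "cuda",
    # which hides the underlying bug (and is wrong on non-CUDA setups).
    return getattr(m, "load_device", "cuda")


m = KnownGoodModel()
assert fail_loud(m) == defensive(m) == "cuda:1"  # identical on the happy path

try:
    fail_loud(BrokenModel())
except AttributeError:
    print("fail loud: bug surfaced")   # the defect is visible at the source

print(defensive(BrokenModel()))        # "cuda" -- the defect is hidden
```

On known-good inputs the two behave identically; they only diverge on the buggy input, which is exactly where the fail-loud style pays off for a maintainer tracking an upstream API.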

Expect to see a fix pushed today. I won't close this until I do.

@pollockjj
Owner

Also, I can't get the old method to fail with my current testing workflows.

Care to share the workflow (or at least the model) that is giving the current code issues? The good news is that, at least with the models I am testing, I see no difference or failures, so I am inclined to release.

Cheers!

pollockjj added a commit that referenced this pull request Oct 13, 2025
…t has no attribute ‘model_patches_models’.

Co-authored-by: Maksym Solomyanov <162438990+max-solo23@users.noreply.github.com>
pollockjj added a commit that referenced this pull request Oct 13, 2025
…t has no attribute ‘model_patches_models’.

Co-authored-by: max-solo23
@pollockjj
Owner

Added hotfix. I tried to give you co-author credit, but it went to some other user and I am struggling to fix it. My apologies. It is in the commit log as max-solo23. If you can give me an email address that will register as you, I'm happy to amend the commit message.

Cheers!

@pollockjj pollockjj closed this Oct 13, 2025
@max-solo23
Author

Thanks for the detailed feedback.

I understand the logic behind keeping the "fail loud" approach and limiting defensive checks. I'll align future contributions with that philosophy.

For attribution, you can use this email for the commit: maksym.solomyanov@gmail.com

Regarding reproducibility, I'll share the workflow and a short video below.

vace_v2v_example_workflow_MULTIGPU.json

error.mp4
