Conversation

📝 Walkthrough

This pull request refactors the model detection mechanism from a boolean-matching approach to a numeric scoring system: a new `match_score` method supersedes the boolean `matches` hook.
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests-unit/comfy_test/model_detection_test.py (1)

66-94: ⚡ Quick win: add one regression test for the `required_keys` scoring path.

This test only proves the higher-specificity `unet_config` case. The new behavior also depends on `required_keys` when `state_dict` is present, so a regression in that branch would still leave this test green. Please add one case that distinguishes two overlapping configs only by `required_keys`.
🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:

In `comfy/model_detection.py`, around lines 907-917: the current loop in `model_config_from_unet_config` calls `model_config.match_score(unet_config, state_dict)` unconditionally, breaking subclasses that only implemented `matches()`. Change the selection logic to call `match_score` when the concrete class has overridden it (e.g. `'match_score' in model_config.__class__.__dict__`), otherwise call the legacy `matches(unet_config)` hook and use its return value to derive a compatible score, keeping the rest of the `best_score`/`best_match` logic intact. Reference the function `model_config_from_unet_config` and the methods `match_score` and `matches` on classes coming from `comfy.supported_models.models`.
📒 Files selected for processing (3)

- comfy/model_detection.py
- comfy/supported_models_base.py
- tests-unit/comfy_test/model_detection_test.py
```diff
 def model_config_from_unet_config(unet_config, state_dict=None):
     best_match = None
     best_score = -1
     for model_config in comfy.supported_models.models:
-        if model_config.matches(unet_config, state_dict):
-            return model_config(unet_config)
+        score = model_config.match_score(unet_config, state_dict)
+        if score > best_score:
+            best_score = score
+            best_match = model_config

     if best_match is not None:
         return best_match(unet_config)
```
Preserve the legacy `matches()` hook for external model configs.

Line 911 now calls `match_score()` unconditionally. Any custom model class that previously overrode `matches()` but not `match_score()` will silently lose its detection logic and may stop matching after this lands. Please keep a compatibility fallback to `matches()` when a subclass has not implemented `match_score()` yet.
Compatibility-preserving fallback:

```diff
 def model_config_from_unet_config(unet_config, state_dict=None):
     best_match = None
     best_score = -1
     for model_config in comfy.supported_models.models:
-        score = model_config.match_score(unet_config, state_dict)
+        if "match_score" not in model_config.__dict__ and "matches" in model_config.__dict__:
+            score = 0 if model_config.matches(unet_config, state_dict) else -1
+        else:
+            score = model_config.match_score(unet_config, state_dict)
         if score > best_score:
             best_score = score
             best_match = model_config
```

As per coding guidelines, "comfy/**: Core ML/diffusion engine. Focus on: Backward compatibility (breaking changes affect all custom nodes)."
This is a suggestion that would eliminate the need to keep a specific order in the models array. It also has #13654 as a prerequisite.

I asked Claude to analyze whether this change could cause any issues:
Things to be aware of
Stable_Zero123 / SD15_instructpix2pix disambiguation

Stable_Zero123 and SD15_instructpix2pix share an identical 6-key unet_config (context_dim=768, model_channels=320, …, in_channels=8). They're disambiguated only by Stable_Zero123.required_keys = {cc_projection.weight, cc_projection.bias}.
Old: first-in-list (Stable_Zero123 at index 1) wins regardless of whether cc_projection.* is in the state dict.
New (with state_dict): Stable_Zero123 scores 8 (6 + 2), SD15_instructpix2pix scores 6 → Stable_Zero123 wins iff cc_projection.* is actually present, which is the correct behaviour. Loading an instructpix2pix checkpoint via model_config_from_unet will now correctly select SD15_instructpix2pix instead of misidentifying it as Stable_Zero123.
This is a behavioural change worth knowing about, but it's a strict improvement.
Diffusers path is unchanged-but-not-improved
model_config_from_diffusers_unet calls model_config_from_unet_config(unet_config) without a state_dict (comfy/model_detection.py:1155). With state_dict=None, required_keys are not scored — so the Stable_Zero123 / SD15_instructpix2pix ambiguity remains unresolved on that path. This is identical to the previous behavior (not a regression), but worth documenting.
Score-0 catch-all impossible
The default BASE.unet_config = {} would score 0 (matches anything). I confirmed no entry in comfy.supported_models.models has an empty unet_config, so a generic BASE can never out-rank a specific match. A score of 0 only "wins" if literally no model matched — which is the same fall-through behavior as before.
No external .matches overrides
I grepped — BASE.matches is the only definition, with one caller (model_detection.py). Refactoring matches into match_score >= 0 is API-compatible.
Performance
Old: O(n) with early exit. New: always O(n). With ~80 model classes evaluated once per checkpoint load, this is unmeasurable.
unet_extra_config is applied post-selection

BASE.__init__ merges unet_extra_config into self.unet_config only after the model is chosen, so match_score correctly scores against the class-level unet_config only. No accidental matches via extras.
List ordering still loosely matters as a tiebreaker

If two models truly have identical (unet_config, required_keys), list order decides. The (now-on-another-branch) test_unet_config_and_required_keys_combination_is_unique is the right safeguard for that case (see #13654, "Add test that each model has unique identifiers", CORE-134).
Verdict
The change is safe. The only behavioural difference beyond the intended fix is the Stable_Zero123 / SD15_instructpix2pix improvement noted above, which is a bug fix in its own right.