
make control-net load order deterministic (CORE-139) #13701

Merged

comfyanonymous merged 2 commits into Comfy-Org:master from rattus128:prs/deterministic-control-load on May 4, 2026

Conversation

@rattus128 rattus128 (Contributor) commented May 4, 2026

#13668

Make this deterministic so speeds don't change based on load order. Load the models in reverse order so that whatever the caller lists first is the top priority.

I never reproduced the original issue; however, we should make this deterministic to avoid confusing performance analysis. This follows the OP's preference of loading the diffusion model in preference to the ControlNet.
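For illustration, a minimal sketch of the deduplicate-then-reverse pattern described above; the names are illustrative, not the exact ComfyUI code:

# A minimal sketch of the pattern this PR applies (illustrative, not the
# exact ComfyUI code). dict.fromkeys() deduplicates while keeping
# first-seen order (guaranteed since Python 3.7), unlike set().
def order_models_for_load(models):
    ordered = list(dict.fromkeys(models))  # caller order, duplicates dropped
    ordered.reverse()  # the first-listed (top-priority) model now loads last
    return ordered

With models = [diffusion, controlnet], the resulting load order is [controlnet, diffusion], so the diffusion model is loaded last.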

Example test conditions:

Windows, RTX5060, PCIE2x8 (bus downgrade)
SDXL + controlnet with latent2rgb preview

100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:30<00:00,  1.00s/it]
Requested to load AutoencoderKL
0 models unloaded.
Model AutoencoderKL prepared for dynamic VRAM loading. 159MB Staged. 0 patches attached.
Prompt executed in 37.47 seconds

The load pattern matches the OP's preferred load order as per the report above and is now deterministic.

Make this deterministic so speeds don't change based on load order. Load
them in reverse order so whatever the caller lists first is the top
priority.

coderabbitai Bot commented May 4, 2026

📝 Walkthrough

This pull request updates model deduplication logic in two modules. In comfy/model_management.py, the load_models_gpu function replaces set-based deduplication with an order-preserving dictionary approach, followed by explicit list reversal. In comfy/sampler_helpers.py, the get_additional_models function replaces set-based control net deduplication with dict.fromkeys() to preserve iteration order. Both changes maintain the deduplication intent while ensuring deterministic processing order across runs.
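As a toy illustration of why the swap matters (hypothetical objects, not ComfyUI types): set-based deduplication orders elements by hash, which for ordinary objects depends on memory addresses and so can differ between runs, while dict.fromkeys() keeps first-seen order.

class Net:
    """Stand-in for a model/ControlNet object (hypothetical)."""

a, b, c = Net(), Net(), Net()
models = [a, b, a, c]

# set() deduplicates, but iteration order follows the objects' hashes
# (identity-based here), so it can vary from run to run:
maybe_shuffled = list(set(models))

# dict.fromkeys() deduplicates and preserves first-seen order:
stable = list(dict.fromkeys(models))
assert stable == [a, b, c]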

🚥 Pre-merge checks: 4 passed, 1 warning

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4)
  • Title check: the title clearly and specifically describes the main change, making ControlNet load order deterministic, with a reference to the issue number.
  • Description check: the description is directly related to the changeset, explaining the motivation for deterministic model loading and providing test conditions and expected behavior.
  • Linked Issues check: skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: skipped because no linked issues were found for this pull request.


@coderabbitai coderabbitai Bot left a comment
Actionable comments posted: 1


📥 Commits

Reviewing files that changed from the base of the PR and between f3ea976 and 9d9e0c5.

📒 Files selected for processing (2)
  • comfy/model_management.py
  • comfy/sampler_helpers.py

Comment thread comfy/model_management.py
Comment on lines +731 to +732
models = list(models_temp)
models.reverse()

@coderabbitai coderabbitai Bot May 4, 2026


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

models.reverse() inverts the intended "top priority" semantics

With the reversal, the diffusion model (first in the caller's list) is loaded last and ends up at index 0 of current_loaded_models. This has two counterproductive effects:

  1. VRAM staging (low/normal VRAM modes): lowvram_model_memory is calculated from the current free VRAM at load time. ControlNets load first and consume VRAM, so by the time the diffusion model is loaded, get_free_memory() is lower, reducing its pre-staged weight allocation — the opposite of preferential treatment.

  2. Eviction tiebreaker: free_memory sorts candidates as (-offloaded_mem, refcount, model_memory, i) ascending; the smallest i is evicted first. Index 0 (the diffusion model after reversal) is therefore the first candidate for eviction when other factors tie, not the last.

Removing the .reverse() call still produces deterministic ordering (Python 3.7+ guarantees dict preserves insertion order as an official language feature) and gives the diffusion model the best VRAM staging opportunity and the highest eviction protection index (it loads first → gets pushed to the highest index as subsequent models are inserted at 0).

✂️ Proposed fix
 models = list(models_temp)
-models.reverse()

Or, equivalently collapsed to a single expression:

-models = list(models_temp)
-models.reverse()
+models = list(reversed(models_temp))
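To make the tiebreaker point above concrete, here is a small sketch of the eviction sort as the comment describes it; the tuple fields are assumptions based on that description, not the exact free_memory code:

# Hypothetical eviction-candidate sort, following the key described above.
# Tuples are (offloaded_mem, refcount, model_memory, index).
candidates = [
    (0, 1, 4096, 0),  # diffusion model at index 0 after the reversal
    (0, 1, 4096, 1),  # a ControlNet at index 1
]
eviction_order = sorted(candidates, key=lambda c: (-c[0], c[1], c[2], c[3]))
# All other factors tie, so the smallest index sorts first:
assert eviction_order[0][3] == 0  # index 0 is the first eviction candidate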

@rattus128 rattus128 (Contributor, Author)

The real problem is the semantics of comfy-aimdo: whichever model gets loaded last is the top priority in aimdo.

@coderabbitai coderabbitai Bot

@rattus128, thanks for the clarification — that makes the intent of models.reverse() clear. Since comfy-aimdo treats the last-loaded model as the top priority, reversing the caller's list ensures the diffusion model (passed first by the caller) is loaded last and therefore gets the highest priority in aimdo. My original concern about eviction index ordering doesn't apply here because aimdo's own priority tracking supersedes the current_loaded_models index tiebreaker logic. The change is correct as written.

(\(\
( -.-)
o_(")(")   *sniffs the air curiously*

✏️ Learnings added
Learnt from: rattus128
Repo: Comfy-Org/ComfyUI PR: 13701
File: comfy/model_management.py:731-732
Timestamp: 2026-05-04T17:10:08.058Z
Learning: In comfy-aimdo (comfy_aimdo), the model that is loaded **last** in `load_models_gpu` is treated as the top priority. Therefore, to give the diffusion model (passed first by the caller) the highest aimdo priority, the `models` list must be reversed before the load loop — so the diffusion model is loaded last. This is why `models.reverse()` is intentional in `comfy/model_management.py` when aimdo is involved.
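As a short worked example of the last-loaded-wins semantics recorded above (assumed behavior, per the discussion in this thread):

models = ["diffusion", "controlnet_a", "controlnet_b"]  # caller order: first = top priority
load_order = list(reversed(dict.fromkeys(models)))
assert load_order == ["controlnet_b", "controlnet_a", "diffusion"]
# The diffusion model loads last, so aimdo treats it as the top priority.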





@alexisrolland alexisrolland (Member) left a comment


Tested and this works fine.

@comfyanonymous comfyanonymous merged commit 1ac7818 into Comfy-Org:master on May 4, 2026
14 checks passed

Kargim commented May 5, 2026

@rattus128 Sorry for going off-topic, but I really like your VRAM and other stats dashboard. Is that a custom node or a built-in tool?

@alexisrolland alexisrolland (Member)

> @rattus128 Sorry for going off-topic, but I really like your VRAM and other stats dashboard. Is that a custom node or a built-in tool?

It is a custom node created by Kijai specifically to visualize dynamic VRAM allocation: https://github.com/kijai/ComfyUI-MemoryVisualization
