
feat: Add reference latent support for Anima#13392

Open
levzzz5154 wants to merge 6 commits into Comfy-Org:master from levzzz5154:ref-latents-anima-pr

Conversation

@levzzz5154

A Flux 2 style ReferenceLatent implementation for Anima, needed to use editing-capable LoRAs or finetunes of Anima in ComfyUI.
As an example, a canny control LoRA: https://civitai.com/models/2443202/anima-canny-control-lora-controlnet-like

Example image:
animagrid_00004_

Example workflow: Anima-RefLatent.json
Training code used to produce the example LoRA: https://github.com/levzzz5154/diffusion-pipe

@coderabbitai

coderabbitai bot commented Apr 13, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e52c41a6-9001-4bed-9f04-39279f896955

📥 Commits

Reviewing files that changed from the base of the PR and between 312adfc and ebdeb9c.

📒 Files selected for processing (1)
  • comfy/model_base.py

📝 Walkthrough

The PR adds reference-latent handling in two places:

- In `MiniTrainDIT._forward` (comfy/ldm/cosmos/predict2.py): reads `ref_latents` from kwargs, optionally unsqueezes 4D latents, casts them to the input tensor's dtype/device, and concatenates each along dimension 2 before the existing padding/embedding/transformer path.
- In `Anima` (comfy/model_base.py): adds `memory_usage_factor_conds = ("ref_latents",)`, extends `extra_conds()` to produce `out['ref_latents']` as a CONDList when `reference_latents` is provided, and adds `extra_conds_shapes()` to report the corresponding shapes.
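The concatenation path the walkthrough describes can be sketched as a minimal standalone function (an illustration only, not the actual predict2.py code; the function name and the (B, C, T, H, W) layout are assumptions):

```python
import torch

def concat_ref_latents(x: torch.Tensor, ref_latents: list) -> torch.Tensor:
    """Append reference latents to x along dimension 2 (the temporal axis).

    x is assumed to be a 5D latent (B, C, T, H, W); a 4D reference
    (B, C, H, W) gets a singleton temporal axis inserted first.
    """
    for ref in ref_latents:
        if ref.ndim == 4:
            ref = ref.unsqueeze(2)  # (B, C, H, W) -> (B, C, 1, H, W)
        # match the input tensor's dtype and device before concatenating
        ref = ref.to(dtype=x.dtype, device=x.device)
        x = torch.cat((x, ref), dim=2)
    return x
```

Each appended reference grows the temporal dimension by one slice, which is why the memory-accounting change in model_base.py matters: the transformer now attends over the extra tokens.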

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions that are missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The PR title 'feat: Add reference latent support for Anima' clearly and concisely summarizes the main change: adding reference latent support to the Anima model. |
| Description check | ✅ Passed | The PR description provides context about the feature (Flux 2 style ReferenceLatent implementation), its purpose (enabling editing-capable LoRAs for Anima), and includes supporting materials (example workflow, training code, and demonstration image). |





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy/model_base.py`:
- Around line 1225-1230: The new handling of ref_latents (produced in
ref_latents -> out['ref_latents'] via self.process_latent_in) is not being
accounted for in memory estimation; update the model's memory accounting by
registering ref_latents in memory_usage_factor_conds and reporting their shapes
in extra_conds_shapes() so memory_required() includes them; specifically, add an
entry for the same key/name used when creating out['ref_latents'] into
memory_usage_factor_conds with an appropriate scaling factor and ensure
extra_conds_shapes() returns the tensor shape(s) produced by process_latent_in
for ref_latents (matching how other reference-latent models are handled), so
MiniTrainDIT._forward()’s concatenation onto x is reflected in VRAM estimates.
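The shape-reporting half of that fix can be sketched as a hypothetical standalone helper (an assumption modeled on how other reference-latent models report shapes for memory estimation, not the actual ComfyUI code; the flattening into a nominal (1, 16, N) shape is illustrative):

```python
import math
import torch

def extra_conds_shapes(reference_latents):
    """Hypothetical sketch: report a shape for ref_latents so the model's
    memory_required() estimate accounts for the concatenated reference
    tokens registered via memory_usage_factor_conds."""
    out = {}
    if reference_latents:
        # collapse all reference latents into one nominal (1, 16, N) shape
        total = sum(math.prod(r.shape) for r in reference_latents)
        out["ref_latents"] = [1, 16, total // 16]
    return out
```

The point is only that the reported element count scales with the total size of the reference latents, so larger references inflate the VRAM estimate proportionally.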

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 5674b5de-8293-43a2-8672-19dfdd0b4fb5

📥 Commits

Reviewing files that changed from the base of the PR and between acd7185 and 312adfc.

📒 Files selected for processing (2)
  • comfy/ldm/cosmos/predict2.py
  • comfy/model_base.py

Comment thread comfy/model_base.py
Member

@Kosinkadink Kosinkadink left a comment


Tested this PR; it works. One thing I also tested was applying the ref latents + LoRA for only half the generation steps or less by chaining two KSampler (Advanced) nodes, and that vastly improved the quality of the results when not trying to strictly adhere to each individual canny line, without needing a second pass.

@levzzz5154
Author

Indeed, using it for a select step range often produces better results, perhaps because the training dataset and LoRA were made for an older preview version.



2 participants