feat: Add reference latent support for Anima (#13392)
levzzz5154 wants to merge 6 commits into Comfy-Org:master
Conversation
Merge commit: …atents-anima-pr (conflicts: comfy/ldm/cosmos/predict2.py, comfy/model_base.py)
📝 Walkthrough: The PR adds reference-latent handling in two places.
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
Inline comments:
In `comfy/model_base.py`:
- Around lines 1225-1230: the new `ref_latents` handling (which produces `out['ref_latents']` via `self.process_latent_in`) is not accounted for in memory estimation.
- Register `ref_latents` in `memory_usage_factor_conds` with an appropriate scaling factor, using the same key/name as when creating `out['ref_latents']`.
- Have `extra_conds_shapes()` report the tensor shape(s) produced by `process_latent_in` for `ref_latents`, matching how other reference-latent models are handled, so `memory_required()` includes them.
- This ensures `MiniTrainDIT._forward()`'s concatenation of the reference latents onto `x` is reflected in VRAM estimates.
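The shape reporting the bot asks for could look roughly like this. A minimal sketch, assuming the model class exposes `memory_usage_factor_conds` and an `extra_conds_shapes()` hook as described; `AnimaModelSketch` and `FakeTensor` are illustrative stand-ins, not the real ComfyUI classes:

```python
import math

class FakeTensor:
    """Stand-in for torch.Tensor: only carries a shape."""
    def __init__(self, *shape):
        self._shape = shape

    def size(self):
        return self._shape

class AnimaModelSketch:
    def __init__(self):
        # conds that memory_required() should also count toward VRAM
        self.memory_usage_factor_conds = ("ref_latents",)

    def extra_conds_shapes(self, **kwargs):
        out = {}
        ref_latents = kwargs.get("reference_latents", None)
        if ref_latents is not None:
            # Report one flattened shape covering all reference latents,
            # mirroring how the forward pass concatenates them onto x.
            total = sum(math.prod(t.size()[2:]) for t in ref_latents)
            out["ref_latents"] = [1, 16, total]
        return out
```

With two 32x32 and 8x8 reference latents, `extra_conds_shapes` would report `[1, 16, 1088]`, so the estimator sees the extra tokens the concatenation adds.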
📒 Files selected for processing (2): comfy/ldm/cosmos/predict2.py, comfy/model_base.py
Tested this PR, works. One thing I also tested was applying the ref latents + LoRA for only half the generation steps or less, using two KSampler (Advanced) nodes. That vastly improved the quality of the results when not trying to strictly adhere to each individual canny line, without needing a second pass.
Indeed, using it for a select step range often provides better results, perhaps because the training dataset and LoRA were made for an older preview version.
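For reference, the two-pass setup described above can be written down as the KSampler (Advanced) settings it implies. A hedged sketch: the helper `split_sampler_settings` is illustrative, though `start_at_step`, `end_at_step`, `add_noise`, and `return_with_leftover_noise` are stock KSampler (Advanced) inputs:

```python
def split_sampler_settings(total_steps, split_at):
    """Return settings for two chained KSampler (Advanced) passes:
    the first (model with ref latents + LoRA applied) covers steps
    [0, split_at), the second (plain model) covers [split_at, total_steps)."""
    first = {
        "steps": total_steps,
        "start_at_step": 0,
        "end_at_step": split_at,
        # hand the partially denoised latent to the second sampler
        "return_with_leftover_noise": "enable",
    }
    second = {
        "steps": total_steps,
        "start_at_step": split_at,
        "end_at_step": total_steps,
        # the latent already carries leftover noise from the first pass
        "add_noise": "disable",
    }
    return first, second
```

For example, "half the steps or less" on a 20-step run would be `split_sampler_settings(20, 10)`, feeding the first sampler's latent output into the second.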
Flux 2-style ReferenceLatent implementation for Anima. This is needed to use editing-capable LoRAs or finetunes of Anima in ComfyUI.
As an example, a canny control LoRA: https://civitai.com/models/2443202/anima-canny-control-lora-controlnet-like
Example image:

Example workflow: Anima-RefLatent.json
Training code used to produce the example LoRA: https://github.com/levzzz5154/diffusion-pipe
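For readers unfamiliar with the pattern, what a Flux-style ReferenceLatent node does to conditioning is roughly: append the latent under a `reference_latents` key on each (cond, options) pair, which the model then picks up as `ref_latents`. A simplified stand-in, not the actual ComfyUI node code:

```python
def reference_latent(conditioning, latent_samples):
    """Append latent_samples to every conditioning entry's
    "reference_latents" list (sketch of the node's effect only)."""
    out = []
    for cond, opts in conditioning:
        opts = dict(opts)  # copy so the input conditioning is untouched
        opts["reference_latents"] = list(opts.get("reference_latents", [])) + [latent_samples]
        out.append([cond, opts])
    return out
```

Chaining the node appends multiple reference latents in order, which is why several reference images can feed one generation.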