Feature: FLUX.2 Klein LoRA support (#8862)
Conversation
Initial implementation for loading and applying LoRA models trained with BFL's PEFT format for FLUX.2 Klein transformers.

Changes:
- Add `LoRA_Diffusers_Flux2_Config` and `LoRA_LyCORIS_Flux2_Config`
- Add `BflPeft` format to the `FluxLoRAFormat` taxonomy
- Add `flux_bfl_peft_lora_conversion_utils` for weight conversion
- Add `Flux2KleinLoraLoaderInvocation` node

Status: work in progress, not yet fully tested

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add BFL PEFT LoRA support for FLUX.2 Klein, including runtime conversion of BFL-format keys to diffusers format with fused QKV splitting, improved detection of Klein 4B LoRAs via MLP ratio check, and frontend graph wiring.
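The fused QKV splitting mentioned above can be sketched as follows. This is a minimal illustration rather than the PR's actual implementation: the function name is hypothetical, and the assumption that only the LoRA up-projection needs slicing (while the down-projection is shared) is mine.

```python
import torch

def split_fused_qkv_lora_patch(
    a_down: torch.Tensor,  # lora_A ("down") weight, shape (rank, in_features)
    b_up: torch.Tensor,    # lora_B ("up") weight, shape (3 * hidden_size, rank)
    hidden_size: int,
) -> dict[str, tuple[torch.Tensor, torch.Tensor]]:
    """Split a LoRA patch for a fused QKV projection into per-projection patches.

    Because the fused layer stacks Q, K, V along the output dimension, only
    the up-projection (B) needs to be sliced; the down-projection (A) is
    shared unchanged by all three resulting patches.
    """
    if b_up.shape[0] != 3 * hidden_size:
        raise ValueError(f"expected fused output dim {3 * hidden_size}, got {b_up.shape[0]}")
    b_q, b_k, b_v = b_up.split(hidden_size, dim=0)
    return {"to_q": (a_down, b_q), "to_k": (a_down, b_k), "to_v": (a_down, b_v)}
```

Concatenating the three split deltas `B_i @ A` along the output dimension reproduces the fused delta `B @ A`, so applying the split patches is mathematically equivalent to applying the fused one.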
…ility Auto-detect FLUX.2 Klein LoRA variant from tensor dimensions during model probe, warn on variant mismatch at load time, and filter the LoRA picker to only show variant-compatible LoRAs.
I added the variant fields for LoRAs too; maybe we could use that for SDXL subtypes as well.
It isn't clear to me how to find BFL PEFT or diffusers LoRAs for FLUX.2; everything I found was a … In the meantime, here are some popular FLUX.2 LoRAs I tried:
I'll take a look at the ones that are not working.
Good job on this. I was trying to get the bfs_head_v1_flux-2-klein_9b LoRA working in 6.11 and wasn't having any luck; will wait for 6.12 😁
Three FLUX.2 Klein LoRAs were either unrecognized or misclassified due to format detection gaps:

1. PEFT-wrapped BFL format (`base_model.model.*` prefix) was not recognized because the detector only accepted the `diffusion_model.*` prefix.
2. Klein 4B LoRAs with hidden_size=3072 were misidentified as FLUX.1 because a `break` statement exited the detection loop before the `txt_in`/`vector_in` dimensions could be checked.
3. FLUX.2 native diffusers format (`to_qkv_mlp_proj`, `ff.linear_in`) was not detected because the detector only checked for FLUX.1 diffusers keys.

Also handles mixed PEFT/standard LoRA suffix formats within the same file.
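The first and third gaps amount to prefix and key-marker checks. Below is a minimal sketch of a detector that closes them; the prefix list and marker strings come from the gap descriptions above, while the helper names are mine.

```python
# Prefixes and markers taken from the gap descriptions above.
_PREFIXES = ("base_model.model.", "diffusion_model.")
_FLUX2_MARKERS = ("to_qkv_mlp_proj", "ff.linear_in")

def _strip_known_prefix(key: str) -> str:
    for prefix in _PREFIXES:
        if key.startswith(prefix):
            return key[len(prefix):]
    return key

def looks_like_flux2_lora(keys: list[str]) -> bool:
    """Detect FLUX.2 LoRA key layouts regardless of PEFT wrapping.

    Gap 1: accept the PEFT `base_model.model.*` prefix, not only
    `diffusion_model.*`.  Gap 3: check FLUX.2-native diffusers markers
    (`to_qkv_mlp_proj`, `ff.linear_in`) instead of only FLUX.1 keys.
    """
    stripped = [_strip_known_prefix(k) for k in keys]
    return any(marker in key for key in stripped for marker in _FLUX2_MARKERS)
```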
The Spritesheet LoRA is working; however, the ComfyUI-converted version of Spritesheet is still unrecognized. The Consistence Edit LoRA is now recognized and works. I think this covers enough of the use cases for now, so I will accept it.
Summary
Add full LoRA support for FLUX.2 Klein models (4B and 9B variants), including automatic detection, loading, key conversion, and UI integration.
Model Detection & Configuration
- Auto-detect FLUX.2 Klein LoRAs from tensor dimensions (`context_embedder`, `vector_in`, hidden size, MLP ratio) to distinguish them from FLUX.1 LoRAs
- Add a `variant` field on the config
- Add `LoRA_LyCORIS_Flux2_Config` and `LoRA_Diffusers_Flux2_Config` model configs
- Add `FluxLoRAFormat.BflPeft` as a new recognized LoRA format
- Update `Config_Base.get_tag()` to handle `None` defaults (needed for the optional `variant` field)

BFL PEFT LoRA Format Support
- Add `flux_bfl_peft_lora_conversion_utils.py` with full BFL PEFT format detection and conversion
- Add `convert_bfl_lora_patch_to_diffusers()` for LoRAs loaded with FLUX.1 format but applied to FLUX.2 models

LoRA Loading & Application
- Add `Flux2KleinLoRALoaderInvocation` and `Flux2KleinLoRACollectionLoader` invocations for applying LoRAs to Klein's transformer and Qwen3 text encoder

Frontend
- Add `addFlux2KleinLoRAs.ts` graph builder for wiring LoRAs into the Klein generation graph

Related Issues / Discussions
QA Instructions
- Install a `flux2` base model with the correct variant (4B/9B) in the Model Manager

Merge Plan
Checklist
- *What's New* copy (if doing a release after this PR)