Describe the bug
Hello HF team, @sayakpaul @bghira
I'm encountering a persistent issue when trying to fine-tune the black-forest-labs/FLUX.1-Kontext-dev model using the train_dreambooth_lora_flux_kontext.py script.
I am following the official README instructions for Image-to-Image (I2I) finetuning. My goal is to train a transformation on my own dataset, which is structured for I2I (condition image, target image, and text instruction).
The Problem
Every time I run the script with the arguments required for I2I finetuning (see the Reproduction section below), it exits with the error: "the following arguments are required: --instance_prompt".
To rule out my own dataset, I also tested the exact example command provided in the documentation (the one using kontext-community/relighting). That command fails with the same "the following arguments are required: --instance_prompt" error.
Given that both my custom command and the official example command are failing in the same way, I am trying to understand the origin of this error. It seems the --instance_prompt argument is being required even when all I2I-specific arguments are provided.
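For context, here is a minimal sketch of what I assume is happening (I have not confirmed this against the script's source): if the script registers --instance_prompt with required=True in argparse, the parser rejects any invocation that omits it, no matter which I2I-specific arguments are supplied. The parser and argument definitions below are illustrative only, not the actual script code:

import argparse

# Illustrative sketch (my assumption, not the real script source): argparse
# emits exactly this error whenever an argument declared with required=True
# is missing, regardless of which other arguments are passed.
parser = argparse.ArgumentParser(prog="train_dreambooth_lora_flux_kontext.py")
parser.add_argument("--instance_prompt", type=str, required=True)
parser.add_argument("--caption_column", type=str, default=None)
parser.parse_args(["--caption_column", "instruccion"])
# Exits with:
# train_dreambooth_lora_flux_kontext.py: error: the following arguments are required: --instance_prompt

If that is indeed the cause, I assume passing a placeholder --instance_prompt would satisfy the parser, but I am not sure whether that string would then interfere with the captions loaded via --caption_column, so I have not treated it as a fix.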
Environment
Script: examples/dreambooth/train_dreambooth_lora_flux_kontext.py
Diffusers Version: I am using the specific commit 05e7a854d0a5661f5b433f6dd5954c224b104f0b (installed via pip install -e . from a clone), as recommended in the README.
Could you please help me understand why this might be happening? Is this expected behavior, or am I perhaps missing a configuration step?
Thank you for your time!
Reproduction
I am running the following command, which provides all the arguments needed for I2I finetuning (dataset_name, image_column, cond_image_column, and caption_column) against my public dataset:
accelerate launch /local-git-path/train_dreambooth_lora_flux_kontext.py \
--pretrained_model_name_or_path="black-forest-labs/FLUX.1-Kontext-dev" \
--output_dir="/local-path/kontext-finetuning-v1" \
--dataset_name="MichaelMelgarejoTotto/mi-dataset-kontext" \
--image_column="output" \
--cond_image_column="file_name" \
--caption_column="instruccion" \
--mixed_precision="bf16" \
--resolution=1024 \
--train_batch_size=1 \
--guidance_scale=1 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--optimizer="adamw" \
--use_8bit_adam \
--cache_latents \
--learning_rate=1e-4 \
--lr_scheduler="constant" \
--lr_warmup_steps=200 \
--max_train_steps=1000 \
--rank=16 \
--seed="0"
Logs
train_dreambooth_lora_flux_kontext.py: error: the following arguments are required: --instance_prompt
System Info
- 🤗 Diffusers version: 0.35.0.dev0
- Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.28
- Running on Google Colab?: No
- Python version: 3.10.19
- PyTorch version (GPU?): 2.7.1+cu118 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.36.0
- Transformers version: 4.57.1
- Accelerate version: 1.11.0
- PEFT version: 0.17.1
- Bitsandbytes version: 0.48.1
- Safetensors version: 0.6.2
- xFormers version: not installed
- Accelerator: NA
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Who can help?
No response