Description
The error is as follows:
accelerate launch examples/qwen_image/model_training/train.py \
  --dataset_base_path /home/ai/train_lora/joycaption/wukong \
  --dataset_metadata_path /home/ai/train_lora/joycaption/wukong/image_info.csv \
  --max_pixels 1048576 \
  --dataset_repeat 50 \
  --model_paths '["/home/ai/ai_image/qwen-image/transformer","/home/ai/ai_image/qwen-image/text_encoder","/home/ai/ai_image/qwen-image/vae"]' \
  --learning_rate 1e-4 \
  --num_epochs 5 \
  --remove_prefix_in_ckpt "pipe.dit." \
  --output_path "./models/train/Qwen-Image_lora" \
  --lora_base_model "dit" \
  --lora_target_modules "to_q,to_k,to_v,add_q_proj,add_k_proj,add_v_proj,to_out.0,to_add_out,img_mlp.net.2,img_mod.1,txt_mlp.net.2,txt_mod.1" \
  --lora_rank 32 \
  --align_to_opensource_format \
  --use_gradient_checkpointing \
  --dataset_num_workers 8 \
  --find_unused_parameters
The following values were not passed to `accelerate launch` and had defaults used instead:
--num_processes was set to a value of 1
--num_machines was set to a value of 1
--mixed_precision was set to a value of 'no'
--dynamo_backend was set to a value of 'no'
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
Height and width are none. Setting dynamic_resolution to True.
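The warning above is harmless, but it can be silenced by passing the defaulted options explicitly (a sketch; the values shown are illustrative, and `bf16` is assumed here only because the pipeline itself loads in `torch.bfloat16`):

```shell
# Pass the defaulted accelerate options explicitly instead of relying on
# fallbacks; adjust --num_processes to the number of GPUs available.
accelerate launch \
  --num_processes 1 \
  --num_machines 1 \
  --mixed_precision bf16 \
  --dynamo_backend no \
  examples/qwen_image/model_training/train.py ...
```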
Loading models from: /home/ai/ai_image/qwen-image/transformer
Traceback (most recent call last):
File "/home/ai/train_lora/DiffSynth-Studio/examples/qwen_image/model_training/train.py", line 97, in <module>
model = QwenImageTrainingModule(
File "/home/ai/train_lora/DiffSynth-Studio/examples/qwen_image/model_training/train.py", line 32, in __init__
self.pipe = QwenImagePipeline.from_pretrained(torch_dtype=torch.bfloat16, device="cpu", model_configs=model_configs)
File "/home/ai/.conda/envs/DiffSynth-Studio/lib/python3.10/site-packages/diffsynth/pipelines/qwen_image.py", line 157, in from_pretrained
model_manager.load_model(
File "/home/ai/.conda/envs/DiffSynth-Studio/lib/python3.10/site-packages/diffsynth/models/model_manager.py", line 409, in load_model
model_names, models = model_detector.load(
File "/home/ai/.conda/envs/DiffSynth-Studio/lib/python3.10/site-packages/diffsynth/models/model_manager.py", line 266, in load
huggingface_lib, model_name, redirected_architecture = self.architecture_dict[architecture]
KeyError: 'QwenImageTransformer2DModel'
Traceback (most recent call last):
File "/home/ai/.conda/envs/DiffSynth-Studio/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/ai/.conda/envs/DiffSynth-Studio/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 50, in main
args.func(args)
File "/home/ai/.conda/envs/DiffSynth-Studio/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1213, in launch_command
simple_launcher(args)
File "/home/ai/.conda/envs/DiffSynth-Studio/lib/python3.10/site-packages/accelerate/commands/launch.py", line 795, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/ai/.conda/envs/DiffSynth-Studio/bin/python', 'examples/qwen_image/model_training/train.py', '--dataset_base_path', '/home/ai/train_lora/joycaption/wukong', '--dataset_metadata_path', '/home/ai/train_lora/joycaption/wukong/image_info.csv', '--max_pixels', '1048576', '--dataset_repeat', '50', '--model_paths', '["/home/ai/ai_image/qwen-image/transformer","/home/ai/ai_image/qwen-image/text_encoder","/home/ai/ai_image/qwen-image/vae"]', '--learning_rate', '1e-4', '--num_epochs', '5', '--remove_prefix_in_ckpt', 'pipe.dit.', '--output_path', './models/train/Qwen-Image_lora', '--lora_base_model', 'dit', '--lora_target_modules', 'to_q,to_k,to_v,add_q_proj,add_k_proj,add_v_proj,to_out.0,to_add_out,img_mlp.net.2,img_mod.1,txt_mlp.net.2,txt_mod.1', '--lora_rank', '32', '--align_to_opensource_format', '--use_gradient_checkpointing', '--dataset_num_workers', '8', '--find_unused_parameters']' returned non-zero exit status 1.
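For reference, the `KeyError` is raised by a plain dictionary lookup: the model detector reads the architecture name (`QwenImageTransformer2DModel`) from the model's config and looks it up in an internal registration table, so an unregistered name usually means the installed DiffSynth-Studio version predates support for that model. A minimal sketch of the mechanism, with an illustrative table (not DiffSynth's real entries):

```python
# Sketch of the lookup that fails in diffsynth/models/model_manager.py.
# The keys and values below are illustrative placeholders only; the real
# table lives inside DiffSynth-Studio.
architecture_dict = {
    "FluxTransformer2DModel": ("diffusers", "flux_dit", None),  # illustrative
}

def detect(architecture: str):
    """Look up a loader entry, mirroring the failing line in the traceback."""
    try:
        return architecture_dict[architecture]
    except KeyError:
        raise KeyError(
            f"{architecture!r} is not registered; the installed "
            "DiffSynth-Studio version may predate support for this model"
        ) from None
```

If that is the cause, upgrading DiffSynth-Studio to a release that includes Qwen-Image support is the likely fix.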