Describe the bug
When I run accelerate launch train_instruct_pix2pix.py with a single GPU, it reports the error below:
File "train_instruct_pix2pix.py", line 706, in main
    unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
File "/home/xiangpeng.wan/miniconda3/envs/transformers/lib/python3.8/site-packages/accelerate/utils/dataclasses.py", line 836, in set_auto_wrap_policy
    raise Exception("Could not find the transformer layer class to wrap in the model.")
Exception: Could not find the transformer layer class to wrap in the model.
I used the default accelerate config (accelerate config default).
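As far as I can tell, the exception comes from accelerate's FSDP auto-wrap setup, which searches the model for the configured transformer layer class during accelerator.prepare(). Below is a minimal sketch of that kind of lookup; it is my simplification, not accelerate's actual implementation, and find_layer_class is a name I made up for illustration:

# Minimal sketch (assumption, not accelerate's exact code): scan the model's
# submodules for a class whose name matches the configured value.
import torch.nn as nn

def find_layer_class(model: nn.Module, class_name: str):
    # Walk every submodule and return the class of the first match by name.
    for module in model.modules():
        if module.__class__.__name__ == class_name:
            return module.__class__
    return None

# With a TRANSFORMER_BASED_WRAP policy, a lookup like this runs against the
# UNet; if the configured class name matches none of the UNet's submodules,
# the exception above is raised.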
Reproduction
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export DATASET_ID="fusing/instructpix2pix-1000-samples"
accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \
    --pretrained_model_name_or_path=$MODEL_NAME \
    --dataset_name=$DATASET_ID \
    --enable_xformers_memory_efficient_attention \
    --resolution=256 --random_flip \
    --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
    --max_train_steps=15000 \
    --checkpointing_steps=5000 --checkpoints_total_limit=1 \
    --learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=0 \
    --conditioning_dropout_prob=0.05 \
    --mixed_precision=fp16 \
    --seed=42
Logs
No response
System Info
diffusers: 0.15.0.dev0
Python: 3.8
torch: 2.0.0
accelerate: 0.18.0
Ubuntu 20.04