Z:\> accelerate launch train_lora_dreambooth.py --pretrained_model_name_or_path="stabilityai/stable-diffusion-2-1-base" --instance_data_dir="data_example" --output_dir="output_example" --instance_prompt="sks patricia" --resolution=512 --train_batch_size=1 --gradient_accumulation_steps=1 --learning_rate=1e-4 --lr_scheduler="constant" --lr_warmup_steps=0 --max_train_steps=30000
The following values were not passed to `accelerate launch` and had defaults used instead:
        `--num_processes` was set to a value of `1`
        `--num_machines` was set to a value of `1`
        `--mixed_precision` was set to a value of `'no'`
        `--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
c:\users\jensf\appdata\local\programs\python\python39\lib\site-packages\accelerate\accelerator.py:321: UserWarning: `log_with=tensorboard` was passed but no supported trackers are currently installed.
  warnings.warn(f"`log_with={log_with}` was passed but no supported trackers are currently installed.")
Before training: Unet First Layer lora up
tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.],
        ...,
        [0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]])
Before training: Unet First Layer lora down
tensor([[ 0.0193, -0.0875, -0.0645,  ...,  0.0632,  0.0634, -0.1279],
        [ 0.0021, -0.0696, -0.0746,  ..., -0.0252, -0.0373, -0.0152],
        [-0.0187, -0.0599,  0.0579,  ..., -0.0196,  0.1353, -0.0437],
        [ 0.0956, -0.0547,  0.0707,  ..., -0.0133, -0.0665,  0.0195]])
c:\users\jensf\appdata\local\programs\python\python39\lib\site-packages\diffusers\configuration_utils.py:195: FutureWarning: It is deprecated to pass a pretrained model name or path to `from_config`. If you were trying to load a scheduler, please use `.from_pretrained(...)` instead. Otherwise, please make sure to pass a configuration dictionary instead. This functionality will be removed in v1.0.0.
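Both warnings above are benign but easy to silence. A hedged sketch: the flags below are the standard `accelerate launch` options the warning names, and `tensorboard` is the tracker the script requests via `log_with=tensorboard`; whether your accelerate version accepts exactly these spellings should be checked with `accelerate launch --help`.

```shell
REM Install the tracker requested via log_with="tensorboard" to silence the UserWarning.
pip install tensorboard

REM Pass the defaulted values explicitly (or run `accelerate config` once to persist them).
accelerate launch --num_processes=1 --num_machines=1 --mixed_precision=no --dynamo_backend=no ^
  train_lora_dreambooth.py ^
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-2-1-base" ^
  --instance_data_dir="data_example" --output_dir="output_example" ^
  --instance_prompt="sks patricia" --resolution=512 --train_batch_size=1 ^
  --gradient_accumulation_steps=1 --learning_rate=1e-4 ^
  --lr_scheduler="constant" --lr_warmup_steps=0 --max_train_steps=30000
```

Note the `^` continuations are Windows `cmd.exe` syntax, matching the `Z:\>` prompt above; on a POSIX shell they would be `\`.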
  deprecate("config-passed-as-path", "1.0.0", deprecation_message, standard_warn=False)
***** Running training *****
  Num examples = 8
  Num batches each epoch = 8
  Num Epochs = 3750
  Instantaneous batch size per device = 1
  Total train batch size (w. parallel, distributed & accumulation) = 1
  Gradient Accumulation steps = 1
  Total optimization steps = 30000
Steps:   0%|          | 0/30000 [00:00<?, ?it/s]
Traceback (most recent call last):
    main(args)
  File "D:\2.35\dev\3.10\lora2\train_lora_dreambooth.py", line 791, in main
    for step, batch in enumerate(train_dataloader):
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\site-packages\accelerate\data_loader.py", line 372, in __iter__
    dataloader_iter = super().__iter__()
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __iter__
    return self._get_iterator()
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\site-packages\torch\utils\data\dataloader.py", line 381, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\site-packages\torch\utils\data\dataloader.py", line 1034, in __init__
    w.start()
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'DreamBoothDataset.__init__.<locals>.<lambda>'

(traceback from the spawned DataLoader worker process, interleaved with the above in the raw log:)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Steps:   0%|          | 0/30000 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "c:\users\jensf\appdata\local\programs\python\python39\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['c:\\users\\jensf\\appdata\\local\\programs\\python\\python39\\python.exe', 'train_lora_dreambooth.py', '--pretrained_model_name_or_path=stabilityai/stable-diffusion-2-1-base', '--instance_data_dir=data_example', '--output_dir=output_example', '--instance_prompt=sks patricia', '--resolution=512', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--learning_rate=1e-4', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--max_train_steps=30000']' returned non-zero exit status 1.
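The root cause is the `AttributeError`, not the worker's `EOFError` (which is just fallout from the parent dying mid-handshake): on Windows, `multiprocessing` uses the `spawn` start method, so each `DataLoader` worker must receive a pickled copy of the dataset, and a `lambda` defined inside `DreamBoothDataset.__init__` cannot be pickled because pickle stores functions by qualified name. A minimal sketch of the failure and the usual fix; the names `BadDataset`, `GoodDataset`, and `double` are illustrative, not taken from `train_lora_dreambooth.py`:

```python
import pickle

def double(x):
    """Module-level callable: picklable, since pickle records its qualified name."""
    return x * 2

class BadDataset:
    def __init__(self):
        # A lambda created inside __init__ (like the one in DreamBoothDataset)
        # is a local object with no importable qualified name.
        self.transform = lambda x: x * 2

class GoodDataset:
    def __init__(self):
        # Referencing a module-level function instead makes the instance picklable.
        self.transform = double

try:
    pickle.dumps(BadDataset())
    bad_pickles = True
except (AttributeError, pickle.PicklingError):
    # CPython raises AttributeError: Can't pickle local object
    # 'BadDataset.__init__.<locals>.<lambda>' -- the same shape as the log above.
    bad_pickles = False

good_pickles = pickle.loads(pickle.dumps(GoodDataset())).transform(21) == 42
```

In the training script itself the equivalent fixes are to hoist the lambda inside `DreamBoothDataset.__init__` to a module-level function (or a method on the dataset), or to construct the `DataLoader` with `num_workers=0` so no worker processes are spawned at all.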