
Which model to load when using stable-diffusion configs? #147

Open
wonjunior opened this issue Apr 5, 2023 · 5 comments

Comments

@wonjunior

Which checkpoint (model.ckpt) should be used with the stable diffusion configuration file (configs/stable-diffusion/v1-inference.yaml) to optimize a concept?

@kaneyxx

kaneyxx commented Apr 15, 2023

> Which checkpoint (model.ckpt) should be used with the stable diffusion configuration file (configs/stable-diffusion/v1-inference.yaml) to optimize a concept?

Have you tried it? I have the same question.

@wonjunior
Author

This seems to be addressed here. Loading the original CompVis Stable Diffusion weights along with the inference config configs/stable-diffusion/v1-inference.yaml should be sufficient to perform an inversion with Stable Diffusion. I have yet to try it myself.
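As a sketch, the training invocation would then follow the repo's README pattern with the Stable Diffusion config and checkpoint swapped in. The paths, run name, and init word below are hypothetical placeholders; check the repo README for the exact flags before running:

```shell
# Hypothetical invocation sketch (paths and run name are placeholders):
python main.py \
    --base configs/stable-diffusion/v1-inference.yaml \
    -t \
    --actual_resume /path/to/sd-v1-4.ckpt \
    -n my_sd_inversion \
    --gpus 0, \
    --data_root /path/to/concept/images \
    --init_word sculpture
```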

@kaneyxx

kaneyxx commented Apr 19, 2023

Thanks for the information, @wonjunior! I'd like to try it this weekend.

@kaneyxx

kaneyxx commented Apr 20, 2023

@wonjunior I tried it yesterday with the original Stable Diffusion v1.4 checkpoint downloaded from Hugging Face. The training process runs fine, but an error occurs during evaluation.

Saving latest checkpoint...

Traceback (most recent call last):
  File "/data/fangyi/textual_inversion/main.py", line 808, in <module>
    trainer.test(model, data)
  File "/home/fangyi/miniconda3/envs/torch/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 911, in test
    return self._call_and_handle_interrupt(self._test_impl, model, dataloaders, ckpt_path, verbose, datamodule)
  File "/home/fangyi/miniconda3/envs/torch/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/fangyi/miniconda3/envs/torch/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 954, in _test_impl
    results = self._run(model, ckpt_path=self.tested_ckpt_path)
  File "/home/fangyi/miniconda3/envs/torch/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1128, in _run
    verify_loop_configurations(self)
  File "/home/fangyi/miniconda3/envs/torch/lib/python3.10/site-packages/pytorch_lightning/trainer/configuration_validator.py", line 42, in verify_loop_configurations
    __verify_eval_loop_configuration(trainer, model, "test")
  File "/home/fangyi/miniconda3/envs/torch/lib/python3.10/site-packages/pytorch_lightning/trainer/configuration_validator.py", line 186, in __verify_eval_loop_configuration
    raise MisconfigurationException(f"No `{loader_name}()` method defined to run `Trainer.{trainer_method}`.")
pytorch_lightning.utilities.exceptions.MisconfigurationException: No `test_dataloader()` method defined to run `Trainer.test`.

Looks like something is wrong in pytorch_lightning. Any idea?
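For context, the exception itself is just PyTorch Lightning's configuration validation: before running `Trainer.test`, it checks that the model or datamodule defines a `test_dataloader()`. A simplified mimic of that check (this is an illustrative sketch, not the real library code) behaves the same way:

```python
# Simplified mimic of PyTorch Lightning's eval-loop validation
# (illustrative sketch only; the real check lives in
# pytorch_lightning/trainer/configuration_validator.py).

class MisconfigurationException(Exception):
    pass


class DataModule:
    """A datamodule that, like the textual_inversion config, only defines train data."""

    def train_dataloader(self):
        return ["batch1", "batch2"]


def verify_test_loop(datamodule):
    # Trainer.test refuses to run without a test_dataloader() method.
    if not hasattr(datamodule, "test_dataloader"):
        raise MisconfigurationException(
            "No `test_dataloader()` method defined to run `Trainer.test`."
        )


try:
    verify_test_loop(DataModule())
except MisconfigurationException as e:
    print(e)
```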

@rinongal
Owner

For the stable diffusion version you need any of the CompVis-based models, so 1.4 and 1.5 will both work fine, as will models fine-tuned from those bases (e.g. Protogen).

@kaneyxx There's just no test set defined for the model, so it crashes once training is done. If you want to avoid that crash, look at the data portion of the training config and copy the "train:" block into a "test:" block.
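In the config's data section, that fix would look roughly like the following. This is a sketch based on the repo's latent-diffusion config style; the exact target classes and parameter values here are assumptions, so mirror whatever your own "train:" block actually contains:

```yaml
# Sketch only: copy your existing "train:" block into a "test:" block
# so Trainer.test has a dataloader to run. Field values below are
# placeholders; reuse the values from your actual training config.
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 2
    train:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: train
    test:            # duplicated from "train:" to avoid the crash
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: train
```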
