
[bug]: Installing Models when user skipped all during install #1420

Closed
1 task done
enzyme69 opened this issue Nov 9, 2022 · 4 comments
Labels
bug Something isn't working

Comments

enzyme69 commented Nov 9, 2022

Is there an existing issue for this?

  • I have searched the existing issues

OS

macOS

GPU

mps

VRAM

8 GB

What happened?

On a Mac, during install on a new computer, I skipped all the models and ended up with an empty models.yaml. I could not find a way to install a model easily afterwards. I thought I could just run the CLI and import a model from there (with no model loaded). Apparently that's not possible; I kept getting this error and it won't run.

 Current VRAM usage:  0.00G
** "stable-diffusion-1.5" is not a known model name. Please check your models.yaml file
** "stable-diffusion-1.5" is not a known model name. Please check your models.yaml file
╭─────────────────────────── Traceback (most recent call last) ────────────────────────────╮
│ /Users/blendersushi/Documents/InvokeAI/scripts/invoke.py:884 in <module>                 │
│                                                                                          │
│   881 ######################################                                             │
│   882                                                                                    │
│   883 if __name__ == '__main__':                                                         │
│ ❱ 884 │   main()                                                                         │
│   885                                                                                    │
│                                                                                          │
│ /Users/blendersushi/Documents/InvokeAI/scripts/invoke.py:101 in main                     │
│                                                                                          │
│    98 │   │   print(">> changed to seamless tiling mode")                                │
│    99 │                                                                                  │
│   100 │   # preload the model                                                            │
│ ❱ 101 │   gen.load_model()                                                               │
│   102 │                                                                                  │
│   103 │   # web server loops forever                                                     │
│   104 │   if opt.web or opt.gui:                                                         │
│                                                                                          │
│ /Users/blendersushi/Documents/InvokeAI/ldm/generate.py:783 in load_model                 │
│                                                                                          │
│    780 │   │   '''                                                                       │
│    781 │   │   preload model identified in self.model_name                               │
│    782 │   │   '''                                                                       │
│ ❱  783 │   │   self.set_model(self.model_name)                                           │
│    784 │                                                                                 │
│    785 │   def set_model(self,model_name):                                               │
│    786 │   │   """                                                                       │
│                                                                                          │
│ /Users/blendersushi/Documents/InvokeAI/ldm/generate.py:808 in set_model                  │
│                                                                                          │
│    805 │   │   if model_data is None:  # restore previous                                │
│    806 │   │   │   model_data = cache.get_model(self.model_name)                         │
│    807 │   │                                                                             │
│ ❱  808 │   │   self.model = model_data['model']                                          │
│    809 │   │   self.width = model_data['width']                                          │
│    810 │   │   self.height= model_data['height']                                         │
│   811 │   │   self.model_hash = model_data['hash']                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────╯

Now, trying to install the model manually...

scripts/preload_models.py

**Error creating config file ./configs/models.yaml: [Errno 2] No such file or directory: './configs/new_config.tmp' **
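The `Errno 2` above suggests the script tries to create `./configs/new_config.tmp` before making sure the `configs` directory exists and is writable. A defensive pattern for this, sketched here with a hypothetical helper name (not InvokeAI's actual implementation), is to create the parent directory first and stage the write through a temp file in the same directory, followed by an atomic rename:

```python
import os
import tempfile

def write_config_atomically(path, text):
    """Write `text` to `path` safely: ensure the parent directory exists,
    stage the content in a temp file in the same directory, then rename.
    A sketch of the pattern, not the project's actual code."""
    parent = os.path.dirname(path) or "."
    os.makedirs(parent, exist_ok=True)           # avoids the Errno 2 above
    fd, tmp = tempfile.mkstemp(dir=parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        os.replace(tmp, path)                    # atomic on POSIX
    except BaseException:
        os.unlink(tmp)                           # clean up the temp file on failure
        raise
```

Renaming within the same directory keeps the replace atomic, so a crash mid-write never leaves a half-written models.yaml behind.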

Screenshots

No response

Additional context

No response

Contact Details

No response

@enzyme69 enzyme69 added the bug Something isn't working label Nov 9, 2022

enzyme69 commented Nov 9, 2022

The only way to fix this "skipped model install" situation is to manually type the path into configs/models.yaml:

It would be nice if, instead of erroring out on startup, it told the user: "Hey, you don't have any Stable Diffusion model ckpt. Would you like to import one and have it set up for you?"

redshift:
  weights: /Users/blendersushi/Downloads/_StableDiffusion_CKPT/redshift-diffusion-v1.ckpt
  description: redshift render like cinema4D style
  config: configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  default: false
cats_musical:
  weights: /Users/blendersushi/Downloads/_StableDiffusion_CKPT/Cats-Musical-Style-ctsmscl.ckpt
  description: 'cats musical '
  config: configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  default: false
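The prompt-instead-of-crash behaviour suggested above could look roughly like this. The helper names are hypothetical, and the top-level-key scan is a deliberate simplification; real code would use a YAML parser such as PyYAML or OmegaConf:

```python
def configured_models(path):
    """Rough scan for top-level stanza names in a models.yaml.
    (A sketch: a real implementation would parse the YAML properly.)"""
    names = []
    try:
        with open(path) as f:
            for line in f:
                # top-level keys start in column 0 and are not comments
                if line.strip() and line[0] not in " \t#":
                    names.append(line.split(":", 1)[0].strip())
    except FileNotFoundError:
        pass
    return names

def ensure_model_configured(path="configs/models.yaml"):
    """Exit with a friendly message instead of a KeyError traceback."""
    models = configured_models(path)
    if not models:
        raise SystemExit(
            "No Stable Diffusion models are configured in %s.\n"
            "Would you like to import a .ckpt now? (hypothetical prompt)" % path
        )
    return models
```

Called before `gen.load_model()`, a check like this would turn the `model_data['model']` KeyError in the traceback above into an actionable message.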


lstein commented Nov 10, 2022

I retract my previous comment. This is a bug in preload_models.py; it shouldn't crash out on you.


lstein commented Nov 10, 2022

I can't reproduce the problem of preload_models.py not being runnable. Is there any chance that your configs directory is not writable?

I'll see if I can fix invoke.py to let you install a model if there isn't one already configured.

lstein added a commit that referenced this issue Nov 10, 2022
- Script will now offer the user the ability to create a
  minimal models.yaml and then gracefully exit.
- Closes #1420

lstein commented Nov 10, 2022

I've just created a pull request that will fix this issue. If no models.yaml file is present, invoke.py will offer to create one for you using a provided models file. It's a little rough and dirty (I did it in a rush), but better than the current crash.
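The fix described here might be sketched as follows (hypothetical names and template; the actual PR's code may differ): if no models.yaml exists, offer to write a minimal commented template and exit gracefully instead of crashing.

```python
import os
import textwrap

# A minimal commented template; the example stanza is illustrative only.
MINIMAL_MODELS_YAML = textwrap.dedent("""\
    # Minimal models.yaml -- add one stanza per model, for example:
    # stable-diffusion-1.5:
    #   weights: /path/to/v1-5-pruned-emaonly.ckpt
    #   config: configs/stable-diffusion/v1-inference.yaml
    #   width: 512
    #   height: 512
    #   default: true
    """)

def offer_minimal_config(path="configs/models.yaml", confirm=lambda: True):
    """If `path` is missing, optionally write a minimal template, then exit.
    Returns False when a config already exists (nothing to do)."""
    if os.path.exists(path):
        return False
    if not confirm():
        raise SystemExit("No models configured; exiting.")
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    with open(path, "w") as f:
        f.write(MINIMAL_MODELS_YAML)
    raise SystemExit("Wrote a minimal %s; add a model entry and rerun invoke.py."
                     % path)
```

Exiting via `SystemExit` after writing the template matches the commit message above: create a minimal models.yaml, then gracefully exit.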

@lstein lstein closed this as completed in 1a8e007 Nov 13, 2022