
Conversation

@lstein (Collaborator) commented Feb 13, 2023

Major Changes

The invokeai-configure script has now been refactored. The work of selecting and downloading initial models at install time is now done by a script named invokeai-model-install (module name is ldm.invoke.config.model_install).

Screen 1 - adjust startup options:
screenshot1

Screen 2 - select SD models:
screenshot2

The calling arguments for invokeai-configure have not changed, so nothing should break. After initializing the root directory, the script calls invokeai-model-install to let the user select the starting models to install.

invokeai-model-install puts up a console GUI with checkboxes to indicate which models to install. It respects the --default_only and --yes arguments so that CI will continue to work. Here are the various effects you can achieve:

invokeai-configure
This will use a console-based UI to initialize invokeai.init,
download support models, and choose and download SD models

invokeai-configure --yes
Without activating the GUI, populate invokeai.init with default values,
download support models and download the "recommended" SD models

invokeai-configure --default_only
Activate the GUI for changing init options, but don't show the SD download
form, and automatically download the default SD model (currently SD-1.5)

invokeai-model-install
Select and install models. This can be used to download arbitrary
models from the Internet, install HuggingFace models using their repo_id,
or watch a directory for models to load at startup time

invokeai-model-install --yes
Import the recommended SD models without a GUI

invokeai-model-install --default_only
As above, but only import the default model

Flexible Model Imports

The console GUI allows the user to import arbitrary models into InvokeAI using:

  1. A HuggingFace Repo_id
  2. A URL (http/https/ftp) that points to a checkpoint or safetensors file
  3. A local path on disk pointing to a checkpoint/safetensors file or diffusers directory
  4. A directory to be scanned for all checkpoint/safetensors files to be imported

The UI allows the user to specify multiple models to bulk import. The user can specify whether to import the ckpt/safetensors as-is, or convert to diffusers. The user can also designate a directory to be scanned at startup time for checkpoint/safetensors files.
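The four input forms above can be told apart with a few simple checks. The sketch below is a hypothetical, simplified classifier to illustrate the idea; it is not the dispatch logic this PR actually ships:

```python
import os
from urllib.parse import urlparse

def classify_import_source(source: str) -> str:
    """Guess which kind of model source a user-supplied string refers to.

    Returns one of: 'url', 'directory', 'file', 'repo_id'.
    A simplified illustration, not InvokeAI's actual logic.
    """
    scheme = urlparse(source).scheme
    if scheme in ("http", "https", "ftp"):
        return "url"                 # remote checkpoint/safetensors file
    if os.path.isdir(source):
        return "directory"           # scanned for checkpoint/safetensors files
    if os.path.isfile(source):
        return "file"                # single checkpoint/safetensors path
    if source.count("/") == 1:       # e.g. 'runwayml/stable-diffusion-v1-5'
        return "repo_id"             # HuggingFace repo_id
    raise ValueError(f"cannot interpret model source: {source!r}")
```

Order matters here: local filesystem checks run before the repo_id guess, since a relative path can also contain a single slash.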

Backend Changes

To support the model selection GUI, this PR introduces a new method in ldm.invoke.model_manager called heuristic_import(). This accepts a string-like object which can be a repo_id, URL, local path, or directory. It will figure out what the object is and import it. It interrogates the contents of checkpoint and safetensors files to determine what type of SD model they are -- v1.x, v2.x, or v1.x inpainting.
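The interrogation step can be illustrated with a heuristic commonly used on legacy SD checkpoints: the first UNet convolution has 9 input channels in inpainting models (vs. 4), and the cross-attention context width is 1024 in v2 models (vs. 768 in v1). The sketch below operates on a plain mapping of tensor names to shapes rather than a loaded state dict, and is an illustration of the idea, not the code in this PR:

```python
# Tensor keys as they appear in legacy Stable Diffusion state dicts.
IN_CONV = "model.diffusion_model.input_blocks.0.0.weight"
ATTN_KEY = "model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"

def guess_sd_variant(shapes: dict) -> str:
    """Classify a checkpoint as 'v1', 'v1-inpainting', or 'v2' from tensor shapes.

    `shapes` maps state-dict keys to shape tuples; a real implementation
    would load the checkpoint/safetensors file and read tensor.shape.
    """
    in_channels = shapes[IN_CONV][1]   # dim 1 of the first UNet conv weight
    context_dim = shapes[ATTN_KEY][1]  # width of the text-encoder context
    if in_channels == 9:
        return "v1-inpainting"         # inpainting UNets concatenate mask+image channels
    if context_dim == 1024:
        return "v2"                    # OpenCLIP text encoder
    return "v1"                        # CLIP ViT-L/14, context dim 768
```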

Installer

I am attaching a zip file of the installer if you would like to try the process from end to end.
InvokeAI-installer-v2.3.0.zip

…ements

1. The invokeai-configure script has now been refactored. The work of
   selecting and downloading initial models at install time is now done
   by a script named invokeai-initial-models (module
   name is ldm.invoke.config.initial_model_select)

   The calling arguments for invokeai-configure have not changed, so
   nothing should break. After initializing the root directory, the
   script calls invokeai-initial-models to let the user select the
   starting models to install.

2. invokeai-initial-models puts up a console GUI with checkboxes to
   indicate which models to install. It respects the --default_only
   and --yes arguments so that CI will continue to work.

3. User can now edit the VAE assigned to diffusers models in the CLI.

4. Fixed a bug that caused a crash during model loading when the VAE
   is set to None, rather than being empty.
@lstein (Collaborator, Author) commented Feb 14, 2023

I've decided to turn this into a full-featured interface for importing models from the Internet. I'm turning it into a draft while working out the bugs. Currently it is nonfunctional.

@lstein marked this pull request as draft February 14, 2023 05:03
…po_ids

- Ability to scan directory not yet implemented
- Can't download from Civitai due to incomplete URL download implementation
- quashed multiple bugs in model conversion and importing
- found old issue in handling of resume of interrupted downloads
- will require extensive testing
@lstein marked this pull request as ready for review February 16, 2023 08:23
@mauwii (Contributor) left a comment

LGTM!

- Corrected an error that caused the --full-precision argument to be ignored
  when models were downloaded using the --yes argument.

- Improved autodetection of v1 inpainting files; no longer relies on the
  file having 'inpaint' in the name.
@psychedelicious (Collaborator) commented

Sorry for the late review. I had tried to review a few days ago but I couldn't get it to run; I think you were in the middle of changes.

First impression - a thing of beauty!

Running the script, I had a few hiccups:

  • Before running the script, I had dreamlike-diffusion-1.0 and dreamlike-photoreal-2.0 downloaded but not in my models.yaml. I had ft-mse-improved-autoencoder-840000 downloaded and present in my models.yaml.

None of these were picked up as installed - I suppose the script looks at models.yaml to see what is installed.

After selecting dreamlike-diffusion-1.0 and autoencoder-840000, the script detected they were downloaded correctly and added them to my models.yaml. However, I now have two nearly identical entries for the VAE: ft-mse-improved-autoencoder-840000 and now autoencoder-840000, pointing to the same ckpt. Only difference is the name.

I did not select dreamlike-photoreal-2.0 to be installed, and it did not show up in my models.yaml after the script. I would expect the script to pick up everything previously installed by InvokeAI.

  • At the end of the first run, my invokeai.init file had not changed. After the second and subsequent runs, it had changed to match my selections...
  • ...almost - no matter what I select for convert to diffusers, nothing changes in invokeai.init
  • Finally, setting the NSFW box enabled or disabled does change invokeai.init, but it always defaults to enabled when running the script, even if it is disabled in invokeai.init

This is a very nice user experience for a shell script, great work! Love it.

@lstein (Collaborator, Author) commented Feb 21, 2023

> trying this out now. i expect the == CONVERT IMPORTED MODELS INTO DIFFUSERS == toggle will confuse people - is it possible for it to simply not appear if i'm not importing anything?
>
> also, can the == IMPORT LOCAL AND REMOTE MODELS == text entry box be smaller? it really only needs to be 1 line i think..

Changes implemented:

  1. The convert toggle is now conditional and only appears when there are models to import or when the "autoscan" directory is set.
  2. I have made the text entry box smaller. However, I want to make it possible for people to download multiple models at a time, so I made it 5 lines high rather than 1 line as suggested.

@lstein (Collaborator, Author) commented Feb 21, 2023

> Sorry for the late review. I had tried to review a few days ago but I couldn't get it to run; I think you were in the middle of changes.
>
> First impression - a thing of beauty!
>
> Running the script, I had a few hiccups:
>
> • Before running the script, I had dreamlike-diffusion-1.0 and dreamlike-photoreal-2.0 downloaded but not in my models.yaml. I had ft-mse-improved-autoencoder-840000 downloaded and present in my models.yaml.
>
> None of these were picked up as installed - I suppose the script looks at models.yaml to see what is installed.

The script only works off of what's in models.yaml because in fact you can have models installed anywhere in your filesystem. I'll fix it in the future so that the invokeai/models directory is scanned, if that's the desired behavior.

> After selecting dreamlike-diffusion-1.0 and autoencoder-840000, the script detected they were downloaded correctly and added them to my models.yaml. However, I now have two nearly identical entries for the VAE: ft-mse-improved-autoencoder-840000 and now autoencoder-840000, pointing to the same ckpt. Only difference is the name.

The autoencoder is no longer an option to install. Both the legacy and the diffusers versions are now installed behind the scenes for use in later model importation.

> I did not select dreamlike-photoreal-2.0 to be installed, and it did not show up in my models.yaml after the script. I would expect the script to pick up everything previously installed by InvokeAI.

See above. The question is what to do when the user has a models directory that contains model files that are not in models.yaml: do they want to import these back into models.yaml, or was there a reason they removed them in the first place?

> • At the end of the first run, my invokeai.init file had not changed. After the second and subsequent runs, it had changed to match my selections...
> • ...almost - no matter what I select for convert to diffusers, nothing changes in invokeai.init
> • Finally, setting the NSFW box enabled or disabled does change invokeai.init, but it always defaults to enabled when running the script, even if it is disabled in invokeai.init

You found a bug. Fixed.

> This is a very nice user experience for a shell script, great work! Love it.

Very high praise coming from you! Thanks.

@lstein (Collaborator, Author) commented Feb 21, 2023

> oh, i see i'm supposed to run invokeai-configure rather than invokeai-model-download. hmm.
>
> 1. on a fresh vast.ai box if i run invokeai-configure, deselect all the default models and only enter a huggingface repo id in the text box, i end up with an empty models.yaml.
> 2. even with everything deselected, a lot of stuff gets downloaded, including a 1.7GB CLIP model that i don't need because it comes packaged with the diffusers repo id, and a bunch of face gen stuff that i have never used and likely will never use. is there any way to be smarter about this and/or make it an option?

I have fixed the issue of empty models.yaml files when no recommended starter files are selected. Thanks for detecting it.

Enhancements:
1. Directory-based imports will not attempt to import components of diffusers models.
2. Diffuser directory imports now supported
3. Files that end with .ckpt that are not Stable Diffusion models (such as VAEs) are
   skipped during import.

Bugs identified in Psychedelicious's review:
1. The invokeai-configure form now tracks the current contents of `invokeai.init` correctly.
2. The autoencoders are no longer treated like installable models, but instead are
   mandatory support models. They will no longer appear in `models.yaml`

Bugs identified in Damian's review:
1. If invokeai-model-install is started before the root directory is initialized, it will
   call invokeai-configure to fix the matter.
2. Fix bug that was causing empty `models.yaml` under certain conditions.
3. Made import textbox smaller
4. Hide the "convert to diffusers" options if nothing to import.
1. Fixed display crash when the number of installed models is less than
   the number of desired columns to display them.

2. Added --ckpt_convert option to init file.
@lstein requested a review from damian0815 February 21, 2023 19:14
@lstein (Collaborator, Author) commented Feb 21, 2023

@psychedelicious @damian0815 Thank you for your reviews. I have addressed your concerns and would welcome another round of reviews.

@lstein (Collaborator, Author) commented Feb 21, 2023

> above was fixable with mkdir -p /root/invokeai/configs && touch /root/invokeai/configs/models.yaml.

This should no longer be necessary. If the model installer finds that there is no root configs or models directory, it will launch the invokeai-configure command in order to set up the root directory properly. The model installer will then pick up where it left off.

> i deselected all the default models, entered a HF repo id in the text entry box and hit DONE and it exited immediately. and /root/invokeai/configs/models.yaml is empty. that's not good.

This bug is fixed.

@damian0815 (Contributor) left a comment

thanks!

@psychedelicious (Collaborator) left a comment

  • Resizing works, but it did crash on me. I ran the script after enlarging my terminal window, then shrunk it down a bit, and it crashed due to insufficient window space. If disabling the responsive resizing will prevent crashes, I think that's better than introducing a failure condition. If resizing too small was already a failure condition, then responsive is fine.
  • NSFW option is detected ✔️
  • Unfortunately there is a new bug, appears to be related to the responsive feature. Often (but not always), the last few settings never appear and I just get a blank UI. I didn't catch it on the video, but in this state, when resizing my terminal, the missing settings briefly flash on the screen as the UI resizes. Furthermore, when this is occurring, resizing can cause a different error:
A problem occurred during initialization.
The error was: "list index out of range"
Traceback (most recent call last):
  File "/home/bat/Documents/Code/InvokeAI/ldm/invoke/config/invokeai_configure.py", line 828, in main
    init_options, models_to_download = run_console_ui(opt, init_file)
  File "/home/bat/Documents/Code/InvokeAI/ldm/invoke/config/invokeai_configure.py", line 683, in run_console_ui
    editApp.run()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/apNPSApplication.py", line 30, in run
    return npyssafewrapper.wrapper(self.__remove_argument_call_main)
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/npyssafewrapper.py", line 41, in wrapper
    wrapper_no_fork(call_function)
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/npyssafewrapper.py", line 97, in wrapper_no_fork
    return_code = call_function(_SCREEN)
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/apNPSApplication.py", line 25, in __remove_argument_call_main
    return self.main()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/apNPSApplicationManaged.py", line 172, in main
    self._THISFORM.edit()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/fm_form_edit_loop.py", line 47, in edit
    self.edit_loop()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/fm_form_edit_loop.py", line 38, in edit_loop
    self._widgets__[self.editw].edit()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgwidget.py", line 458, in edit
    self._edit_loop()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgwidget.py", line 474, in _edit_loop
    self.get_and_use_key_press()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgwidget.py", line 610, in get_and_use_key_press
    self.handle_input(ch)
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgwidget.py", line 95, in handle_input
    if self.parent.handle_input(_input):
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgwidget.py", line 71, in handle_input
    self.handlers[_input](_input)
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/fmFormMultiPage.py", line 34, in _resize
    w._resize()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgwidget.py", line 326, in _resize
    self.resize()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgtitlefield.py", line 82, in resize
    self.entry_widget._resize()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgwidget.py", line 326, in _resize
    self.resize()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgmultiline.py", line 105, in resize
    self.display()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgwidget.py", line 429, in display
    self.update()
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgselectone.py", line 18, in update
    super(SelectOne, self).update(clear=clear)
  File "/home/bat/invokeai/.venv/lib/python3.10/site-packages/npyscreen/wgmultiline.py", line 220, in update
    line = self._my_widgets[-1]
IndexError: list index out of range

Video of missing settings:
Screencast from 22-02-23 18:55:21.webm

If the cause isn't clear, maybe we should skip the resizing for this release, in which case I am happy to approve this.

@psychedelicious (Collaborator) commented

Also, somehow while testing this I ended up with a wonky invokeai.init file:

# InvokeAI initialization file
# This is the InvokeAI initialization file, which contains command-line default values.
# Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
# or renaming it and then running invokeai-configure again.
# the --outdir option controls the default location of image files.
# generation arguments
# You may place other  frequently-used startup commands here, one or more per line.
# Examples:
# --web --host=0.0.0.0
# --steps=20
# -Ak_euler_a -C10.0
#
--no-ckpt_convert
--no-ckpt_convert
--no-ckpt_convert
--no-ckpt_convert
--no-ckpt_convert
--no-ckpt_convert

--outdir=/home/bat/invokeai/outputs
--embedding_path=/home/bat/invokeai/embeddings
--precision=auto
--max_loaded_models=2
--no-nsfw_checker
--xformers
--no-ckpt_convert

- Disable responsive resizing below starting dimensions (you can make
  form larger, but not smaller than what it was at startup)

- Fix bug that caused multiple --ckpt_convert entries (and similar) to
  be written to init file.
- The configure script was misnaming the directory for text-inversion-output;
  now fixed.
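The resize fix above ("you can make the form larger, but not smaller than what it was at startup") amounts to clamping any requested dimensions against the dimensions recorded when the form was first drawn. A hypothetical sketch of that clamp (the helper name is mine, not the actual npyscreen hook):

```python
def clamp_to_startup_size(requested: tuple, startup: tuple) -> tuple:
    """Allow a (height, width) to grow beyond the startup size, never shrink below it.

    `requested` is the terminal's new size; `startup` is the size recorded
    when the form was created. Returning the element-wise max prevents the
    widget list from being rebuilt smaller, which triggered the IndexError.
    """
    return (max(requested[0], startup[0]), max(requested[1], startup[1]))
```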
@lstein lstein enabled auto-merge February 22, 2023 19:28
@psychedelicious (Collaborator) left a comment

👍
