InvokeAI Version 2.3.3 - A Stable Diffusion Toolkit

@lstein lstein released this 28 Mar 04:50
· 7876 commits to main since this release
fd74f51

We are pleased to announce a bugfix update to InvokeAI with the release of version 2.3.3.

What's New in 2.3.3

This is a bugfix and minor feature release.

Bugfixes

Since version 2.3.2 the following bugs have been fixed:

  1. When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
  2. Textual inversion training now selects an appropriate batch size based on whether xformers is active, and defaults to xformers enabled if the library is detected.
  3. The batch script log file names have been fixed to be compatible with Windows.
  4. Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
  5. Legacy config files that have no personalization (textual inversion) section can now be loaded.
  6. An infinite loop when opening the developer's console from within the invoke.sh script has been corrected.
  7. Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.

Enhancements

  1. It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested "Illuminati" model.
  2. The "NegativePrompts" embedding file, and others like it, can now be loaded by placing them in the InvokeAI embeddings directory.
  3. If no --model is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
  4. On Linux systems, the invoke.sh launcher now uses a prettier console-based interface. To take advantage of it, install the dialog package using your package manager (e.g. sudo apt install dialog).
  5. When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model following this example:
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt      # or my-favorite-model.vae.safetensors

Installation / Upgrading

To install or upgrade to InvokeAI 2.3.3 please download the zip file at the bottom of the release notes (under "Assets"), unpack it, and then double-click to launch the script install.sh (Macintosh, Linux) or install.bat (Windows). Alternatively, you can open a command-line window and execute the installation script directly.

InvokeAI-installer-v2.3.3.zip
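For the command-line route, the steps look roughly like this; the name of the unpacked directory is an assumption and may differ from what the zip actually contains:

```shell
# Unpack the installer and run the script directly (Linux/macOS shown)
unzip InvokeAI-installer-v2.3.3.zip
cd InvokeAI-Installer        # directory name is an assumption
./install.sh
```

On Windows, run install.bat from the unpacked folder instead.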

To update from 2.3.1 or 2.3.2 you may use the "update" option (choice 6) in the invoke.sh/invoke.bat launcher script and choose the option to update to 2.3.3.

Alternatively, you may use the installer zip file to update. When the installer asks you to confirm the location of the invokeai directory, enter the path to the directory you are already using if it differs from the one selected automatically. When the installer asks you to confirm that you want to install into an existing directory, simply answer "yes".

Developers and power users can upgrade to the current version by activating the InvokeAI environment and then running pip install --use-pep517 --upgrade InvokeAI. You may pin a particular version by adding it to the package name, as in InvokeAI==2.3.3. To upgrade to an xformers-enabled version if you are not currently using xformers, use pip install --use-pep517 --upgrade InvokeAI[xformers]. You can see which versions are available on the PyPI InvokeAI project page.
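The pip route can be sketched as follows; the virtual-environment path (~/invokeai/.venv) is an assumption and may differ depending on how you installed InvokeAI:

```shell
# Activate the InvokeAI environment first (path is an assumption; adjust to your install)
source ~/invokeai/.venv/bin/activate

# Upgrade to a specific version; quoting protects the brackets and == from the shell
pip install --use-pep517 --upgrade "InvokeAI==2.3.3"

# Or, to include xformers support:
pip install --use-pep517 --upgrade "InvokeAI[xformers]==2.3.3"
```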

Known Bugs in 2.3.3

These are known bugs in the release.

  1. The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
  2. Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.

What's Changed

  • Enhance model autodetection during import by @lstein in #3043
  • Correctly load legacy checkpoint files built on top of SD 2.0/2.1 bases, such as Illuminati 1.1 by @lstein in #3058
  • Add support for the TI embedding file format used by negativeprompts.safetensors by @lstein in #3045
  • Keep torch version at 1.13.1 by @JPPhoto in #2985
  • Fix textual inversion documentation and code by @lstein in #3015
  • fix corrupted outputs/.next_prefix file by @lstein in #3020
  • fix batch generation logfile name to be compatible with Windows OS by @lstein in #3018
  • Security patch: Scan all pickle files, including VAEs; default to safetensor loading by @lstein in #3011
  • prevent infinite loop when launching developer's console by @lstein in #3016
  • Prettier console-based frontend for invoke.sh on Linux systems with "dialog" installed by Joshua Kimsey.
  • ROCM debugging recipe from @EgoringKosmos


Acknowledgements

Many thanks to @psychedelicious, @blessedcoolant (Vic), @JPPhoto (Jonathan Pollack), @ebr (Eugene Brodsky), @JoshuaKimsey, @EgoringKosmos, and our crack team of Discord moderators, @gogurtenjoyer and @whosawhatsis, for all their contributions to this release.

Full Changelog: v2.3.2.post1...v2.3.3