
Safety checker on-off switch for stable diffusion pipeline #254

Closed
wants to merge 2 commits

Conversation

samrahimi

Congrats to everyone for doing an amazing job on this library, it is so simple and well put together that even Colab use is painless. It means even a non-programmer can run this and have complete control and unlimited free use of the best generative AI pipeline currently in mainstream use.

Or rather, they should be able to have complete control over it - but they are not. They are able to change all of the core settings when instantiating or calling the StableDiffusionPipeline - EXCEPT they have no control at all over the safety filter, which is not even core: it is a separate model, and it is not required for the pipeline to fulfill its main purpose, which is running inference against the Stable Diffusion model and generating images from a text prompt.
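For context, a minimal sketch of the kind of control the pipeline already gives the user (the model ID and argument values below are illustrative, not a recommendation):

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative sketch: every core generation setting is in the user's hands.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a painting of a lighthouse at dawn",
    num_inference_steps=50,
    guidance_scale=7.5,
    height=512,
    width=512,
).images[0]
image.save("lighthouse.png")

# ...but, at the time of this PR, the safety checker ran unconditionally after
# decoding, with no constructor or call argument to skip it.
```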

Remember: this is not a hosted inference API - in that case it would be totally different. This is a pipeline that is downloaded to the user's local hardware (which could be a Colab VM) and then executed locally, at their own risk.

It does not seem fair that I can choose whether or not to use the filter, but the average artist who makes a living from the work he produces with the software does not have that choice (I am not an artist, I am an engineer who likes art). But I know there are all sorts of considerations at play here, so I trust the maintainers to make the right call :)

The safety checker can now be switched on or off, allowing a greater diversity of use cases for the Stable Diffusion pipeline.
@patrickvonplaten
Contributor

Good arguments @samrahimi! It is a bit tricky here indeed, and I'd love to kick off some discussion where maybe everybody can leave their opinions.

I understand your opinion, and there are both upsides and downsides to the decision not to make it easy to disable the model. IMO the upsides clearly outweigh the downsides, though. An artist acting in good faith can just run the same model (or just the Space) over and over again with a different seed in case something gets unintentionally blocked.
So the downside is really limited, IMO. For someone that doesn't want to code, that's what we've built a Space for: https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions
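(A minimal sketch of what "run it again with a different seed" looks like in code; the model ID and seed values are illustrative:)

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

prompt = "portrait of a dancer, oil on canvas"
for seed in (1, 2, 3):  # retry with a few seeds if a result gets unintentionally blocked
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"dancer_seed{seed}.png")
```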

The upside, IMO, is that with the safety checker always on, only a small percentage of potential users know how to disable it, which does have an impact in limiting harmful usage.

cc'ing a couple of people here: @mmitchellai @yjernite @apolinario

@BIGJUN777

Hi there.
I ran into some problems when using the safety model. Please take a look at this issue: #213 (comment). Thanks.

@yjernite
Member

yjernite commented Sep 1, 2022

Agree with @patrickvonplaten that the cost-benefit analysis points toward leaving the code as is for now at the very least.

Note that this is a rapidly evolving technology - we're likely to see new models, new safety filters, and new ways of combining both in the near future.

We've also already seen some pretty damning uses of the model without the filter since its release. Having safety as a flag allows people to create harmful content on a simple Colab, whereas requiring them to change the code disables this option.

How we handle safety filters should remain an open discussion, but I wouldn't advise making it fully optional right this moment.

cc also @IreneSolaiman

@patrickvonplaten
Contributor

Closing this for now since there doesn't seem to be much activity on the discussion side.

@theahura

theahura commented Oct 6, 2022

A few points:

First, the question of what actually constitutes NSFW has, built into it, a whole host of implicit biases. As an example, I work with a large number of LGBT artists whose work and self-expression is often banned as NSFW. This alone seriously limits my ability to use this library if there is no option to turn off the "safety" module; my collaborators would find such a module insensitive if not outright oppressive (and they should!).

Second, the safety checker isn't free. It costs compute (time, resources) to run. Someone using this library on their own machines should be allowed to make the cost tradeoff for themselves.

Third, I'm currently interested in using the diffusers library for a more involved project. However, I can't justify maintaining my own version of the library, especially if there are downstream security patches or bugfixes. As a result, though I would be able to 'modify the code', this is not actually a practical option. I'm sure there are many people who are in a similar boat as I am, and who as a result cannot use this library.


Responding to:

We've also already seen some pretty damning uses of the model without the filter since its release. Having safety as a flag allows people to create harmful content on a simple colab, whereas requiring them to change the code disables this option.

While I understand the desire to try and head off malicious uses of this technology, I think the approach of having an always-on censor backfires. First, doing so implicitly shifts responsibility for harmful outputs from the user to the maintainers, which I think opens this library up to a lot more risk than is necessary (see also: all of the discussion around editorial voice).

Second, on a more practical level, we don't have to implement a safety flag in a 'dumb' way. I would rather see a fix that simply hard-codes the safety checker on when the user is running in Colab (https://stackoverflow.com/questions/53581278/test-if-notebook-is-running-on-google-colab). This addresses @yjernite's concerns while also allowing maximal freedom to the end user.
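A rough sketch of that idea; the `sys.modules` check is the standard one from the linked Stack Overflow thread, while the surrounding flag-resolution logic is hypothetical:

```python
import sys

def running_in_colab() -> bool:
    # Colab injects the google.colab package into the running interpreter.
    return "google.colab" in sys.modules

def resolve_safety_checker(user_wants_checker: bool) -> bool:
    # Hypothetical policy: always force the checker on inside Colab,
    # otherwise honour the user's choice.
    return True if running_in_colab() else user_wants_checker
```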


Overall, I strongly encourage the maintainers to consider adding a flag that would allow users to turn the checker off.

@theahura

theahura commented Oct 7, 2022

Follow up @patrickvonplaten

From reading your comment I want to clarify: you are currently suggesting that someone in my position should fork the library and pull out the safety checker manually? Or when you and @yjernite talk about modifying the code, are you somehow talking about modifying the inputs to the pipeline?

I poked around in the codebase, and from a relatively deep pass through it I couldn't see an easy way to mock the inputs to the pipeline in a way that would fool the system (you don't actually expose the SafetyChecker type, so I can't build a class that extends it and just override the module in the pipeline kwargs). So I think I have to fork the entire library to modify the code the way I'd like, but I want to make sure that that is actually what your current best proposal is.
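(For reference, one workaround people reached for at the time was a pass-through stand-in rather than a subclass; this is a sketch that relies on the pipeline's internal call convention, not a supported API:)

```python
from diffusers import StableDiffusionPipeline

def passthrough_checker(images, **kwargs):
    # Mimic the (images, has_nsfw_concepts) return shape and flag nothing.
    return images, [False] * len(images)

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.safety_checker = passthrough_checker  # monkey-patch; may break across diffusers versions
```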

@samrahimi
Author

@theahura is absolutely right... because users need to turn off the safety checker and there is no clean way to do it, they are forking the whole pipeline and not keeping it up to date. I think a related issue is that there are both the Hugging Face diffusers pipeline and the original Stable Diffusion repo, and they are not being kept in sync.

Oh, and then there are the stable-diffusion-webui forks that are falling way out of date. Case in point: the SD Horde web UI, which is incredibly useful (it crowdsources the rendering job to users with GPUs) but is 59 commits behind the latest version. The UI portion does not concern me... it is the SD inference code being out of date that is an issue. https://github.com/db0/stable-diffusion-webui

I'm sure you guys are discussing this already but I feel like there needs to be a refactor so that the core SD inference pipeline has a single source of truth...

@patrickvonplaten
Contributor

Hey @theahura and @samrahimi,

Thanks for your comments. We are currently discussing adding an easy way to disable the safety checker (probably via a config attribute) - I'll post a summary here soon!

@theahura

Thanks, I appreciate that you're giving this ample consideration.

I found this article (https://forum.effectivealtruism.org/posts/fH5adhXF377Bt6fWj/public-facing-censorship-is-safety-theater-causing) informative about other harms that come from censorship that is pretending to be AI safety/ethics. Hopefully it helps in the discussion a bit as well!

@patrickvonplaten
Contributor

Hey everybody,

After some internal discussion, we decided that we should indeed allow the option to disable the safety checker so that users don't have to fork the repository. However, I want to make it crystal clear that we do not recommend disabling the safety checker, and if you do so, please consider the ethical implications of what disabling it could entail. E.g., if you build an application on top of diffusers that disables the safety checker by default, you might accidentally create a tool that is more likely to generate harmful images and that might be exploited by users with bad intentions. We therefore hope you are considerate when disabling the safety checker (see PR #815 for more details).
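Concretely, the switch boils down to passing `safety_checker=None` when loading the pipeline (the warning text quoted in the InvokeAI logs further down in this thread comes from exactly this path); a minimal example:

```python
from diffusers import StableDiffusionPipeline

# Disabling the checker triggers a warning reminding you of the license conditions.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    safety_checker=None,
)
```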

@patrickvonplaten
Contributor

Pinging @yjernite @mmitchellai @giadilli @sashavor @natolambert @apolinario @robinrombach @patil-suraj @anton-l in case you would like to add something!

@theahura

Thanks again for thinking this through! This is definitely a thorny issue with not so many obvious right answers, so I'm glad that either way you're all approaching it with the seriousness it deserves.

@harubaru

harubaru commented Oct 21, 2022

Hello, is there a way to disable the logging message? I've disabled the safety checker to reduce VRAM usage as part of running evaluations for a custom training script on the model. The only issue is that it clutters up the console output.

@patrickvonplaten
Contributor

Does this help: https://huggingface.co/docs/diffusers/api/logging? There should be some functionality to disable warning messages, I believe.
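(A minimal sketch of quieting those messages, assuming the verbosity helpers documented on that page:)

```python
from diffusers.utils import logging

logging.set_verbosity_error()  # silence info/warning-level messages from diffusers
```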

lstein added a commit to invoke-ai/InvokeAI that referenced this pull request Dec 22, 2022
1. On-the-fly conversion code can be found in the file ldm/invoke/ckpt_to_diffusers.py.

2. A new !optimize command has been added to the CLI. Should be ported to Web GUI.

User experience on the CLI is this:

```
invoke> !optimize /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
INFO: Converting legacy weights file /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt to optimized diffuser model.
      This operation will take 30-60s to complete.
Success. Optimized model is now located at /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
Writing new config file entry for sd-v1-4...

>> New configuration:
sd-v1-4:
  description: Optimized version of sd-v1-4
  format: diffusers
  path: /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4

OK to import [n]? y
>> Verifying that new model loads...
>> Current VRAM usage:  2.60G
>> Offloading stable-diffusion-2.1 to CPU
>> Loading diffusers model from /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
  | Using faster float16 precision
You have disabled the safety checker for <class 'ldm.invoke.generator.diffusers_pipeline.StableDiffusionGeneratorPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion \
license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances,\
 disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
  | training width x height = (512 x 512)
>> Model loaded in 3.48s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage:2.17G
>> Textual inversions available:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
Keep model loaded? [y]
```
lstein added a commit to invoke-ai/InvokeAI that referenced this pull request Jan 15, 2023
* initial commit of DiffusionPipeline class

* spike: proof of concept using diffusers for txt2img

* doc: type hints for Generator

* refactor(model_cache): factor out load_ckpt

* model_cache: add ability to load a diffusers model pipeline

and update associated things in Generate & Generator to not instantly fail when that happens

* model_cache: fix model default image dimensions

* txt2img: support switching diffusers schedulers

* diffusers: let the scheduler do its scaling of the initial latents

Remove IPNDM scheduler; it is not behaving.

* web server: update image_progress callback for diffusers data

* diffusers: restore prompt weighting feature

* diffusers: fix set-sampler error following model switch

* diffusers: use InvokeAIDiffuserComponent for conditioning

* cross_attention_control: stub (no-op) implementations for diffusers

* model_cache: let offload_model work with DiffusionPipeline, sorta.

* models.yaml.example: add diffusers-format model, set as default

* test-invoke-conda: use diffusers-format model
test-invoke-conda: put huggingface-token where the library can use it

* environment-mac: upgrade to diffusers 0.7 (from 0.6)

this was already done for linux; mac must have been lost in the merge.

* preload_models: explicitly load diffusers models

In non-interactive mode too, as long as you're logged in.

* fix(model_cache): don't check `model.config` in diffusers format

clean-up from recent merge.

* diffusers integration: support img2img

* dev: upgrade to diffusers 0.8 (from 0.7.1)

We get to remove some code by using methods that were factored out in the base class.

* refactor: remove backported img2img.get_timesteps

now that we can use it directly from diffusers 0.8.1

* ci: use diffusers model

* dev: upgrade to diffusers 0.9 (from 0.8.1)

* lint: correct annotations for Python 3.9.

* lint: correct AttributeError.name reference for Python 3.9.

* CI: prefer diffusers-1.4 because it no longer requires a token

The RunwayML models still do.

* build: there's yet another place to update requirements?

* configure: try to download models even without token

Models in the CompVis and stabilityai repos no longer require them. (But runwayml still does.)

* configure: add troubleshooting info for config-not-found

* fix(configure): prepend root to config path

* fix(configure): remove second `default: true` from models example

* CI: simplify test-on-push logic now that we don't need secrets

The "test on push but only in forks" logic was only necessary when tests didn't work for PRs-from-forks.

* create an embedding_manager for diffusers

* internal: avoid importing diffusers DummyObject

see huggingface/diffusers#1479

* fix "config attributes…not expected" diffusers warnings.

* fix deprecated scheduler construction

* work around an apparent MPS torch bug that causes conditioning to have no effect

* 🚧 post-rebase repair

* preliminary support for outpainting (no masking yet)

* monkey-patch diffusers.attention and use Invoke lowvram code

* add always_use_cpu arg to bypass MPS

* add cross-attention control support to diffusers (fails on MPS)

For unknown reasons MPS produces garbage output with .swap(). Use
--always_use_cpu arg to invoke.py for now to test this code on MPS.

* diffusers support for the inpainting model

* fix debug_image to not crash with non-RGB images.

* inpainting for the normal model [WIP]

This seems to be performing well until the LAST STEP, at which point it dissolves to confetti.

* fix off-by-one bug in cross-attention-control (#1774)

prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos> - to make a default prompt length of <bos> + 75 prompt tokens + <eos>. the .swap() code was failing to take the column for <bos> at index 0 into account. the changes here do that, and also add extra handling for a single <eos> (which may be redundant but which is included for completeness).

based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is because over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. a change to token n-1 has flow-on effects to how the [1x4x64x64] latent tensor is modified by all the tokens after it, - and as the next step is processed, all the tokens before it as well. intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. so even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it nevertheless still had some non-negligible effect, in roughly the right direction, often enough that it seemed like it was working properly.

* refactor common CrossAttention stuff into a mixin so that the old ldm code can still work if necessary

* inpainting for the normal model. I think it works this time.

* diffusers: reset num_vectors_per_token

sync with 44a0055

* diffusers: txt2img2img (hires_fix)

with so much slicing and dicing of pipeline methods to stitch them together

* refactor(diffusers): reduce some code duplication amongst the different tasks

* fixup! refactor(diffusers): reduce some code duplication amongst the different tasks

* diffusers: enable DPMSolver++ scheduler

* diffusers: upgrade to diffusers 0.10, add Heun scheduler

* diffusers(ModelCache): stopgap to make from_cpu compatible with diffusers

* CI: default to diffusers-1.5 now that runwayml token requirement is gone

* diffusers: update to 0.10 (and transformers to 4.25)

* diffusers: use xformers when available

diffusers no longer auto-enables this as of 0.10.2.

* diffusers: make masked img2img behave better with multi-step schedulers

re-randomizing the noise each step was confusing them.

* diffusers: work more better with more models.

fixed relative path problem with local models.

fixed models on hub not always having a `fp16` branch.

* diffusers: stopgap fix for attention_maps_callback crash after recent merge

* fixup import merge conflicts

correction for 061c536

* test: add tests/inpainting inputs for masked img2img

* diffusers(AddsMaskedGuidance): partial fix for k-schedulers

Prevents them from crashing, but results are still hot garbage.

* fix --safety_checker arg parsing

and add note to diffusers loader about where safety checker gets called

* generate: fix import error

* CI: don't try to read the old init location

* diffusers: support loading an alternate VAE

* CI: remove sh-syntax if-statement so it doesn't crash powershell

* CI: fold strings in yaml because backslash is not line-continuation in powershell

* attention maps callback stuff for diffusers

* build: fix syntax error in environment-mac

* diffusers: add INITIAL_MODELS with diffusers-compatible repos

* re-enable the embedding manager; closes #1778

* Squashed commit of the following:

commit e4a956a
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:43:07 2022 +0100

    import new load handling from EmbeddingManager and cleanup

commit c4abe91
Merge: 032e856 1efc639
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:09:53 2022 +0100

    Merge branch 'feature_textual_inversion_mgr' into dev/diffusers_with_textual_inversion_manager

commit 032e856
Merge: 8b4f0fe bc515e2
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:08:01 2022 +0100

    Merge remote-tracking branch 'upstream/dev/diffusers' into dev/diffusers_with_textual_inversion_manager

commit 1efc639
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:04:28 2022 +0100

    cleanup and add performance notes

commit e400f80
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 14:45:07 2022 +0100

    fix bug and update unit tests

commit deb9ae0
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 14:28:29 2022 +0100

    textual inversion manager seems to work

commit 162e025
Merge: cbad458 12769b3
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:58:03 2022 +0100

    Merge branch 'main' into feature_textual_inversion_mgr

commit cbad458
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:54:10 2022 +0100

    use position embeddings

commit 070344c
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:53:47 2022 +0100

    Don't crash CLI on exceptions

commit b035ac8
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:11:55 2022 +0100

    add missing position_embeddings

commit 12769b3
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 13:33:25 2022 +0100

    debugging why it don't work

commit bafb721
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 13:21:33 2022 +0100

    debugging why it don't work

commit 664a6e9
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 12:48:38 2022 +0100

    use TextualInversionManager in place of embeddings (wip, doesn't work)

commit 8b4f0fe
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 12:48:38 2022 +0100

    use TextualInversionManager in place of embeddings (wip, doesn't work)

commit ffbe1ab
Merge: 6e4dad6 023df37
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:37:31 2022 +0100

    Merge branch 'feature_textual_inversion_mgr' into dev/diffusers

commit 023df37
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:36:54 2022 +0100

    cleanup

commit 05fac59
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:07:49 2022 +0100

    tweak error checking

commit 009f32e
Author: damian <null@damianstewart.com>
Date:   Thu Dec 15 21:29:47 2022 +0100

    unit tests passing for embeddings with vector length >1

commit beb1b08
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 13:39:09 2022 +0100

    more explicit equality tests when overwriting

commit 44d8a5a
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 13:30:13 2022 +0100

    wip textual inversion manager (unit tests passing for 1v embedding overwriting)

commit 417c2b5
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 12:30:55 2022 +0100

    wip textual inversion manager (unit tests passing for base stuff + padding)

commit 2e80872
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 10:57:57 2022 +0100

    wip new TextualInversionManager

* stop using WeightedFrozenCLIPEmbedder

* store diffusion models locally

- configure_invokeai.py reconfigured to store diffusion models rather than
  CompVis models
- hugging face caching model is used, but cache is set to ~/invokeai/models/repo_id
- models.yaml does **NOT** use path, just repo_id
- "repo_name" changed to "repo_id" to following hugging face conventions
- Models are loaded with full precision pending further work.

* allow non-local files during development

* path takes priority over repo_id

* MVP for model_cache and configure_invokeai

- Feature complete (almost)

- configure_invokeai.py downloads both .ckpt and diffuser models,
  along with their VAEs. Both types of download are controlled by
  a unified INITIAL_MODELS.yaml file.

- model_cache can load both type of model and switches back and forth
  in CPU. No memory leaks detected

TO DO:

  1. I have not yet turned on the LocalOnly flag for diffuser models, so
     the code will check the Hugging Face repo for updates before using the
     locally cached models. This will break firewalled systems. I am thinking
     of putting in a global check for internet connectivity at startup time
     and setting the LocalOnly flag based on this. It would be good to check
     updates if there is connectivity.

  2. I have not gone completely through INITIAL_MODELS.yaml to check which
     models are available as diffusers and which are not. So models like
     PaperCut and VoxelArt may not load properly. The runway and stability
     models are checked, as well as the Trinart models.

  3. Add stanzas for SD 2.0 and 2.1 in INITIAL_MODELS.yaml

REMAINING PROBLEMS NOT DIRECTLY RELATED TO MODEL_CACHE:

  1. When loading a .ckpt file there are lots of messages like this:

     Warning! ldm.modules.attention.CrossAttention is no longer being
     maintained. Please use InvokeAICrossAttention instead.

     I'm not sure how to address this.

  2. The ckpt models ***don't actually run*** due to the lack of special-case
     support for them in the generator objects. For example, here's the hard
     crash you get when you run txt2img against the legacy waifu-diffusion-1.3
     model:
```
     >> An error occurred:
     Traceback (most recent call last):
       File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 140, in main
           main_loop(gen, opt)
      File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 371, in main_loop
         gen.prompt2image(
      File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
	 results = generator.generate(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
         image = make_image(x_T)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
         pipeline_output = pipeline.image_from_embeddings(
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1265, in __getattr__
         raise AttributeError("'{}' object has no attribute '{}'".format(
     AttributeError: 'LatentDiffusion' object has no attribute 'image_from_embeddings'
```

  3. The inpainting diffusion model isn't working. Here's the output of "banana
     sushi" when inpainting-1.5 is loaded:

```
    Traceback (most recent call last):
      File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
        results = generator.generate(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
        image = make_image(x_T)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
        pipeline_output = pipeline.image_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 301, in image_from_embeddings
        result_latents, result_attention_map_saver = self.latents_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 330, in latents_from_embeddings
        result: PipelineIntermediateState = infer_latents_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 185, in __call__
        for result in self.generator_method(*args, **kwargs):
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 367, in generate_latents_from_embeddings
        step_output = self.step(batched_t, latents, guidance_scale,
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 409, in step
        step_output = self.scheduler.step(noise_pred, timestep, latents, **extra_step_kwargs)
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/diffusers/schedulers/scheduling_lms_discrete.py", line 223, in step
        pred_original_sample = sample - sigma * model_output
    RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1
```

* proper support for float32/float16

- configure script now correctly detects user's preference for
  fp16/32 and downloads the correct diffuser version. If fp16
  version not available, falls back to fp32 version.

- misc code cleanup and simplification in model_cache

* add on-the-fly conversion of .ckpt to diffusers models

1. On-the-fly conversion code can be found in the file ldm/invoke/ckpt_to_diffusers.py.

2. A new !optimize command has been added to the CLI. Should be ported to Web GUI.

User experience on the CLI is this:

```
invoke> !optimize /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
INFO: Converting legacy weights file /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt to optimized diffuser model.
      This operation will take 30-60s to complete.
Success. Optimized model is now located at /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
Writing new config file entry for sd-v1-4...

>> New configuration:
sd-v1-4:
  description: Optimized version of sd-v1-4
  format: diffusers
  path: /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4

OK to import [n]? y
>> Verifying that new model loads...
>> Current VRAM usage:  2.60G
>> Offloading stable-diffusion-2.1 to CPU
>> Loading diffusers model from /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
  | Using faster float16 precision
You have disabled the safety checker for <class 'ldm.invoke.generator.diffusers_pipeline.StableDiffusionGeneratorPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion \
license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances,\
 disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
  | training width x height = (512 x 512)
>> Model loaded in 3.48s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage:2.17G
>> Textual inversions available:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
Keep model loaded? [y]
```

* add parallel set of generator files for ckpt legacy generation

* generation using legacy ckpt models now working

* diffusers: fix missing attention_maps_callback

fix for 23eb80b

* associate legacy CrossAttention with .ckpt models

* enable autoconvert

New --autoconvert CLI option will scan a designated directory for
new .ckpt files, convert them into diffuser models, and import
them into models.yaml.

Works like this:

   invoke.py --autoconvert /path/to/weights/directory

In ModelCache added two new methods:

  autoconvert_weights(config_path, weights_directory_path, models_directory_path)
  convert_and_import(ckpt_path, diffuser_path)

* diffusers: update to diffusers 0.11 (from 0.10.2)

* fix vae loading & width/height calculation

* refactor: encapsulate these conditioning data into one container

* diffusers: fix some noise-scaling issues by pushing the noise-mixing down to the common function

* add support for safetensors and accelerate

* set local_files_only when internet unreachable

* diffusers: fix error-handling path when model repo has no fp16 branch

* fix generatorinpaint error

Fixes :
  "ModuleNotFoundError: No module named 'ldm.invoke.generatorinpaint'
   #1583 (comment)

* quench diffuser safety-checker warning

* diffusers: support stochastic DDIM eta parameter

* fix conda env creation on macos

* fix cross-attention with diffusers 0.11

* diffusers: the VAE needs to be tiling as well as the U-Net

* diffusers: comment on subfolders

* diffusers: embiggen!

* diffusers: make model_cache.list_models serializable

* diffusers(inpaint): restore scaling functionality

* fix requirements clash between numba and numpy 1.24

* diffusers: allow inpainting model to do non-inpainting tasks

* start expanding model_cache functionality

* add import_ckpt_model() and import_diffuser_model() methods to model_manager

- in addition, model_cache.py is now renamed to model_manager.py

* allow "recommended" flag to be optional in INITIAL_MODELS.yaml

* configure_invokeai now downloads VAE diffusers in advance

* rename ModelCache to ModelManager

* remove support for `repo_name` in models.yaml

* check for and refuse to load embeddings trained on incompatible models

* models.yaml.example: s/repo_name/repo_id

and remove extra INITIAL_MODELS now that the main one has diffusers models in it.

* add MVP textual inversion script

* refactor(InvokeAIDiffuserComponent): factor out _combine()

* InvokeAIDiffuserComponent: implement threshold

* InvokeAIDiffuserComponent: diagnostic logs for threshold

...this does not look right

* add a curses-based frontend to textual inversion

- not quite working yet
- requires npyscreen installed
- on windows will also have the windows-curses requirement, but not added
  to requirements yet

* add curses-based interface for textual inversion

* fix crash in convert_and_import()

- This corrects a "local variable referenced before assignment" error
  in model_manager.convert_and_import()

* potential workaround for no 'state_dict' key error

- As reported in huggingface/diffusers#1876

* create TI output dir if needed

* Update environment-lin-cuda.yml (#2159)

Fixing line 42 to be the proper order to define the transformers requirement: ~= instead of =~

* diffusers: update sampler-to-scheduler mapping

based on huggingface/diffusers#277 (comment)

* improve user exp for ckt to diffusers conversion

- !optimize_models command now operates on an existing ckpt file entry in models.yaml
- replaces existing entry, rather than adding a new one
- offers to delete the ckpt file after conversion

* web: adapt progress callback to deal with old generator or new diffusers pipeline

* clean-up model_manager code

- add_model() verified to work for .ckpt local paths,
  .ckpt remote URLs, diffusers local paths, and
  diffusers repo_ids

- convert_and_import() verified to work for local and
  remote .ckpt files

* handle edge cases for import_model() and convert_model()

* add support for safetensor .ckpt files

* fix name error

* code cleanup with pyflake

* improve model setting behavior

- If the user enters an invalid model name at startup time, will not
  try to load it, warn, and use default model
- CLI UI enhancement: include currently active model in the command
  line prompt.

* update test-invoke-pip.yml
- fix model cache path to point to runwayml/stable-diffusion-v1-5
- remove `skip-sd-weights` from configure_invokeai.py args

* exclude dev/diffusers from "fail for draft PRs"

* disable "fail on PR jobs"

* re-add `--skip-sd-weights` since no space

* update workflow environments
- include `INVOKE_MODEL_RECONFIGURE: '--yes'`

* clean up model load failure handling

- Allow CLI to run even when no model is defined or loadable.
- Inhibit stack trace when model load fails - only show last error
- Give user *option* to run configure_invokeai.py when no models
  successfully load.
- Restart invokeai after reconfiguration.

* further edge-case handling

1) only one model in models.yaml file, and that model is broken
2) no models in models.yaml
3) models.yaml doesn't exist at all

* fix incorrect model status listing

- "cached" was not being returned from list_models()
- normalize handling of exceptions during model loading:
   - Passing an invalid model name to generate.set_model() will return
     a KeyError
   - All other exceptions are returned as the appropriate Exception

* CI: do download weights (if not already cached)

* diffusers: fix scheduler loading in offline mode

* CI: fix model name (no longer has `diffusers-` prefix)

* Update txt2img2img.py (#2256)

* fixes to share models with HuggingFace cache system

- If HF_HOME environment variable is defined, then all huggingface models
  are stored in that directory following the standard conventions.
- For seamless interoperability, set HF_HOME to ~/.cache/huggingface
- If HF_HOME not defined, then models are stored in ~/invokeai/models.
  This is equivalent to setting HF_HOME to ~/invokeai/models

A future commit will add a migration mechanism so that this change doesn't
break previous installs.

* feat - make model storage compatible with hugging face caching system

This commit alters the InvokeAI model directory to be compatible with
hugging face, making it easier to share diffusers (and other models)
across different programs.

- If the HF_HOME environment variable is not set, then models are
  cached in ~/invokeai/models in a format that is identical to the
  HuggingFace cache.

- If HF_HOME is set, then models are cached wherever HF_HOME points.

- To enable sharing with other HuggingFace library clients, set
  HF_HOME to ~/.cache/huggingface to set the default cache location
  or to ~/invokeai/models to have huggingface cache inside InvokeAI.

* fixes to share models with HuggingFace cache system

    - If HF_HOME environment variable is defined, then all huggingface models
      are stored in that directory following the standard conventions.
    - For seamless interoperability, set HF_HOME to ~/.cache/huggingface
    - If HF_HOME not defined, then models are stored in ~/invokeai/models.
      This is equivalent to setting HF_HOME to ~/invokeai/models

    A future commit will add a migration mechanism so that this change doesn't
    break previous installs.

* fix error "no attribute CkptInpaint"

* model_manager.list_models() returns entire model config stanza+status

* Initial Draft - Model Manager Diffusers

* added hash function to diffusers

* implement sha256 hashes on diffusers models

* Add Model Manager Support for Diffusers

* fix various problems with model manager

- in cli import functions, fix not enough values to unpack from
  _get_name_and_desc()
- fix crash when using old-style vae: value with new-style diffuser

* rebuild frontend

* fix dictconfig-not-serializable issue

* fix NoneType' object is not subscriptable crash in model_manager

* fix "str has no attribute get" error in model_manager list_models()

* Add path and repo_id support for Diffusers Model Manager

Also fixes bugs

* Fix tooltip IT localization not working

* Add Version Number To WebUI

* Optimize Model Search

* Fix incorrect font on the Model Manager UI

* Fix image degradation on merge fixes - [Experimental]

This change should effectively fix a couple of things.

- Fix image degradation on subsequent merges of the canvas layers.
- Fix the slight transparent border that is left behind when filling the bounding box with a color.
- Fix the left over line of color when filling a bounding box with color.

So far there are no side effects for this. If any, please report.

* Add local model filtering for Diffusers / Checkpoints

* Go to home on modal close for the Add Modal UI

* Styling Fixes

* Model Manager Diffusers Localization Update

* Add Safe Tensor scanning to Model Manager

* Fix model edit form dispatching string values instead of numbers.

* Resolve VAE handling / edge cases for supplied repos

* defer injecting tokens for textual inversions until they're used for the first time

* squash a console warning

* implement model migration check

* add_model() overwrites previous config rather than merges

* fix model config file attribute merging

* fix precision handling in textual inversion script

* allow ckpt conversion script to work with safetensors .ckpts

Applied patch here:
huggingface/diffusers@beb932c

* fix name "args" is not defined crash in textual_inversion_training

* fix a second NameError: name 'args' is not defined crash

* fix loading of the safety checker from the global cache dir

* add installation step to textual inversion frontend

- After a successful training run, the script will copy learned_embeds.bin
  to a subfolder of the embeddings directory.
- User given the option to delete the logs and intermediate checkpoints
  (which together use 7-8G of space)
- If textual inversion training fails, reports the error gracefully.

* don't crash out on incompatible embeddings

- put try: blocks around places where the system tries to load an embedding
  which is incompatible with the currently loaded model

* add support for checkpoint resuming

* textual inversion preferences are saved and restored between sessions

- Preferences are stored in a file named text-inversion-training/preferences.conf
- Currently the resume-from-checkpoint option is not working correctly. Possible
  bug in textual_inversion_training.py?

* copy learned_embeddings.bin into right location

* add front end for diffusers model merging

- Front end doesn't do anything yet!!!!
- Made change to model name parsing in CLI to support ability to have merged models
  with the "+" character in their names.

* improve inpainting experience

- recommend ckpt version of inpainting-1.5 to user
- fix get_noise() bug in ckpt version of omnibus.py

* update environment*yml

* tweak instructions to install HuggingFace token

* bump version number

* enhance update scripts

- update scripts will now fetch new INITIAL_MODELS.yaml so that
  configure_invokeai.py will know about the diffusers versions.

* enhance invoke.sh/invoke.bat launchers

- added configure_invokeai.py to menu
- menu defaults to browser-based invoke

* remove conda workflow (#2321)

* fix `token_ids has shape torch.Size([79]) - expected [77]`

* update CHANGELOG.md with 2.3.* info

- Add information on how formats have changed and the upgrade process.
- Add short bug list.

Co-authored-by: Damian Stewart <d@damianstewart.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Co-authored-by: Wybartel-luxmc <37852506+Wybartel-luxmc@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: mickr777 <115216705+mickr777@users.noreply.github.com>
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>