How to fix a safetensors/ckpt model for a "state_dict" error #794
Replies: 9 comments 3 replies
-
Results in:
Error merging checkpoints: unhashable type: 'list'
Error loading/saving model file:
-
This fixes it!
-
Thanks!! I am using https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb and this method works perfectly to generate a model that can be dreambooth'ed.
-
wow thanks!
-
Since the SD 1.5 checkpoint is a large 7 GB file, and my B model is also a large file, I'm getting a CUDA error using Colab.
-
I had to do something similar for one of mine; however, I actually blended together three models I trained to get rid of the problem layers. Merging two still produced the error, but a three-way merge using Add Difference, with anything as model C, worked out. So Add Difference could be a secondary method.
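The Add Difference mode mentioned above combines three models per-weight as A + (B − C) × M. A minimal sketch of that arithmetic, with plain floats standing in for the real tensor state dicts:

```python
# Minimal sketch of an "Add Difference" style merge: result = A + (B - C) * M.
# Plain floats stand in for the real torch tensors; keys are weight names.
def add_difference(model_a, model_b, model_c, multiplier=1.0):
    """Merge three state dicts: A plus the scaled (B - C) difference."""
    return {
        key: model_a[key] + (model_b[key] - model_c[key]) * multiplier
        for key in model_a
    }
```

Note that model C only enters through its difference from B: with a multiplier of 1 the result is A plus whatever B adds beyond C.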
-
I had the same error when I was using diffusers_stablediff_conversion. I fixed the conversion there: ratwithacompiler/diffusers_stablediff_conversion#6. I don't know if it's related to your problem.
-
Should I also bake in the standard VAE if I'm using the same good source as you in your example?
-
I am having a problem with a model I made. It's a ckpt file, and it's saying it's corrupt, but I did not do anything to it; it worked one day and stopped the next.
changing setting sd_model_checkpoint to People\caseym_model.ckpt: AttributeError
Does anyone have a clue how to read what may be wrong and whether I can fix it?
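For a ckpt that suddenly reports as corrupt, one way to narrow things down is to try loading it and look at what the top level actually contains. This is a hypothetical diagnostic sketch, not part of the webui; the helper names are mine:

```python
# Hypothetical diagnostic sketch: a .ckpt is a pickled torch archive, so a
# load attempt plus a glance at the top-level keys often distinguishes a
# truncated/corrupt file from one that merely lacks a "state_dict" wrapper.
def classify_checkpoint_dict(ckpt):
    """Classify an already-loaded checkpoint object."""
    if not isinstance(ckpt, dict):
        return "unexpected top-level object: %s" % type(ckpt).__name__
    if "state_dict" in ckpt:
        return "looks ok: weights are under 'state_dict'"
    return "no 'state_dict' key; top-level keys start with: %s" % list(ckpt)[:5]

def inspect_checkpoint(path):
    """Try to load the file; report unreadable files instead of crashing."""
    import torch  # deferred import; the helper above needs no torch
    try:
        ckpt = torch.load(path, map_location="cpu")
    except Exception as exc:  # truncated download, bad pickle, wrong format
        return "unreadable: %s" % exc
    return classify_checkpoint_dict(ckpt)
```

If the load itself fails, the file is likely truncated or was modified on disk; if it loads but the classification complains, the structure changed rather than the bytes rotting.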
-
When you try to train on a safetensors model, or a ckpt that has been merged with a safetensors file, or one that was converted from one, you may hit the "state_dict" error in this thread's title.
Wanted to let you all know about my workaround, since I've seen it a number of times now. Go to the Checkpoint Merger in the UI, put a trainable ckpt (such as the standard SD 1.5 checkpoint) in the Primary slot, put the problem safetensors file in the Secondary slot, and do a full copy of the secondary's values over the primary (a WS value of 1).
The resulting ckpt file should now let you train on the model. I ran a few batches to confirm that the output was visually similar; I'd say it's ~99.9% the same, though you may notice slight differences.
What's interesting is that putting the safetensors file in the Primary slot and creating a ckpt with a WS value of 0 still gives errors. Even doing a "full overwrite" with a WS value of 1 onto the trainable SD 1.5 ckpt still produces errors.
There appears to be something about putting a trainable ckpt model in as the Primary value and doing a full copy of the safetensors' values onto it that keeps the conversion "clean".
Let me know if you have a theory on why this "fixes" the problem; I'm curious. Also let me know whether it solved the problem for you.
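One theory consistent with these symptoms: trainers typically index the checkpoint as ckpt["state_dict"], while a file converted from safetensors can end up as a flat tensor dict with no such wrapper. A speculative sketch of re-saving such a file with the wrapper restored (the helper names are mine, not a webui API):

```python
# Speculative sketch, assuming the "state_dict" error comes from a checkpoint
# whose top level is a flat tensor dict instead of {"state_dict": {...}}.
def wrap_state_dict(ckpt):
    """Ensure the weights live under a 'state_dict' key (no-op if they do)."""
    if isinstance(ckpt, dict) and "state_dict" in ckpt:
        return ckpt
    return {"state_dict": ckpt}

def fix_checkpoint(path_in, path_out):
    """Re-save a checkpoint so trainers that index ckpt['state_dict'] work."""
    import torch  # deferred so wrap_state_dict stays testable without torch
    ckpt = torch.load(path_in, map_location="cpu")
    torch.save(wrap_state_dict(ckpt), path_out)
```

This would also explain why the merge direction matters: starting from a trainable ckpt in the Primary slot preserves its top-level layout, while starting from the safetensors file does not.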