Is there an existing issue for this?
I have searched the existing issues and checked the recent builds/commits
What happened?
When attempting to create a new .pt embedding via Train > Create Embedding, this error occurs:
Traceback (most recent call last):
  File "C:\Stuff\AI\Stable Diffusion\extensions\sd_smartprocess\reallysafe.py", line 146, in load_with_extra
    check_pt(filename, extra_handler)
  File "C:\Stuff\AI\Stable Diffusion\extensions\sd_smartprocess\reallysafe.py", line 93, in check_pt
    check_zip_filenames(filename, z.namelist())
  File "C:\Stuff\AI\Stable Diffusion\extensions\sd_smartprocess\reallysafe.py", line 85, in check_zip_filenames
    raise Exception(f"bad file inside {filename}: {name}")
Exception: bad file inside C:\Stuff\AI\Stable Diffusion\embeddings\myNewEmbedding.pt: myNewEmbedding/byteorder
The file may be malicious, so the program is not going to read it.
You can skip this check with --disable-safe-unpickle commandline argument.
Error loading embedding myNewEmbedding.pt:
Traceback (most recent call last):
  File "C:\Stuff\AI\Stable Diffusion\modules\textual_inversion\textual_inversion.py", line 210, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\Stuff\AI\Stable Diffusion\modules\textual_inversion\textual_inversion.py", line 168, in load_from_file
    if 'string_to_param' in data:
TypeError: argument of type 'NoneType' is not iterable
The newly created embedding does not show up in the Train > Train > Embedding dropdown, and thus cannot be used for training. It is created on the filesystem though.
This started happening after I updated from Torch 2.0 to Torch 2.1.0-dev.
With Torch 2.0 and --opt-sdp-attention:
I was getting a "RuntimeError: Expected is_sm80 to be true, but got false." error during training.
Disabling --opt-sdp-attention resolved that is_sm80 error, but my max batch size was reduced from 16 to 3.
The batch size reduction was not ideal, so then I manually updated Torch to 2.1.0-dev.
With Torch 2.1.0.dev20230506+cu117 and --opt-sdp-attention:
This fixed the is_sm80 error when using --opt-sdp-attention, but now I get the embedding-related error above.
Using --disable-safe-unpickle, as the error message suggests, fixes the issue and allows me to create and train embeddings again. But obviously that is not an ideal long-term solution.
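For context, the safety check that fires here is a filename allow-list applied to the entries inside the .pt zip archive. The sketch below approximates that check (the regex and list contents are simplified, not the exact webui code) and shows why the byteorder record that Torch 2.1 now writes gets rejected:

```python
import re

# Simplified sketch of the allow-list check in modules/safe.py
# (names and regex here are approximations, not the exact upstream code).
allowed_zip_names = ["archive/data.pkl", "archive/version"]
allowed_zip_names_re = re.compile(r"^([^/]+)/((data/\d+)|version|(data\.pkl))$")

def check_zip_filenames(filename, names):
    for name in names:
        if name in allowed_zip_names:
            continue
        if allowed_zip_names_re.match(name):
            continue
        raise Exception(f"bad file inside {filename}: {name}")

# Torch <= 2.0 archives contain only entries like these, so the check passes:
check_zip_filenames("old.pt", ["myEmb/data.pkl", "myEmb/version", "myEmb/data/0"])

# Torch 2.1 adds a "byteorder" record, which the pattern does not allow:
# check_zip_filenames("new.pt", ["myEmb/data.pkl", "myEmb/byteorder"])  # raises
```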
Steps to reproduce the problem
Train > Create Embedding
Enter name for embedding, click Create Embedding
The error message shows up in the console, and the newly created embedding does not appear in the Train > Train > Embedding dropdown.
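To confirm what tripped the check, the archive's entries can be listed with Python's zipfile module. The helper name and the example path below are illustrative:

```python
import zipfile

def list_pt_entries(path):
    """Return the entry names inside a torch.save zip archive (.pt file)."""
    with zipfile.ZipFile(path) as z:
        return z.namelist()

# Example (the path is an assumption; point it at the embedding the UI created):
# print(list_pt_entries(r"C:\Stuff\AI\Stable Diffusion\embeddings\testing.pt"))
# On Torch >= 2.1 the listing includes an extra "<name>/byteorder" entry.
```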
What should have happened?
Creating a new embedding shouldn't trigger the unsafe-pickle warning and error.
Console logs
venv "C:\Stuff\AI\Stable Diffusion\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 5ab7f213bec2f816f9c5644becb32eb72c8ffb89
Installing requirements
Fetching updates for midas...
Checking out commit for midas with hash: 1645b7e...
Installing sd-dynamic-prompts requirements.txt
loading Smart Crop reqs from C:\Stuff\AI\Stable Diffusion\extensions\sd_smartprocess\requirements.txt
Checking Smart Crop requirements.
Launching Web UI with arguments: --opt-sdp-attention
No module 'xformers'. Proceeding without it.
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: C:\Stuff\AI\Stable Diffusion\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default
Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
ControlNet v1.1.116
ControlNet v1.1.116
Loading weights [20bae33336] from C:\Stuff\AI\Stable Diffusion\models\Stable-diffusion\02~sd1.5\01~photoreal\realisticVisionV13_v13.safetensors
Creating model from config: C:\Stuff\AI\Stable Diffusion\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying scaled dot product cross attention optimization.
Textual inversion embeddings loaded(97): redacted :)
Model loaded in 4.4s (load weights from disk: 0.4s, create model: 0.5s, apply weights to model: 0.6s, apply half(): 0.8s, move model to device: 0.7s, load textual inversion embeddings: 1.4s).
1920 1080
1030
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 20.7s (import torch: 5.2s, import gradio: 1.3s, import ldm: 0.6s, other imports: 1.2s, load scripts: 3.0s, load SD checkpoint: 4.5s, create ui: 4.3s, gradio launch: 0.4s).
Error verifying pickled file from C:\Stuff\AI\Stable Diffusion\embeddings\myNewEmbedding.pt:
Traceback (most recent call last):
  File "C:\Stuff\AI\Stable Diffusion\extensions\sd_smartprocess\reallysafe.py", line 146, in load_with_extra
    check_pt(filename, extra_handler)
  File "C:\Stuff\AI\Stable Diffusion\extensions\sd_smartprocess\reallysafe.py", line 93, in check_pt
    check_zip_filenames(filename, z.namelist())
  File "C:\Stuff\AI\Stable Diffusion\extensions\sd_smartprocess\reallysafe.py", line 85, in check_zip_filenames
    raise Exception(f"bad file inside {filename}: {name}")
Exception: bad file inside C:\Stuff\AI\Stable Diffusion\embeddings\myNewEmbedding.pt: myNewEmbedding/byteorder
The file may be malicious, so the program is not going to read it.
You can skip this check with --disable-safe-unpickle commandline argument.
Error loading embedding myNewEmbedding.pt:
Traceback (most recent call last):
  File "C:\Stuff\AI\Stable Diffusion\modules\textual_inversion\textual_inversion.py", line 210, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\Stuff\AI\Stable Diffusion\modules\textual_inversion\textual_inversion.py", line 168, in load_from_file
    if 'string_to_param' in data:
TypeError: argument of type 'NoneType' is not iterable
Additional information
No response
I disabled the sd_smartprocess extension and got nearly the same error from sd_dreambooth_extension (even though it was already disabled), so I deleted that extension from my filesystem entirely.
Now I get this similar error when attempting to create embedding with name "testing":
Error verifying pickled file from C:\Stuff\AI\Stable Diffusion\embeddings\testing.pt:
Traceback (most recent call last):
  File "C:\Stuff\AI\Stable Diffusion\modules\safe.py", line 136, in load_with_extra
    check_pt(filename, extra_handler)
  File "C:\Stuff\AI\Stable Diffusion\modules\safe.py", line 83, in check_pt
    check_zip_filenames(filename, z.namelist())
  File "C:\Stuff\AI\Stable Diffusion\modules\safe.py", line 75, in check_zip_filenames
    raise Exception(f"bad file inside {filename}: {name}")
Exception: bad file inside C:\Stuff\AI\Stable Diffusion\embeddings\testing.pt: testing/byteorder
The file may be malicious, so the program is not going to read it.
You can skip this check with --disable-safe-unpickle commandline argument.
Error loading embedding testing.pt:
Traceback (most recent call last):
  File "C:\Stuff\AI\Stable Diffusion\modules\textual_inversion\textual_inversion.py", line 210, in load_from_dir
    self.load_from_file(fullfn, fn)
  File "C:\Stuff\AI\Stable Diffusion\modules\textual_inversion\textual_inversion.py", line 168, in load_from_file
    if 'string_to_param' in data:
TypeError: argument of type 'NoneType' is not iterable
This is with set COMMANDLINE_ARGS= --opt-sdp-attention
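Rather than disabling the check globally with --disable-safe-unpickle, a narrower workaround is to extend the allow-list so it also accepts the byteorder record that Torch 2.1 writes. This is only a sketch of that kind of patch (the actual modules/safe.py regex may differ):

```python
import re

# Sketch of an allow-list change that would resolve this (the exact upstream
# patch may differ): permit the "byteorder" record Torch 2.1 writes.
allowed_zip_names_re = re.compile(
    r"^([^/]+)/((data/\d+)|version|byteorder|(data\.pkl))$"
)

def is_allowed(name):
    return bool(allowed_zip_names_re.match(name))

assert is_allowed("testing/byteorder")    # now accepted
assert is_allowed("testing/data.pkl")
assert not is_allowed("testing/evil.py")  # unrelated entries still rejected
```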
Commit where the problem happens
5ab7f21
What platforms do you use to access the UI ?
Windows
What browsers do you use to access the UI ?
Microsoft Edge
Command Line Arguments
set COMMANDLINE_ARGS= --opt-sdp-attention
List of extensions
Config-Presets
Keep-this-prompt-for-later
Stable-Diffusion-Webui-Civitai-Helper
a1111-sd-webui-locon
depthmap2mask
sd-dynamic-prompts
sd-webui-additional-networks
sd-webui-controlnet
sd_smartprocess
stable-diffusion-webui-composable-lora
stable-diffusion-webui-images-browser
stable-diffusion-webui-two-shot
ultimate-upscale-for-automatic1111