
Can't load tokenizer for 'openai/clip-vit-large-patch14' #90

Open
CaveHEX opened this issue Aug 25, 2022 · 12 comments


CaveHEX commented Aug 25, 2022

I haven't seen this issue mentioned here, and I can't find any help on the matter, even after trying to debug it on my own. Any suggestion would be welcome; I'm hoping the issue is on my side.

(ldm) D:\DATA\AI\STABLE_DIFFUSION\stable-diffusion>python scripts\orig_scripts\txt2img.py --prompt "hello world" --n_samples 1 --W 200 --H 200
Global seed set to 42
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Global Step: 470000
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Traceback (most recent call last):
  File "scripts\orig_scripts\txt2img.py", line 318, in <module>
    main()
  File "scripts\orig_scripts\txt2img.py", line 197, in main
    model = load_model_from_config(config, f"{opt.ckpt}")
  File "scripts\orig_scripts\txt2img.py", line 35, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "d:\data\ai\stable_diffusion\stable-diffusion\ldm\util.py", line 83, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "d:\data\ai\stable_diffusion\stable-diffusion\ldm\models\diffusion\ddpm.py", line 462, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "d:\data\ai\stable_diffusion\stable-diffusion\ldm\models\diffusion\ddpm.py", line 520, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "d:\data\ai\stable_diffusion\stable-diffusion\ldm\util.py", line 83, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "d:\data\ai\stable_diffusion\stable-diffusion\ldm\modules\encoders\modules.py", line 149, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version,local_files_only=True)
  File "C:\Users\Matthew\anaconda3\envs\ldm\lib\site-packages\transformers\tokenization_utils_base.py", line 1768, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.

vallabhnatu commented Aug 26, 2022

File "d:\data\ai\stable_diffusion\stable-diffusion\ldm\modules\encoders\modules.py", line 149, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version,local_files_only=True)

Remove local_files_only=True so that the tokenizer files can be downloaded from the Hugging Face Hub when they are not already present locally.
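A gentler variant of this fix keeps the offline path working when the files are cached and only hits the network as a fallback. This is a minimal sketch; load_with_fallback and load are hypothetical names, with load standing in for CLIPTokenizer.from_pretrained:

```python
# Sketch of the fix (hypothetical wrapper): prefer the local cache, but fall
# back to downloading when the tokenizer files are not present locally.
def load_with_fallback(load, name):
    try:
        # from_pretrained raises OSError when local_files_only=True
        # and nothing usable is in the local cache
        return load(name, local_files_only=True)
    except OSError:
        # let transformers fetch the files from the Hugging Face Hub
        return load(name, local_files_only=False)
```

In the reported setup this would replace the hard-coded local_files_only=True call in ldm/modules/encoders/modules.py, so the first run downloads the tokenizer and later runs stay offline.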

@kevinhower

had this error this morning and this was the perfect fix to my problem. I don't know what caused it in the first place but I am back up and running. :)

enzymezoo-code added a commit to enzymezoo-code/stable-diffusion that referenced this issue Oct 5, 2022
@ZIAS0112

I am having the same issue but am unable to solve it. Can you help me out with this?

File "C:\Users\ZIAS\stable-diffusion-webui\launch.py", line 295, in <module>
start()
File "C:\Users\ZIAS\stable-diffusion-webui\launch.py", line 290, in start
webui.webui()
File "C:\Users\ZIAS\stable-diffusion-webui\webui.py", line 132, in webui
initialize()
File "C:\Users\ZIAS\stable-diffusion-webui\webui.py", line 62, in initialize
modules.sd_models.load_model()
File "C:\Users\ZIAS\stable-diffusion-webui\modules\sd_models.py", line 308, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "C:\Users\ZIAS\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\Users\ZIAS\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "C:\Users\ZIAS\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "C:\Users\ZIAS\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\Users\ZIAS\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 99, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "C:\Users\ZIAS\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1768, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.


Grin088 commented Dec 12, 2022

Try restarting your computer and then launching Stable Diffusion again. After a reboot this problem was fixed for me.

@ZIAS0112

Try restarting your computer and then launching Stable Diffusion again. After a reboot this problem was fixed for me.

I have been trying for the past 3 days and still haven't found a solution, not even on the Internet. I have asked so many creators, but no response yet.

@fredi-python

Try restarting your computer and then launching Stable Diffusion again. After a reboot this problem was fixed for me.

thx this worked for me!

@lucia-super

restarting is not working for me, sadly


brickjjj commented Jul 3, 2023

Delete the "venv" folder (that is, reinstall the venv), then restart ./webui.sh. It works for me.

@EstherLee1995

I have downloaded the 'openai/clip-vit-large-patch14' files from Hugging Face locally. I want to know where to put them. Does anyone know?

@383819640

I have downloaded the 'openai/clip-vit-large-patch14' files from Hugging Face locally. I want to know where to put them. Does anyone know?

Same question here: where do we put them?

@hopefullykkkk

I have downloaded the 'openai/clip-vit-large-patch14' files from Hugging Face locally. I want to know where to put them. Does anyone know?

Same question: where do we put them?

I suggest reading this blog post: https://blog.csdn.net/SuperB666/article/details/132826492
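For those asking where to put a manually downloaded copy: one option is to skip the model id entirely and point from_pretrained at the directory containing the downloaded files. The sketch below is a hypothetical helper (missing_tokenizer_files is not part of transformers), and the file list is an assumption based on the files published in the openai/clip-vit-large-patch14 repo on the Hub:

```python
import os

# Files CLIPTokenizer typically needs; this list is an assumption based on
# the openai/clip-vit-large-patch14 repository on the Hugging Face Hub.
REQUIRED = ["vocab.json", "merges.txt", "tokenizer_config.json", "special_tokens_map.json"]

def missing_tokenizer_files(path):
    """Return the required files that are absent from the given directory."""
    return [f for f in REQUIRED if not os.path.isfile(os.path.join(path, f))]

# If nothing is missing, the directory path itself can be passed in place of
# the model id, e.g. (hypothetical local path):
#   tokenizer = CLIPTokenizer.from_pretrained(r"D:\models\clip-vit-large-patch14")
```

This avoids guessing the internal layout of the transformers cache, which has changed between library versions.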

@luojiong
