OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
After activating the "ldm" conda environment and changing to the stable-diffusion directory, run:
python scripts/preload_models.py
This only needs to be done once. The reason for this is that the original source code downloads these models on a just-in-time basis. However, my university's GPU systems are firewalled, so that step fails there. To make things easier for everyone, I decided to add a mandatory preload step that caches the required files locally. (It also has the side effect of removing about two pages' worth of warnings when the CLIP tokenizer is loaded!)
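The preload pattern described above (fetch once into a local cache, then serve every later load from disk so no network access is needed) can be sketched generically. This is an illustrative stand-in with made-up names, not the actual preload_models.py:

```python
import hashlib
import os
import tempfile

# Fresh cache directory for the demo (a real preloader would use a
# fixed, persistent location such as ~/.cache).
CACHE_DIR = tempfile.mkdtemp(prefix="model_cache_demo")

def cached_fetch(name: str, fetch):
    """Download `name` via `fetch()` on first use; serve from disk after."""
    path = os.path.join(CACHE_DIR, hashlib.sha256(name.encode()).hexdigest())
    if not os.path.exists(path):
        # Just-in-time download: happens only on the first call.
        with open(path, "wb") as f:
            f.write(fetch())
    with open(path, "rb") as f:
        return f.read()

# Simulate the remote download and count how often it is hit.
calls = []
def fake_download():
    calls.append(1)
    return b"tokenizer bytes"

cached_fetch("openai/clip-vit-large-patch14", fake_download)
cached_fetch("openai/clip-vit-large-patch14", fake_download)
print(len(calls))  # the remote fetch happened only once
```

Running the preload step on a machine with internet access populates the cache; later loads on the firewalled GPU nodes then hit only the local files.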
Let me know if this solution does or doesn't work for you. I'll keep the issue open until then.