Open
Description
Is there an existing issue for this problem?
- I have searched the existing issues
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
GTX 1660
GPU VRAM
6GB
Version number
4.2.6
Browser
Firefox
Python dependencies
No response
What happened
I am using SD 1.5 models in safetensor format without converting them to diffusers.
Image generation fails with a server error when starting InvokeAI offline.
Image generation starts only when connected to the internet for the first generation; however, subsequent generations work offline until switching models.
The error reappears when switching models while offline.
What you expected to happen
InvokeAI should work offline.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response
Activity
psychedelicious commented on Jul 16, 2024
With single-file (checkpoint) loading, diffusers still needs access to the models' configuration files. Previously, when we converted models, we used a local copy of these config files. With single-file loading, we are no longer referencing the local config files, so diffusers downloads them instead.
The latest diffusers release revises the single-file loading logic. I think we'll need to upgrade to the latest version, then review the new API to see what our options are.
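For illustration only (this is not InvokeAI's actual loader code): the sketch below shows roughly what single-file loading looks like in a recent diffusers release when you try to keep it offline. The file paths are placeholders, and parameter names have shifted between diffusers versions, so treat it as a sketch rather than a guaranteed recipe.

```python
# Sketch of diffusers single-file loading with local configs (placeholder paths).
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "models/my-sd15-model.safetensors",           # local checkpoint (placeholder)
    original_config="configs/v1-inference.yaml",  # local SD 1.5 YAML; older releases called this `original_config_file`
    local_files_only=True,                        # fail instead of reaching out to huggingface.co
)
```

Whether passing local configs like this is enough to avoid the Hub entirely is exactly what this issue is tracking.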
someaccount1234 commented on Jul 18, 2024
This makes it completely ONLINE ONLY!
The configs folder is right there local, ready to be used lol!
Wasted a good hour+ trying to fix it.
Please fix this, unusable until then!
psychedelicious commented on Jul 19, 2024
@lstein Forgot to tag you - I think we should be able to fix this up pretty easily.
someaccount1234 commented on Jul 28, 2024
Still the same error!
Cannot use offline at all!
InvokeAI demands internet connection to download config files that are already local! Every time you change the model!
Setting 'legacy_config_dir' in 'invokeai.yaml' doesn't help; it still demands internet.
This bug should be retitled to 'redundant yaml downloads, internet required'.
jameswan commented on Jul 30, 2024
psychedelicious commented on Jul 30, 2024
@jameswan That's an entirely different problem. Please create your own issue. Note that Invoke v2 is ancient.
psychedelicious commented on Jul 30, 2024
@someaccount1234 Yes, this is still a problem. We will close this issue when it is resolved.
TobiasReich commented on Jul 31, 2024
If a fuller stack trace is needed, I provided one in my (duplicate) issue:
#6702
psychedelicious commented on Jul 31, 2024
Thanks @TobiasReich, I saw that.
This isn't a mysterious issue; the cause is very clear.
I experimented the other day with providing the config files we already have on disc, but diffusers couldn't load the tokenizer or text encoder. It's not obvious to me why.
It doesn't matter anyways, though, because diffusers just refactored the API we are using to load models. Whatever issue I'm running into may well no longer exist. So we need to update the diffusers dependency (one of our core deps), adapt to some other changes they have made, and then figure out how to provide the config files required.
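In case it helps when reviewing the new API, here is a minimal sketch (assuming the post-refactor diffusers single-file interface; paths are placeholders) of pointing the loader at a local, diffusers-format config directory, i.e. a local clone of the SD 1.5 repo layout that already contains the tokenizer and text encoder configs:

```python
# Sketch only: single-file loading against a local diffusers-format config
# directory, with Hub access disabled. Paths are placeholders.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "models/my-sd15-model.safetensors",      # local checkpoint (placeholder)
    config="configs/stable-diffusion-v1-5",  # local diffusers-format config dir (placeholder)
    local_files_only=True,                   # never reach out to huggingface.co
)
```

Whether this covers every component offline is the open question above.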
MOzhi327 commented on Aug 7, 2024
Now, in the infinite canvas, every time a new image is uploaded, an internet connection is required. Maybe it's time to go back to the old version.
psychedelicious commented on Aug 7, 2024
@MOzhi327 No, that's not how it works. There's no internet connection needed when using canvas. What makes you think an internet connection is required?
MOzhi327 commented on Aug 7, 2024
@psychedelicious Thank you for the reply. On my side, if the VPN is turned off, there is no way to load the model, as follows.

When I turned on the VPN and generated an image, I could continue generating without the VPN. Today I turned the VPN off and, without adjusting any parameters, got a network connection failure again when generating. (I just tested it again; I can still continue generating after turning off the VPN. Maybe it was caused by something else before. Sorry.)
Anyway, every time the model is loaded, an internet connection is required. This is very inconvenient for me. Since using a VPN causes problems with my other software, I'll consider going back to the old version for now.
psychedelicious commented on Aug 7, 2024
@MOzhi327 Ok, thanks for clarifying. Yes, we know about the internet connectivity issue and will fix it.
MOzhi327 commented on Aug 7, 2024
@psychedelicious Thank you very much
psychedelicious commented on Aug 12, 2024
The problem was introduced when we implemented single-file loading in v4.2.6 on 15 July 2024. We have a few large projects that are taking all contributors' time and which are both resource and technical blockers to resolving this issue.
You do not need to use single-file loading in the first place. You can convert your checkpoint/safetensors models to diffusers before going offline (there's a button in the model manager), and then no internet connection is needed to generate.
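For anyone who wants to picture what that button does, here is a rough diffusers-level equivalent (Invoke's internal conversion isn't necessarily implemented this way, and the paths are placeholders): convert once while online, then the converted copy loads with no network access.

```python
# Rough, hand-rolled equivalent of converting a checkpoint to diffusers format.
from diffusers import StableDiffusionPipeline

# Run once while online: read the .safetensors checkpoint and save it as a
# diffusers-format folder (unet/, vae/, text_encoder/, tokenizer/, ...).
pipe = StableDiffusionPipeline.from_single_file("models/my-sd15-model.safetensors")
pipe.save_pretrained("models/my-sd15-model-diffusers")

# Later, fully offline: the converted folder loads without touching the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "models/my-sd15-model-diffusers",
    local_files_only=True,
)
```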
MOzhi327 commented on Aug 13, 2024
@psychedelicious Thank you! That works, but for models in an external directory the conversion costs additional disk space. It's fine for someone like me who mainly uses one or two models, but it may not be practical for others who need many models.
webtv123 commented on Aug 13, 2024
THANK YOU.
Asherathe commented on Apr 27, 2025
As of today, I've suddenly begun having this issue due to an unspecified SSL error with HuggingFace, even though I have a working internet connection. I had no idea internet access was required when using safetensor format. I really wish this was communicated more clearly.