Fooocus v1.0.11 says update available, but never actually successfully updates #144
Description
Fooocus works, but it's the old version.
Whenever I look at the Packages page it says there is an update, but after supposedly updating it still stays at 1.0.11, with lots of messages:
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Fooocus version: 1.0.11
Inference Engine exists.
Inference Engine checkout finished.
Total VRAM 11264 MB, total RAM 32696 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : cudaMallocAsync
Using xformers cross attention
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 2048 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 2048 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 2048 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
It goes on and on with the same or similar lines, then finally ends with:
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is 1280 and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is None and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is 1280 and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is None and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is 1280 and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is None and using 12 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 768, context_dim is 1280 and using 12 heads.
model_type EPS
adm 2560
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Do these messages point to some error that explains why it never updates to version 2.xxx?
On my other computer, Fooocus updated from 1.xxx to 2.xxxx just fine.
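In case it's relevant, this is the manual update I was planning to try as a workaround. It's just a sketch: it assumes the package manager installed Fooocus as a git clone of https://github.com/lllyasviel/Fooocus, and the install path below is a guess, not my actual location.

rem Hypothetical install path; adjust to wherever the Packages page put Fooocus
cd C:\StabilityMatrix\Packages\Fooocus
rem Fetch the latest upstream commits (assumes origin points at the Fooocus repo)
git fetch origin
rem Compare the currently checked-out commit with the newest one upstream
git log -1 --oneline
git log -1 --oneline origin/main
rem Fast-forward to the latest; Fooocus prints its version on the next launch
git pull

I haven't run this yet, since I'd rather understand why the built-in updater keeps reporting 1.0.11.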
Thanks for any help.