
Not enough on 4GB vRAM #14

Closed
p0mad opened this issue Aug 12, 2023 · 20 comments

Comments

@p0mad

p0mad commented Aug 12, 2023

Hi, the instructions say it works on 4GB of VRAM, but I'm using a GTX 850, which has 4GB of VRAM, and I got a not-enough-memory error!
[two screenshots of the error attached]

@Tigwin

Tigwin commented Aug 12, 2023

I have a 3080 FE, which has 10GB of VRAM. I think we need a medvram setting, because it takes a very long time to render some things. But if I keep it simple (low poly, "whale"), it renders about as fast as I'd expect.

@NoMansPC

It eats up almost all of my RAM too, making my computer unusable. Sadly, for now, buying a new card is still the proper way to use SDXL.

@Tigwin

Tigwin commented Aug 12, 2023

I just found a temporary fix for the VRAM issue on my desktop: disable the video card, then enable it. It freed up almost half my VRAM (PowerShell, from an elevated prompt):

Get-PnpDevice -FriendlyName "NVIDIA GeForce RTX 3080" | Disable-PnpDevice -Confirm:$false
Start-Sleep -Seconds 5
Get-PnpDevice -FriendlyName "NVIDIA GeForce RTX 3080" | Enable-PnpDevice -Confirm:$false

@iLKke

iLKke commented Aug 13, 2023

Fails for me the same way as for the OP.
I have 16GB of RAM and a GTX 1050 Ti with 4GB of VRAM.
I noticed that it gobbles up all the free space on C: (circa 7GB) before failing due to not enough memory.

@sangshuduo

I have 32GB of RAM and a 3050 Ti with 4GB of VRAM. I wish it could be supported.

@AiSaurabhPatil

AiSaurabhPatil commented Aug 14, 2023

It's working in my case.
I have 16GB of RAM and an Nvidia GTX 1650 with 4GB of VRAM.
Examples:
[two generated images attached]

Yeah, but that's true: it makes my system unusable!

@iLKke

iLKke commented Aug 14, 2023

OK, as suspected: I needed to free up enough disk space and now it runs.
The error message was misleading, as it's not VRAM or RAM but disk space that was causing it to fail.

Curiously enough, it seems to only be using about 2GB of VRAM.
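Since several reports in this thread turned out to be disk-space failures masquerading as memory errors, a pre-flight free-space check is cheap insurance. A minimal sketch, not part of Fooocus; the function name and the 10GB threshold are my own assumptions, loosely based on the roughly 7GB of C: consumed here:

```python
import shutil

def check_free_disk(path=".", required_gb=10):
    """Return (ok, free_gb): whether `path` has at least `required_gb` GB free.

    The threshold is a guess based on this thread, where about 7GB of C:
    was consumed (model offloading / page-file growth) before the run
    failed with a misleading out-of-memory error.
    """
    free_bytes = shutil.disk_usage(path).free
    free_gb = free_bytes / 1024**3
    return free_gb >= required_gb, free_gb

if __name__ == "__main__":
    ok, free_gb = check_free_disk(".")
    print(f"free: {free_gb:.1f} GB, enough: {ok}")
```

Running something like this before launch would have pointed at the disk rather than at VRAM.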

@lllyasviel
Owner

Both the GTX 1650 and the 850 have 4GB of VRAM, but it seems that the 1650 works and is "smarter" about using VRAM more efficiently.
The 850 is probably genuinely very difficult to support. We will probably add some notes to the Readme to make this clearer.

@mimze

mimze commented Aug 15, 2023

16GB of RAM and 4GB of VRAM (GTX 950M). I get no errors, but it just gets stuck when I hit Generate.

@game-alle

I have an RTX 3050 Ti, and it's just stuck in this state and not progressing.
[two screenshots attached]

@xjdeng

xjdeng commented Nov 13, 2023

Also fails on mine: GTX 960M, 4GB VRAM, 16GB RAM:

[screenshot attached]

D:\JJ\python\Fooocus_win64_2-1-791>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --preset realistic
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--preset', 'realistic']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.802
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
Total VRAM 4096 MB, total RAM 16250 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
Set vram state to: LOW_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce GTX 960M : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
model_type EPS
adm 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra keys {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: D:\JJ\python\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors
Request to load LoRAs [('SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25), ('None', 0.25), ('None', 0.25), ('None', 0.25), ('None', 0.25)] for model [D:\JJ\python\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors].
Loaded LoRA [D:\JJ\python\Fooocus_win64_2-1-791\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for model [D:\JJ\python\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors] with 1052 keys at weight 0.25.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 10.48 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 7801272631270716136
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] cat, cinematic, complex, highly detailed, extremely, sharp focus, beautiful, stunning composition, symmetry, great colors, aesthetic, very inspirational, colorful, deep color, inspiring, original, full bright, lovely, cute, artistic, intricate, elegant, perfect light, fine detail, clear, ambient background, professional, creative, positive, amazing, pure, wonderful, unique
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] cat, very coherent, cute, cinematic, detailed, intricate, stunning, highly refined, epic composition, magical atmosphere, full color, elegant, luxury, amazing detail, professional, winning, thoughtful, calm, beautiful, unique, best, awesome, perfect, ambient light, shining, illuminated, translucent, fine, artistic, pure, positive, attractive, creative, vibrant
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
Preparation time: 41.61 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model

D:\JJ\python\Fooocus_win64_2-1-791>pause
Press any key to continue . . .

@lllyasviel
Owner

Yes, I should modify the Readme to say that the 4GB of VRAM needs a GPU architecture that supports float16.

But how can GTX 960M and SDXL be put in the same sentence?
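The float16 point can be made concrete with a compute-capability check. This is a sketch under my own assumptions, not Fooocus's actual logic: with PyTorch one would feed it the tuple from `torch.cuda.get_device_capability(0)`, and the 5.3 cutoff is the usual rule of thumb for native half-precision arithmetic (Maxwell parts like the 960M report 5.0; the GTX 1650 reports 7.5):

```python
def has_fp16_support(major, minor):
    """Heuristic: CUDA compute capability (major, minor) -> usable fp16.

    Native half-precision arithmetic arrived with compute capability 5.3
    and is generally usable from Pascal (6.x) onward. Maxwell parts
    (5.0/5.2, e.g. the GTX 960M) can store fp16 tensors but cannot
    compute in fp16 efficiently. This threshold is a rule of thumb,
    not an official Fooocus check.
    """
    return (major, minor) >= (5, 3)

# GTX 960M reports (5, 0); GTX 1650 reports (7, 5)
print(has_fp16_support(5, 0))  # False
print(has_fp16_support(7, 5))  # True
```

This would explain why two cards with identical 4GB VRAM budgets behave so differently in this thread.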

@prime0x2

I have 8GB of RAM and a 1050 Ti with 4GB of VRAM. Can I use it?

@xjdeng

xjdeng commented Nov 30, 2023

Yes, I should modify the Readme to say that the 4GB of VRAM needs a GPU architecture that supports float16.

But how can GTX 960M and SDXL be put in the same sentence?

Because

  1. SDXL and 4GB VRAM were in the same sentence (or at least in the same repo)
  2. The 960M worked fine with fp16 SD 1.5 models

Now it's entirely possible it's another issue, and I can accept that. But it seems this repo is implying it'll work with all 4GB Nvidia GPUs, when I've found a counterexample: my old laptop's.

@AFOLcast

AFOLcast commented Nov 30, 2023 via email

@xjdeng

xjdeng commented Nov 30, 2023 via email

@rohanmandrekar

Fails for me the same way as for the OP. I have 16GB of RAM and a GTX 1050 Ti with 4GB of VRAM. I noticed that it gobbles up all the free space on C: (circa 7GB) before failing due to not enough memory.

Same issue. Is there any way I can prevent it from using space on the C: drive?

@mashb1t
Collaborator

mashb1t commented Dec 29, 2023

You cannot: when VRAM is exhausted, RAM is used, and when RAM is (nearly) full, the swap is used.
Preventing offloading from the GPU is possible, but then you won't be able to run Fooocus.
See https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md
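That offload chain (VRAM, then system RAM, then swap on disk) can be sketched as a simple placement policy. This is illustrative only; the function and the tier sizes are hypothetical, not Fooocus code:

```python
def place_model(model_gb, vram_free_gb, ram_free_gb, swap_free_gb):
    """Pick the first memory tier with room, mirroring the offload chain:
    VRAM -> system RAM -> swap (disk). Returns the tier name, or raises
    MemoryError if even swap cannot hold the model -- the "not enough
    memory" case reported in this thread."""
    for tier, free in (("vram", vram_free_gb),
                       ("ram", ram_free_gb),
                       ("swap", swap_free_gb)):
        if model_gb <= free:
            return tier
    raise MemoryError("not enough memory in any tier")

# A ~6GB SDXL checkpoint on a 4GB card spills to RAM; with RAM also
# full it spills to swap, which is why free disk space matters here.
print(place_model(6, vram_free_gb=4, ram_free_gb=16, swap_free_gb=20))  # ram
```

This is also why the only real levers are freeing a tier (disk space, RAM) or upgrading the card, as noted above.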

@mashb1t mashb1t closed this as completed Dec 29, 2023
xizzat pushed a commit to xizzat/Fooocus-Extra that referenced this issue Jan 14, 2024
* Original PR author - preset switching - 2.1.860

* Resolve conflicts
xizzat added a commit to xizzat/Fooocus-Extra that referenced this issue Jan 20, 2024
* Merge - preset switching - 2.1.861 (lllyasviel#14)

* Original PR author - preset switching - 2.1.860

* Resolve conflicts

* Merge wildcard sequential reading 2.1.861 (lllyasviel#15)

* Wildcard enhancement: toggle sequential reading

Wildcard enhancement: a checkbox switches the wildcard reading method. Unchecked (default) reads a random line; checked reads lines in order, using the same seed.

* Code from 刁璐璐

* update

* Update async_worker.py

* Merge style-label alignment CSS (lllyasviel#16)

* ui_wildcards_enhance

Wildcard UI file

* Wildcards Artist

Wildcards Artist character-creation wildcard data file

* Update webui.py

Add two lines of code to insert the wildcard-enhancement tab UI

* Chinese translation file, including wildcard translations

* Update xhox_bangs.txt

* Update xhox_hanfu.txt

* Wildcards Artist v 0.92

Add a few small features

* Add files via upload

* Add files via upload

* Update wildcard data translations

* Wildcards Artist - wildcard character-creation master

Finalize the name

* Update ui_wildcards_enhance.py

* Add a random-artist section to random presets

* Set several elements' default values to the first item in their lists

Set several elements' default values to the first item in their lists, so users can change the defaults by reordering the wildcard file contents

* Update wildcard data to match the configured defaults

* Add files via upload

* Core functionality complete; what remains is updating wildcard data and translations

Core functionality complete, except for a refresh-options button

* Add files via upload

* v0.93

* Add files via upload

* Add files via upload

* Add files via upload

* Update webui.py

* Wildcard enhancement: toggle sequential reading

Wildcard enhancement: a checkbox switches the wildcard reading method. Unchecked (default) reads a random line; checked reads lines in order, using the same seed.

* Code from 刁璐璐

* update

* Fork of Fooocus-feature-add-preset-selection

* For personal use; merge updates

* Update async_worker.py

* Revert to 2.1.861

* Style-label alignment CSS

* Adjust max width to 50% - 5px, from -15px

* Attempt to resolve conflicts

* Fooocus wildcards enhance (lllyasviel#17)

* ui_wildcards_enhance

Wildcard UI file

* Wildcards Artist

Wildcards Artist character-creation wildcard data file

* Update webui.py

Add two lines of code to insert the wildcard-enhancement tab UI

* Chinese translation file, including wildcard translations

* Update xhox_bangs.txt

* Update xhox_hanfu.txt

* Wildcards Artist v 0.92

Add a few small features

* Add files via upload

* Add files via upload

* Update wildcard data translations

* Wildcards Artist - wildcard character-creation master

Finalize the name

* Update ui_wildcards_enhance.py

* Add a random-artist section to random presets

* Set several elements' default values to the first item in their lists

Set several elements' default values to the first item in their lists, so users can change the defaults by reordering the wildcard file contents

* Update wildcard data to match the configured defaults

* Add files via upload

* Core functionality complete; what remains is updating wildcard data and translations

Core functionality complete, except for a refresh-options button

* Add files via upload

* v0.93

* Merge the wildcard-enhancement character creator updated to 2.1.861

* Attempt to resolve conflicts

---------

Co-authored-by: xhoxye <129571231+xhoxye@users.noreply.github.com>
@cp818

cp818 commented Apr 15, 2024

Fails for me the same way as for the OP. I have 16GB of RAM and a GTX 1050 Ti with 4GB of VRAM. I noticed that it gobbles up all the free space on C: (circa 7GB) before failing due to not enough memory.

Same issue. Is there any way I can prevent it from using space on the C: drive?

Same here too. No clear instructions?

@mashb1t
Collaborator

mashb1t commented Apr 15, 2024

@cp818 let's not revive a 4-month-old thread; please open a discussion or a new issue and provide all necessary information such as terminal output, hardware specs, etc.
