
GPU memory not fixed when training #81

Closed
ProBURN-E opened this issue Jul 4, 2024 · 1 comment
Labels
question (Further information is requested) · solved (issue has been solved)

Comments

@ProBURN-E

Thank you for your great work!
Why does the GPU memory change all the time during training instead of being fixed all the time?
Except for changing scale to 2, I use the default configs.

@neosr-project
Owner

neosr-project commented Jul 4, 2024

Hi @ProBURN-E, thanks. VRAM usage varies due to multiple factors; it's normal. Internal bottlenecks cause it, and different network designs also influence it. For example, networks like compact and span have stable gpu-util and VRAM usage, as opposed to omnisr, which fluctuates more.
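
If you want to watch the fluctuation yourself, a minimal sketch like the one below (plain PyTorch, not neosr API) logs allocated vs. reserved VRAM at each step; the gap between the two is the caching allocator at work.

```python
# Minimal sketch (plain PyTorch, not neosr API): log VRAM around a training step.
import torch

def log_vram(step: int) -> None:
    # memory_allocated: bytes held by live tensors; memory_reserved: bytes
    # cached by the allocator. Both move as batch contents and caching change.
    alloc = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"step {step}: allocated {alloc:.0f} MiB, reserved {reserved:.0f} MiB")
```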
Another reason could be slow dataloading. If you have an SSD, it is recommended to put your dataset on it, since that avoids the high read times of an HDD.
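
A quick way to check whether dataloading is the bottleneck is to time how long each batch takes to arrive. A generic sketch, assuming `loader` is any torch DataLoader rather than a neosr-specific object:

```python
# Minimal sketch: time batch arrival from a DataLoader. Consistently long
# fetch times (relative to the GPU step) point at an I/O or decode bottleneck.
import time
from torch.utils.data import DataLoader

def profile_loader(loader: DataLoader, n_batches: int = 50) -> None:
    t0 = time.perf_counter()
    for i, _batch in enumerate(loader):
        print(f"batch {i}: fetched in {time.perf_counter() - t0:.3f}s")
        if i + 1 >= n_batches:
            break
        t0 = time.perf_counter()  # reset so only the next fetch is measured
```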

ps: another factor could be the image resolution. If your dataset has large images, they take longer to decode, which creates a bottleneck. It is recommended to tile your images to a lower resolution, such as 512x512px.
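
For tiling, something along these lines works (a Pillow sketch; the paths, the `*.png` glob, and the 512px tile size are assumptions to adapt to your dataset):

```python
# Minimal sketch: crop every PNG in `src` into non-overlapping 512x512 tiles.
# Edge strips that don't fill a full tile are dropped here; keep them if needed.
from pathlib import Path
from PIL import Image

def tile_images(src: str, dst: str, size: int = 512) -> None:
    out = Path(dst)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(src).glob("*.png")):
        img = Image.open(img_path)
        w, h = img.size
        for y in range(0, h - size + 1, size):
            for x in range(0, w - size + 1, size):
                img.crop((x, y, x + size, y + size)).save(
                    out / f"{img_path.stem}_{y}_{x}.png"
                )
```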

@neosr-project added the question and solved labels on Jul 4, 2024