
Question about CPU OOM. #29

Closed
MZhao-ouo opened this issue Jul 2, 2022 · 2 comments

Comments

@MZhao-ouo

I use the following command to train on the multiscale datasets, but the process gets killed (the only output is "Killed").

bash ./scripts/train_multiblender.sh

I have generated the multiscale datasets and set the correct path in ./scripts/train_multiblender.sh. Training works fine with ./scripts/train_blender.sh on the original datasets.

My computer has 16 GB of RAM and 4 GB of swap, and I'd like to know the minimum memory requirements.

Thanks.

@Leviosaaaa

I have about the same setup (16G MEM and 2G SWAP) as yours and encountered the same "killed" problem. The problem was then solved when I set batch_size = 512.
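
For anyone hitting the same limit: assuming the batch size is read from a gin-style config used by the training script (the key name below is an assumption based on the mip-NeRF-style setup, not verified against this exact repo), lowering it is a one-line edit in that config:

Config.batch_size = 512  # assumed gin key; the default is typically much larger (e.g. 4096)

The trade-off is fewer rays per gradient step, so training may need more iterations to reach the same quality.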

@MZhao-ouo
Author

MZhao-ouo commented Jul 12, 2022

I have about the same setup (16G MEM and 2G SWAP) as yours and encountered the same "killed" problem. The problem was then solved when I set batch_size = 512.

Thanks.
I upgraded to 32 GB of RAM and it now runs successfully. It occupied about 31.5 GB of memory during training.

But I observed a weird phenomenon: when I ran ./scripts/train_multiblender.sh on a computer with 64 GB of RAM, the program occupied up to 55 GB. It's bizarre!
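
For context, a rough back-of-envelope (every constant below is an assumption for illustration, not a value taken from this repo) shows why preloading all rays of a multiscale Blender scene into host memory already costs several GB, and why temporary copies made while concatenating and shuffling those arrays can push the peak far higher:

# Back-of-envelope estimate of the in-memory ray buffer for one multiscale
# Blender scene. Every constant here is an assumption, not read from the repo.
n_images = 100          # assumed number of training images
base_res = 800          # assumed full-resolution image size (800x800)
scales = [1, 2, 4, 8]   # assumed multiscale pyramid: full, 1/2, 1/4, 1/8
floats_per_ray = 16     # assumed float32 payload per ray: origin, direction,
                        # view direction, radius, near/far, loss mult, RGB, ...

rays = sum(n_images * (base_res // s) ** 2 for s in scales)
gb = rays * floats_per_ray * 4 / 1e9
print(f"~{rays / 1e6:.0f}M rays, ~{gb:.1f} GB for a single copy of the buffer")
# Shuffling or concatenating arrays of this size can keep several temporary
# copies alive at once, which is one plausible reason peak usage climbs
# well past the size of a single copy.

Under these assumptions a single copy is only ~5 GB, so peaks of 31.5 GB or 55 GB would suggest that several copies (or wider dtypes) coexist during dataset preparation rather than that the model itself needs that much.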

Anyway, thank you for the suggestion, and I will close this issue.
