This repository has been archived by the owner on Dec 13, 2023. It is now read-only.
I use the following command to train on multiscale datasets, but the process exits with a "Killed" message:
bash ./scripts/train_multiblender.sh
I have generated the multiscale datasets and set the correct path in ./scripts/train_multiblender.sh. Training with ./scripts/train_blender.sh on the original datasets works fine.
My machine has 16 GB of RAM and 4 GB of swap, and I'd like to know the minimum memory requirements.
Thanks.
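For context, a bare "Killed" message with no Python traceback usually means the Linux kernel's OOM killer terminated the process when the system ran out of memory. A quick way to sanity-check available memory before a run is to read /proc/meminfo. This is a minimal Linux-only sketch; the field names are standard procfs keys, not anything from this repository:

```python
# Read total/available RAM and swap on Linux by parsing /proc/meminfo.
# A bare "Killed" with no traceback typically means the kernel OOM killer
# ended the process; a low MemAvailable before training is a strong hint.

def meminfo_kib():
    """Return /proc/meminfo as a dict of {field: value-in-KiB}."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # procfs reports values in KiB
    return info

if __name__ == "__main__":
    m = meminfo_kib()
    gib = 1024 ** 2  # KiB per GiB
    print(f"MemTotal:     {m['MemTotal'] / gib:.1f} GiB")
    print(f"MemAvailable: {m['MemAvailable'] / gib:.1f} GiB")
    print(f"SwapTotal:    {m['SwapTotal'] / gib:.1f} GiB")
```

After an OOM kill, the kernel log (e.g. `dmesg` or `journalctl -k`) usually contains an "Out of memory: Killed process ..." entry confirming the cause.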
The text was updated successfully, but these errors were encountered:
I have about the same setup as yours (16 GB RAM and 2 GB swap) and ran into the same "Killed" problem. Setting batch_size = 512 solved it for me.
Thanks.
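Why reducing batch_size helps: the memory held per training step grows linearly with the number of rays in a batch. The sketch below is only back-of-envelope arithmetic; the sample count, MLP width, and layer count are illustrative assumptions, not this repository's actual defaults:

```python
# Rough estimate of per-step activation memory for a NeRF-style MLP.
# All parameters below are ASSUMED illustrative values, not the repo's
# real configuration; the point is that memory scales linearly with
# batch_size, so dropping 4096 -> 512 cuts this term by 8x.

def activation_mib(batch_size, samples_per_ray=128, hidden_width=256,
                   num_layers=8, bytes_per_float=4):
    # A forward pass keeps roughly one (batch * samples, width) float32
    # tensor per layer alive for backpropagation.
    per_layer = batch_size * samples_per_ray * hidden_width * bytes_per_float
    return num_layers * per_layer / 2**20  # MiB

for bs in (4096, 1024, 512):
    print(f"batch_size={bs:5d} -> ~{activation_mib(bs):6.0f} MiB of activations")
```

This ignores the dataset itself (multiscale data holds rays at several resolutions, so it occupies more host RAM than the single-scale version), which is why the total footprint can still be tens of GB even with a small batch.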
I upgraded to 32 GB of RAM and training now runs successfully, occupying about 31.5 GB during training.
However, I observed something odd: when I run ./scripts/train_multiblender.sh on a machine with 64 GB of RAM, the program uses up to 55 GB. Bizarre!
Anyway, thank you for the fix; I'll close this issue.