Hello, thanks for your great work. I have a quick question about training.
I'm trying to run training and getting an OutOfMemoryError using a (single) 32 GB GPU (V100). What do you use for training? Also, with your compute setup, approximately how long does training take?
Thanks so much!
For metric fine-tuning, we use 4 NVIDIA A100 GPUs to train our largest model (BEiT-L). Training on NYU (~25k samples, 5 epochs) takes less than 2 hours on 4 A100s (40 GB).
Relative pre-training on 12 datasets (M12 from the paper) takes around 3-5 days on 8 RTX A6000-like GPUs. This produces the MiDaS v3.1 models. Please refer to the MiDaS repo for more details.
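If a single 32 GB V100 runs out of memory, a common workaround is gradient accumulation: run several small micro-batches and step the optimizer once, so the gradient matches a larger effective batch. The sketch below is generic PyTorch, not the actual training script from this repo; the model, batch sizes, and optimizer are placeholders.

```python
import torch
import torch.nn as nn

# Tiny placeholder model standing in for the real depth network (assumption).
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

micro_batch = 2    # what fits in GPU memory at once
accum_steps = 8    # 2 * 8 = effective batch of 16
effective_batch = micro_batch * accum_steps

optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(micro_batch, 64)
    target = torch.randn(micro_batch, 1)
    loss = nn.functional.mse_loss(model(x), target)
    # Scale each micro-batch loss so the accumulated gradient
    # matches what a single large-batch step would produce.
    (loss / accum_steps).backward()
optimizer.step()

print(effective_batch)  # → 16
```

Mixed precision (`torch.cuda.amp.autocast`) can further cut memory on a V100, at the cost of some numerical care in the loss scaling.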