Hi,

Thanks for building these models. I noticed that the training scripts for the MP pre-trained models use small batch sizes of 16. What was the reasoning behind this choice?
My application requires training on graphs with hundreds to a few thousand nodes, and I was hoping that MACE's lack of explicit triplet angle computation (as in DimeNet or GemNet) would offer more favorable memory scaling. Any insights would be greatly appreciated.
Thanks,
Rees
Hi @rees-c,
Sorry for the long delay in replying; the MACE GitHub repository would be a more suitable place for this question.
The batch size affects both memory consumption and training dynamics.
During training, MACE can fit roughly 1000 nodes on a single A100 GPU. However, we rarely go above a batch size of 64 per GPU, because we see accuracy degrade beyond that.
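To make that node budget concrete when graph sizes vary from hundreds to a few thousand nodes, here is a minimal, illustrative sketch of batching by total node count instead of by a fixed number of graphs. This is not MACE's own data pipeline; the `num_nodes` attribute and the `max_nodes` budget are assumptions for illustration.

```python
from typing import Iterable, Iterator, List


def batches_by_node_budget(graphs: Iterable, max_nodes: int = 1000) -> Iterator[List]:
    """Greedily pack graphs into batches holding at most `max_nodes` nodes.

    Illustrative sketch only: assumes each graph object exposes a
    `num_nodes` attribute (PyTorch Geometric-style data objects do).
    A single graph larger than the budget is emitted as its own batch.
    """
    batch: List = []
    node_count = 0
    for g in graphs:
        n = g.num_nodes  # assumed attribute; adapt to your data class
        if batch and node_count + n > max_nodes:
            yield batch
            batch, node_count = [], 0
        batch.append(g)
        node_count += n
    if batch:
        yield batch
```

With a scheme like this, graphs of a few dozen atoms end up in batches comparable in size to the batch size of 16 used for the MP models, while a single multi-thousand-node graph forms its own batch; whether that still fits on one A100 then depends on the model size and cutoff radius.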