While the recent input pipeline rework has already greatly reduced memory consumption, we can definitely decrease it further.
The current major RAM hog is the dict of ASE atoms, not the data pipeline. We could avoid this by reading the file on the fly in the generator instead of all at once, so that we never need to keep all atoms in memory at the same time (see the sketch below).
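A minimal sketch of what this could look like, assuming the structures live in an ASE-readable file (e.g. extxyz); `ase.io.iread` yields one `Atoms` object at a time, so only the current frame is held in memory. The file name and generator function here are illustrative, not the project's actual API:

```python
# Sketch: stream structures lazily instead of building a dict of all ASE atoms.
from ase.io import iread


def atoms_generator(path):
    """Yield one ASE Atoms object at a time instead of loading everything."""
    # ase.io.iread returns a generator, so only the current frame is in memory.
    for atoms in iread(path):
        yield atoms


if __name__ == "__main__":
    # Hypothetical usage: process each structure as it is read from disk.
    for atoms in atoms_generator("dataset.extxyz"):
        positions = atoms.get_positions()
        numbers = atoms.get_atomic_numbers()
```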
Although this could certainly be done, the current PBP dataset already allows training on even the largest available datasets with 64 GB of RAM, which can be expected of a modern workstation.