Save optimizer state along with model checkpoints #203
Labels: Alchemical Model (Alchemical model experimental architecture), Discussion (Issues to be discussed by the contributors), Priority: High (Critical issues needing immediate attention), SOAP BPNN (SOAP BPNN experimental architecture)
Not saving the optimizer state leads to large jumps in the loss when resuming training with Adam, undoing much of the optimization done up to that point. This is most important when continuing training on the exact same dataset. In other cases (e.g. fine-tuning), I believe it would still be beneficial, although we could add a flag to disable it.
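For reference, a minimal sketch of what this could look like with plain PyTorch (the model, optimizer, and checkpoint keys below are placeholders for illustration, not this repository's actual API). The point is that Adam's running moment estimates live in the optimizer's state dict, so they must be saved and restored alongside the model weights to avoid resetting them on resume:

```python
import torch

# Placeholder model and optimizer; the real architectures (SOAP BPNN,
# Alchemical Model) would be constructed by the trainer instead.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Saving: bundle the optimizer state (Adam's first/second moment
# estimates and step counts) into the checkpoint with the model weights.
checkpoint = {
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "epoch": 10,  # example value
}
torch.save(checkpoint, "checkpoint.pt")

# Resuming: restore both states before continuing training, so the
# first resumed steps use the accumulated moments rather than zeros.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
start_epoch = checkpoint["epoch"] + 1
```

A flag (e.g. an option in the training config) could then simply skip the `optimizer_state_dict` restore for use cases like fine-tuning on a new dataset, where fresh moment estimates may be preferable.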