LAMMPS: atoms move too far #4802
Potatomashh asked this question in Q&A (unanswered; 1 reply).
-
It is suggested to investigate the model's accuracy during the simulation by using the model deviation (https://docs.deepmodeling.com/projects/deepmd/en/master/test/model-deviation.html) and, if necessary, to generate additional training data to enhance the model's generalization using the concurrent learning method. Model compression saves memory and thus enables simulations of larger systems on the same hardware. A compressed model's accuracy is usually the same as the original model's, as long as the configurations visited in the simulation are "covered" by the training data; otherwise the accuracy is not guaranteed.
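For concreteness, the model deviation can be monitored directly during the LAMMPS run by passing an ensemble of independently trained models to the pair style; the file names and output interval below are placeholders:

```
# Run with an ensemble of DP models; LAMMPS writes the max/min/avg
# deviation of the predicted forces to md.out every 100 steps.
pair_style deepmd graph.000.pb graph.001.pb graph.002.pb graph.003.pb out_file md.out out_freq 100
pair_coeff * *
```

Frames whose force deviation exceeds a chosen trust level are the ones worth labeling and adding to the training set in a concurrent-learning (e.g. DP-GEN) loop.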
-
Hi all,
I’m currently using LAMMPS to simulate the selective laser melting (SLM) process of an Al-Mn-Mg-Sc-Zr alloy. The interatomic interactions are described by a DeepMD-trained potential (model-compress.pb). The simulation setup includes a substrate and multiple powder particles, and laser scanning is implemented via fix heat combined with a moving region.
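To make the setup concrete, a minimal sketch of a moving heated region in LAMMPS follows; this is not my actual script, and the spot geometry, scan speed, and flux value are hypothetical placeholders (metal units assumed):

```
# Illustrative only: the laser spot as a cylindrical region translating
# along x at a constant scan speed.
variable xscan equal 0.01*time                     # spot displacement in Angstrom
region spot cylinder z 100.0 100.0 50.0 INF INF units box move v_xscan NULL NULL
fix laser all heat 10 500.0 region spot            # deposit 500 eV/ps inside the spot
```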
However, I’ve observed that some atoms—especially Mg atoms—exhibit abnormally large displacements, with some even flying out of the simulation domain, which is clearly unphysical.
I visualized the simulation using OVITO, and the abnormal motion is illustrated in the figure below.
As a temporary workaround, I use dynamic regions to delete atoms that have moved too far, every 5000 steps (~5 ps). But I understand this approach might only be hiding a deeper issue. Below is my input (.in) file:
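For illustration, the deletion step itself can be written with fix evaporate and an inverted region; the box bounds, fix ID, and seed below are placeholders rather than my actual values:

```
# Illustrative only: delete up to 1000 stray atoms every 5000 steps.
# "side out" makes the region cover everything OUTSIDE the retained box.
region keep block 0.0 400.0 0.0 400.0 0.0 300.0 units box side out
fix purge all evaporate 5000 1000 keep 74581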
So I would like to ask:
What could be the root cause of this unphysical atomic motion? Could it be due to poor training of the Deep Potential, especially for Mg-related interactions?
Is it physically or methodologically reasonable to delete atoms in this way during an SLM simulation?
Are there better ways to stabilize the system, such as improving temperature control, adjusting the timestep, tuning laser parameters, or retraining the DP model?
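By adjustments in the last question I mean something like the following; the values are illustrative only (metal units, so the timestep is in ps):

```
# Illustrative stabilization tweaks.
timestep 0.0002                         # 0.2 fs instead of the usual 1 fs
fix stab all nvt temp 900.0 900.0 0.1   # Nose-Hoover thermostat, 0.1 ps damping
```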
I have another related question: the DeepMD model I'm using was provided by someone else in the form of a model.pth file. However, when I try to use this model directly in LAMMPS as the potential, even with four GPUs, I run into out-of-memory errors.
To work around this, I first converted the .pth file to .pb format using the command dp convert-backend model.pth model.pb, and then compressed it with dp compress -i model.pb -o model-compress.pb. After that, I was able to use the model-compress.pb file in LAMMPS without GPU memory issues.
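For reference, these are the two commands, as described above:

```sh
# Convert the PyTorch-backend model to the TensorFlow .pb backend,
# then compress (tabulate) it to reduce GPU memory use.
dp convert-backend model.pth model.pb
dp compress -i model.pb -o model-compress.pb
```

The compressed model is then loaded in LAMMPS as usual, e.g. pair_style deepmd model-compress.pb.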
My question is: how accurate are the results when using this compressed model? Does this conversion and compression process compromise the precision or fidelity of the original .pth model in any significant way?
Any insights or suggestions would be greatly appreciated. Thank you in advance!