
[BUG] Training using init-model gives normal lcurve and bad model #3751

Open
zjgemi opened this issue May 6, 2024 · 1 comment
zjgemi commented May 6, 2024

Bug summary

In the DP-GEN workflow, training in iter-0 looks fine. The model trained in iter-1 (with init-model) has a large RMSE of ~100 meV, while the lcurve indicates much better accuracy.

(screenshot: 20240506-193625)

For the worst system, the RMSE increases by a factor of >30 after the iter-1 training.

(screenshot: 20240506-194007)

This problem does not appear when using finetune (instead of init-model) in iter-1.
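
For context (not part of the original report): a minimal sketch of how the two modes are invoked with the DeePMD-kit PyTorch backend. The checkpoint name `model.ckpt.pt` is a placeholder, and the flags are as I recall them from the documentation.

```bash
# Hypothetical checkpoint name; both modes start from the same iter-0 weights.

# init-model: load the previous weights and train as a fresh run
dp --pt train input.json --init-model model.ckpt.pt

# finetune: fine-tune from the same checkpoint instead
dp --pt train input.json --finetune model.ckpt.pt
```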

DeePMD-kit Version

stable-0411

Backend and its version

PyTorch

How did you download the software?

docker

Input Files, Running Commands, Error Log, etc.

iter1input.zip

Steps to Reproduce

bash aefcb166ade9f2faf80a15e8a6f0d0cb70a6d33a.sub

Further Information, Files, and Links

No response

@zjgemi zjgemi added the bug label May 6, 2024
@iProzd iProzd self-assigned this May 6, 2024
@Chengqian-Zhang
Collaborator

I followed these steps:

  1. fine-tuning based on the multitask pre-trained model
  2. init-model based on the fine-tuned model obtained in step 1

and found that I can reproduce the bug on the stable_0411 branch, but everything works well on the latest devel branch, so please check whether the issue still appears on the latest devel branch.
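
For reference, a minimal sketch of that two-step sequence (file names are placeholders, not taken from the attached inputs; flags per the DeePMD-kit PyTorch backend docs):

```bash
# Step 1: fine-tune from the multitask pre-trained model
# (for a multitask checkpoint a specific head/branch may also need to be selected)
dp --pt train finetune_input.json --finetune pretrained_multitask.pt

# Step 2: init-model from the checkpoint produced in step 1
dp --pt train train_input.json --init-model model.ckpt.pt
```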
