
🌟 [FEATURE] How to Train and Validate on Separate Datasets #378

Closed
Guackkk opened this issue Oct 24, 2023 · 2 comments
Labels
enhancement New feature or request

Comments


Guackkk commented Oct 24, 2023

Hello!

I'm currently training NequIP. Is it possible to train and validate the model on different datasets? Any insight or guidance on this would be greatly appreciated. Thank you!

Guackkk added the enhancement label Oct 24, 2023
Linux-cpp-lisp (Collaborator) commented

Hi @Guackkk ,

Thanks for your interest in our code!

To use a separate validation dataset during training, see the example options at https://github.com/mir-group/nequip/blob/main/configs/full.yaml#L154-L157.
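As a rough sketch of what that section of the config looks like, a separate validation set can be declared alongside the training dataset. The exact key names and file paths below are illustrative, not copied from the linked file, so check `full.yaml` for the authoritative syntax:

```yaml
# Training dataset (ASE-readable file, e.g. extxyz)
dataset: ase
dataset_file_name: ./data/train.xyz        # hypothetical path

# Separate validation dataset: same keys, prefixed with `validation_`
# (assumed naming pattern; verify against configs/full.yaml)
validation_dataset: ase
validation_dataset_file_name: ./data/validate.xyz   # hypothetical path
```

If the `validation_dataset` keys are omitted, the validation frames are instead drawn from the training dataset according to `n_train` / `n_val`.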

Alternatively, you can use nequip-evaluate to compute error metrics and predictions of a trained model on any test or evaluation dataset. The dataset is described with our YAML syntax in a separate YAML file, which you pass with the --dataset-config option.
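A minimal usage sketch of that workflow, with hypothetical file names (`test-data.yaml`, the training run directory, and the output path are placeholders, not values from this thread):

```shell
# test-data.yaml describes only the held-out dataset, using the same
# YAML dataset syntax as the training config, e.g.:
#   dataset: ase
#   dataset_file_name: ./data/test.xyz
#
# Evaluate the trained model from a training run directory on it:
nequip-evaluate \
  --train-dir results/my-run \
  --dataset-config test-data.yaml \
  --output test-predictions.xyz
```

Consult `nequip-evaluate --help` for the full and current set of options, as flags may differ between NequIP versions.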


Guackkk commented Oct 27, 2023

Thanks for your quick and kind reply!

Guackkk closed this as completed Oct 27, 2023