add one sentence "#SBATCH -A TG-DMR160007" #451

Closed
tfcao888666 opened this issue Jul 1, 2021 · 8 comments
Labels: documentation, enhancement

Comments

@tfcao888666

Hi all,
I want to add the line "#SBATCH -A TG-DMR160007" to the .sub file so that I can submit jobs under my allocation. Could you tell me how to add it in the machine file? Here is my machine file. Thank you!
{
    "deepmd_path":      "/miniconda3/bin/dp",
    "train_machine":    {
        "batch": "slurm",
        "work_path" :   "/expanse/lustre/scratch/tfcao/temp_project/batis3-dp/ini",
        "_comment" :    "that's all"
    },
    "train_resources":  {
        "numb_node":    1,
        "task_per_node":64,
        "partition" :   "compute",
        "exclude_list" : [],
        "source_list":  [ "/miniconda3/bin/activate" ],
        "module_list":  [ ],
        "time_limit":   "2:00:0",
        "mem_limit":    32,
        "_comment":     "that's all"
    },

"lmp_command":      "~/miniconda3/bin/lmp",
"model_devi_group_size":    1,
"_comment":         "model_devi on localhost",
"model_devi_machine":       {
    "batch": "slurm",
    "work_path" :   "/expanse/lustre/scratch/tfcao/temp_project/batis3-dp/ini",
    "_comment" :    "that's all"
},
"_comment": " if use GPU, numb_nodes(nn) should always be 1 ",
"_comment": " if numb_nodes(nn) = 1 multi-threading rather than mpi is assumed",
"model_devi_resources":     {
    "numb_node":    1,
    "task_per_node":64,
    "source_list":  ["~/miniconda3/bin/activate" ],
    "module_list":  [ ],
    "time_limit":   "2:00:0",
    "mem_limit":    32,
    "partition" : "compute",
    "_comment":     "that's all"
},

"_comment":         "fp on localhost ",
"fp_command":       "mpirun -np 64  /home/tfcao/vasp_bin/regular/vasp",
"fp_group_size":    1,
"fp_machine":       {
    "batch": "slurm",
    "work_path" :   "/expanse/lustre/scratch/tfcao/temp_project/batis3-dp/ini",
    "_comment" :    "that's all"
},
"fp_resources":     {
    "numb_node":    1,
    "task_per_node":64,
    "numb_gpu":     0,
    "exclude_list" : [],
    "source_list":  [],
    "module_list":  [],
    "with_mpi" : false,
    "time_limit":   "2:00:0",
    "partition" : "compute",
    "_comment":     "that's all"
},
"_comment":         " that's all "

}
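
For context, the request is for the generated .sub file to carry an allocation line in its Slurm header. A rough sketch of such a header, with directives mirroring the resource settings above (illustrative only; the exact script dpgen generates may differ):

    #!/bin/bash
    #SBATCH -N 1                    # numb_node
    #SBATCH --ntasks-per-node=64    # task_per_node
    #SBATCH -p compute              # partition
    #SBATCH -t 2:00:00              # time_limit
    #SBATCH -A TG-DMR160007         # the allocation line the user wants added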


tfcao888666 added the documentation and enhancement labels on Jul 1, 2021
@njzjz
Member

njzjz commented Jul 2, 2021

See #367, and custom_flags is provided in #368.
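(For illustration, a minimal hedged sketch: custom_flags takes a list of extra Slurm flags and sits inside whichever *_resources section drives the job — the section shown here is just an example, not a recommendation of where it belongs:)

    "train_resources":  {
        "numb_node":    1,
        "task_per_node":64,
        "partition" :   "compute",
        "custom_flags": ["-A TG-DMR160007"],
        "_comment":     "that's all"
    },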

@tfcao888666
Author

tfcao888666 commented Jul 2, 2021 via email

@njzjz
Member

njzjz commented Jul 2, 2021

The correct one should be "-A TG-DMR160007" instead of "TG-DMR160007".
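(A sketch of the two forms side by side:)

    "custom_flags": ["-A TG-DMR160007"]      correct: flag plus allocation name
    "custom_flags": ["TG-DMR160007"]         wrong: allocation name alone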

@tfcao888666
Author

tfcao888666 commented Jul 2, 2021 via email

@njzjz
Member

njzjz commented Jul 2, 2021

You added it to model_devi_resources, but you are running an fp task?

@tfcao888666
Author

tfcao888666 commented Jul 2, 2021 via email

@njzjz
Member

njzjz commented Jul 2, 2021

I don't understand your change... In #451 (comment), it seems that you only added it to model_devi_resources (but not fp_resources). However, you are running an fp task, right? You should add custom_flags: ["-A TG-DMR160007"] to fp_resources.
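(That is, the fp_resources block from the machine file posted above would become — a sketch built from that config:)

    "fp_resources":     {
        "numb_node":    1,
        "task_per_node":64,
        "numb_gpu":     0,
        "exclude_list" : [],
        "source_list":  [],
        "module_list":  [],
        "with_mpi" : false,
        "time_limit":   "2:00:0",
        "partition" : "compute",
        "custom_flags": ["-A TG-DMR160007"],
        "_comment":     "that's all"
    },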

@tfcao888666
Author

tfcao888666 commented Jul 2, 2021 via email

njzjz closed this as completed on Jul 4, 2021