fix typo: lcueve.out->lcurve.out #1077

Merged · 1 commit · Sep 1, 2021
2 changes: 1 addition & 1 deletion deepmd/utils/argcheck.py
@@ -581,7 +581,7 @@ def training_args(): # ! modified by Ziyao: data configuration isolated.
arg_validation_data,
Argument("numb_steps", int, optional=False, doc=doc_numb_steps, alias=["stop_batch"]),
Argument("seed", [int,None], optional=True, doc=doc_seed),
Argument("disp_file", str, optional=True, default='lcueve.out', doc=doc_disp_file),
Argument("disp_file", str, optional=True, default='lcurve.out', doc=doc_disp_file),
Argument("disp_freq", int, optional=True, default=1000, doc=doc_disp_freq),
Argument("numb_test", [list,int,str], optional=True, default=1, doc=doc_numb_test),
Argument("save_freq", int, optional=True, default=1000, doc=doc_save_freq),
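For reference, the corrected declaration can be exercised on its own. Below is a minimal sketch using the `dargs` package that `argcheck.py` builds on; `doc_disp_file` is stubbed here so the snippet is self-contained:

```python
# Minimal sketch of the corrected dargs declaration from the diff above.
from dargs import Argument

doc_disp_file = "The file for printing learning curve."  # stub of the real docstring

disp_file = Argument(
    "disp_file", str, optional=True, default="lcurve.out", doc=doc_disp_file
)

# With the old misspelled default ('lcueve.out'), any run that did not set
# disp_file explicitly would silently write its learning curve to the
# misspelled file name.
```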
26 changes: 13 additions & 13 deletions doc/train-input-auto.rst
@@ -898,7 +898,7 @@ model:
.. _`model/fitting_net[polar]/scale`:

scale:
-    | type: ``list`` | ``float``, optional, default: ``1.0``
+    | type: ``float`` | ``list``, optional, default: ``1.0``
Member (author) commented:

    @y1xiaoc Is there some different behavior in how the types are sorted?

Contributor replied:

    dargs uses a set to record dtypes. The order may change from Python 3.6 to 3.7, since newer Python guarantees that iteration order matches insertion order.
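A quick illustration of the ordering point (the insertion-order guarantee is a property of dicts from Python 3.7 onward; sets have never guaranteed any particular iteration order, which is why a set-backed dtype record can render in either order):

```python
# Set iteration order is an implementation detail, which is why the
# rendered type list can flip between ``int`` | ``float`` and
# ``float`` | ``int`` across Python versions.
types = [int, float]

print(list(set(types)))            # order unspecified; may vary by version or run
print(list(dict.fromkeys(types)))  # [<class 'int'>, <class 'float'>], insertion order kept
```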

| argument path: ``model/fitting_net[polar]/scale``

The output of the fitting net (polarizability matrix) will be scaled by ``scale``
@@ -1102,71 +1102,71 @@ loss:
.. _`loss[ener]/start_pref_e`:

start_pref_e:
-    | type: ``int`` | ``float``, optional, default: ``0.02``
+    | type: ``float`` | ``int``, optional, default: ``0.02``
| argument path: ``loss[ener]/start_pref_e``

The prefactor of energy loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the energy label should be provided by the file energy.npy in each data system. If both start_pref_energy and limit_pref_energy are set to 0, then the energy will be ignored.

.. _`loss[ener]/limit_pref_e`:

limit_pref_e:
-    | type: ``int`` | ``float``, optional, default: ``1.0``
+    | type: ``float`` | ``int``, optional, default: ``1.0``
| argument path: ``loss[ener]/limit_pref_e``

The prefactor of energy loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.

.. _`loss[ener]/start_pref_f`:

start_pref_f:
-    | type: ``int`` | ``float``, optional, default: ``1000``
+    | type: ``float`` | ``int``, optional, default: ``1000``
| argument path: ``loss[ener]/start_pref_f``

The prefactor of force loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the force label should be provided by the file force.npy in each data system. If both start_pref_force and limit_pref_force are set to 0, then the force will be ignored.

.. _`loss[ener]/limit_pref_f`:

limit_pref_f:
-    | type: ``int`` | ``float``, optional, default: ``1.0``
+    | type: ``float`` | ``int``, optional, default: ``1.0``
| argument path: ``loss[ener]/limit_pref_f``

The prefactor of force loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.

.. _`loss[ener]/start_pref_v`:

start_pref_v:
-    | type: ``int`` | ``float``, optional, default: ``0.0``
+    | type: ``float`` | ``int``, optional, default: ``0.0``
| argument path: ``loss[ener]/start_pref_v``

The prefactor of virial loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the virial label should be provided by the file virial.npy in each data system. If both start_pref_virial and limit_pref_virial are set to 0, then the virial will be ignored.

.. _`loss[ener]/limit_pref_v`:

limit_pref_v:
-    | type: ``int`` | ``float``, optional, default: ``0.0``
+    | type: ``float`` | ``int``, optional, default: ``0.0``
| argument path: ``loss[ener]/limit_pref_v``

The prefactor of virial loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.

.. _`loss[ener]/start_pref_ae`:

start_pref_ae:
-    | type: ``int`` | ``float``, optional, default: ``0.0``
+    | type: ``float`` | ``int``, optional, default: ``0.0``
| argument path: ``loss[ener]/start_pref_ae``

The prefactor of atom_ener loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the atom_ener label should be provided by the file atom_ener.npy in each data system. If both start_pref_atom_ener and limit_pref_atom_ener are set to 0, then the atom_ener will be ignored.

.. _`loss[ener]/limit_pref_ae`:

limit_pref_ae:
-    | type: ``int`` | ``float``, optional, default: ``0.0``
+    | type: ``float`` | ``int``, optional, default: ``0.0``
| argument path: ``loss[ener]/limit_pref_ae``

The prefactor of atom_ener loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
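All of the start/limit pairs above follow the same pattern: the prefactor moves from its start value toward its limit value as training proceeds. A sketch of the commonly used learning-rate-tied interpolation follows; the exact formula is an assumption for illustration, not quoted from the DeePMD-kit source:

```python
def loss_prefactor(start_pref: float, limit_pref: float,
                   lr: float, start_lr: float) -> float:
    """Interpolate a loss prefactor between its start and limit values.

    Assumes the prefactor tracks the learning-rate decay: at lr == start_lr
    it equals start_pref, and it approaches limit_pref as lr -> 0.
    """
    return limit_pref + (start_pref - limit_pref) * lr / start_lr

# Example with the documented force-loss defaults (start_pref_f=1000, limit_pref_f=1.0):
print(loss_prefactor(1000.0, 1.0, lr=1.0e-3, start_lr=1.0e-3))  # 1000.0 at the start
print(loss_prefactor(1000.0, 1.0, lr=1.0e-5, start_lr=1.0e-3))  # ~11.0 late in training
```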

.. _`loss[ener]/relative_f`:

relative_f:
-    | type: ``NoneType`` | ``float``, optional
+    | type: ``float`` | ``NoneType``, optional
| argument path: ``loss[ener]/relative_f``

If provided, the relative force error will be used in the loss. The force difference will be normalized by the magnitude of the label force, with a shift given by `relative_f`, i.e. DF_i / ( || F || + relative_f ), where DF denotes the difference between prediction and label and || F || denotes the L2 norm of the label.
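The normalization described above is simple to state in code. A sketch of the documented formula, not the DeePMD-kit implementation itself:

```python
import numpy as np

def relative_force_diff(f_pred: np.ndarray, f_label: np.ndarray,
                        relative_f: float) -> np.ndarray:
    """DF_i / (||F|| + relative_f), as described in the docstring above.

    f_pred, f_label: force arrays of shape (natoms, 3).
    """
    df = f_pred - f_label                                    # DF, per component
    norm = np.linalg.norm(f_label, axis=-1, keepdims=True)   # ||F|| per atom
    return df / (norm + relative_f)
```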
@@ -1179,15 +1179,15 @@ loss:
.. _`loss[tensor]/pref`:

pref:
-    | type: ``int`` | ``float``
+    | type: ``float`` | ``int``
| argument path: ``loss[tensor]/pref``

The prefactor of the weight of the global loss. It should be larger than or equal to 0. It controls the weight of the loss corresponding to the global label, i.e. `polarizability.npy` or `dipole.npy`, whose shape should be #frames x [9 or 3]. If it is larger than 0.0, this npy file should be included.

.. _`loss[tensor]/pref_atomic`:

pref_atomic:
-    | type: ``int`` | ``float``
+    | type: ``float`` | ``int``
| argument path: ``loss[tensor]/pref_atomic``

The prefactor of the weight of the atomic loss. It should be larger than or equal to 0. It controls the weight of the loss corresponding to the atomic label, i.e. `atomic_polarizability.npy` or `atomic_dipole.npy`, whose shape should be #frames x ([9 or 3] x #selected atoms). If it is larger than 0.0, this npy file should be included. Both `pref` and `pref_atomic` should be provided, and either can be set to 0.0.
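The expected label shapes above are easy to get wrong. A small sketch for the polarizability case; the frame and selected-atom counts are illustrative, not taken from this PR:

```python
import numpy as np

nframes, nsel = 10, 4  # illustrative sizes

# Shapes the tensor loss expects:
global_label = np.zeros((nframes, 9))          # polarizability.npy, used when pref > 0
atomic_label = np.zeros((nframes, 9 * nsel))   # atomic_polarizability.npy, used when pref_atomic > 0
```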
@@ -1408,7 +1408,7 @@ training:
.. _`training/disp_file`:

disp_file:
-    | type: ``str``, optional, default: ``lcueve.out``
+    | type: ``str``, optional, default: ``lcurve.out``
| argument path: ``training/disp_file``

The file for printing the learning curve.
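Once training runs, this file is a whitespace-separated table with a commented header line, so it can be loaded directly. A sketch; the exact column names depend on the configured loss terms and are an assumption here:

```python
import numpy as np

# genfromtxt reads the commented '# step ...' header line as field names.
data = np.genfromtxt("lcurve.out", names=True)
print(data.dtype.names)   # e.g. ('step', 'rmse_val', 'rmse_trn', ...)
print(data["step"][-1])   # last recorded training step
```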