In generate_dataset.py, there is a --per-sequence-loss arg, which is used in conversation_template.py. This parameter further adjusts the weights based on the length of each response.
I would like to know, when training the OpenChat series models, have you enabled this parameter? What is the impact of this parameter on the training results? Thanks
When this parameter is enabled, losses are averaged on a per-sequence basis; otherwise they are averaged on a per-token basis (same as the HF Trainer). It is disabled by default because it led to worse results in our experiments, making the model worse at longer responses.
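To make the difference concrete, here is a minimal sketch (not the actual OpenChat implementation) of the two averaging schemes. The function names and the toy batch are hypothetical; the point is that per-token averaging lets long responses dominate, while per-sequence averaging gives every response equal weight.

```python
# Hypothetical illustration of per-token vs. per-sequence loss averaging.
# Each inner list holds the per-token losses of one response.

def per_token_loss(token_losses_per_seq):
    # Average over ALL tokens in the batch: long responses contribute
    # more terms, so they dominate the gradient (HF Trainer default).
    all_losses = [l for seq in token_losses_per_seq for l in seq]
    return sum(all_losses) / len(all_losses)

def per_sequence_loss(token_losses_per_seq):
    # First average within each response, then across responses:
    # every response gets equal weight regardless of its length.
    seq_means = [sum(seq) / len(seq) for seq in token_losses_per_seq]
    return sum(seq_means) / len(seq_means)

batch = [
    [1.0, 1.0, 1.0, 1.0],  # long response, 4 tokens
    [4.0],                  # short response, 1 token
]
print(per_token_loss(batch))     # 8.0 / 5  -> 1.6
print(per_sequence_loss(batch))  # (1.0 + 4.0) / 2 -> 2.5
```

Per-sequence averaging down-weights each individual token of a long response, which is consistent with the observation above that enabling it can hurt quality on longer responses.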
Referenced code: openchat/ochat/config/conversation_template.py, line 104 at commit 30da91b.