According to #1294, IPO needs to compute the average logp over the sequence dimension when comparing the chosen response and the rejected response.
However, the new data format in v0.8.2 (for example, compare https://huggingface.co/datasets/trl-internal-testing/hh-rlhf-trl-style with the older https://huggingface.co/datasets/trl-internal-testing/Anthropic-hh-rlhf-processed) treats only the first message in a chat as the prompt; all the remaining messages become part of the chosen or rejected response. As a result, the "prompt" turns actually appear in the chosen and rejected columns, and they are identical between the two.
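To make the difference concrete, here is a hypothetical example (illustrative only, not actual rows from either dataset) contrasting the two layouts for a multi-turn conversation:

```python
# Hypothetical example contrasting the two dataset layouts for a
# multi-turn conversation (the message contents are made up).

# Older processed format: "prompt" holds the entire conversation prefix,
# and chosen/rejected hold only the final assistant response.
old_style = {
    "prompt": "Human: A\nAssistant: B\nHuman: C\nAssistant:",
    "chosen": " good reply",
    "rejected": " bad reply",
}

# v0.8.2 chat format: only the first user message is the prompt, so the
# intermediate turns are repeated inside both chosen and rejected.
new_style = {
    "prompt": [{"role": "user", "content": "A"}],
    "chosen": [
        {"role": "assistant", "content": "B"},
        {"role": "user", "content": "C"},
        {"role": "assistant", "content": "good reply"},
    ],
    "rejected": [
        {"role": "assistant", "content": "B"},
        {"role": "user", "content": "C"},
        {"role": "assistant", "content": "bad reply"},
    ],
}

# The intermediate turns appear verbatim in both columns:
shared = [m for m in new_style["chosen"] if m in new_style["rejected"]]
print(len(shared))  # 2 shared "prompt" turns duplicated in both columns
```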
This is not a problem for DPO, since the common prefix of the two columns contributes 0 to the loss. For IPO, however, the loss is averaged across the sequence length, and with the new data format the sequence includes all of these "prompt" tokens, which contribute to the average.
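A small numeric sketch of the argument above, with made-up per-token log-probs: summing (as DPO effectively does before taking the difference) cancels the shared prefix, while averaging over the full sequence length (IPO-style) dilutes the margin with the shared "prompt" tokens:

```python
# Illustrative sketch with hypothetical per-token log-probs, showing how a
# shared prefix affects sum-based (DPO-style) vs. mean-based (IPO-style)
# log-prob aggregation.

prompt_logps = [-1.0, -1.0, -1.0]   # tokens shared by both sequences
chosen_resp = [-0.5, -0.5]          # response-only log-probs (chosen)
rejected_resp = [-2.0, -2.0]        # response-only log-probs (rejected)

chosen_full = prompt_logps + chosen_resp
rejected_full = prompt_logps + rejected_resp

# Sum-based margin: the shared prefix cancels in the difference,
# so including it is harmless.
sum_margin_full = sum(chosen_full) - sum(rejected_full)
sum_margin_resp = sum(chosen_resp) - sum(rejected_resp)
print(sum_margin_full, sum_margin_resp)  # 3.0 3.0 -- identical

# Mean-based margin: averaging over the full length shrinks the margin,
# because the shared prompt tokens contribute to both denominators.
def mean(xs):
    return sum(xs) / len(xs)

mean_margin_resp = mean(chosen_resp) - mean(rejected_resp)  # intended: 1.5
mean_margin_full = mean(chosen_full) - mean(rejected_full)  # diluted: 0.6
print(mean_margin_resp, mean_margin_full)
```

The longer the shared conversation prefix, the closer the mean-based margin is pulled toward zero, which is exactly why putting the prompt turns into the chosen/rejected columns matters for IPO but not for DPO.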
I imagine this is not intended? I am also wondering why the dataset format changed to put the "prompt" into the chosen and rejected columns.