During supervised fine-tuning I included a code dataset, which gave the model decent coding ability. When training the reward model in the RLHF stage, do I also need to include the code dataset? If I don't, will the model's coding ability degrade?
Or, after the reward model is trained, can the Reinforcement Learning stage just use the same dataset as fine-tuning?
That should depend on whether your RM is able to give feedback on code data, shouldn't it?
True, but I'm not sure whether using the RM to give feedback on code data actually works. I've noticed that some vertical-domain models apparently don't use domain-specific data to train the RM, yet still use domain-specific data during the reinforcement learning stage.
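For context on what "the RM giving feedback on code" means in practice: reward models in RLHF are typically trained on preference pairs (chosen vs. rejected responses) with a Bradley-Terry style loss, so the RM can only rank code responses reliably if such pairs appeared in its training data. Below is a minimal sketch of that pairwise training objective, using a toy linear reward head over synthetic feature vectors; all names and the data are illustrative assumptions, not code from this repo.

```python
import numpy as np

# Bradley-Terry pairwise loss for reward-model training:
#   loss = -log(sigmoid(r(chosen) - r(rejected)))
# Here r(x) = x @ w is a toy linear reward head; in practice x would be a
# response embedding from the policy/base model.

rng = np.random.default_rng(0)
DIM = 16  # toy feature dimension standing in for a response embedding

w = rng.normal(scale=0.1, size=DIM)  # reward-head weights

def reward(x, w):
    return x @ w

def pairwise_loss_and_grad(x_chosen, x_rejected, w):
    margin = reward(x_chosen, w) - reward(x_rejected, w)
    p = 1.0 / (1.0 + np.exp(-margin))            # sigmoid of margin
    loss = -np.log(p + 1e-12)
    grad = -(1.0 - p) * (x_chosen - x_rejected)  # d(loss)/dw
    return loss, grad

# Synthetic preference data: "chosen" responses lean along a hidden
# quality direction, "rejected" responses lean the other way.
hidden = rng.normal(size=DIM)
x_chosen = rng.normal(size=(64, DIM)) + hidden
x_rejected = rng.normal(size=(64, DIM)) - hidden

lr = 0.05
for _ in range(200):
    g = np.zeros(DIM)
    for xc, xr in zip(x_chosen, x_rejected):
        _, gi = pairwise_loss_and_grad(xc, xr, w)
        g += gi
    w -= lr * g / len(x_chosen)  # averaged gradient step

# After training, chosen responses should score higher than rejected ones.
acc = np.mean(reward(x_chosen, w) > reward(x_rejected, w))
print(acc)
```

The point of the sketch: the RM only learns to prefer "good" responses along directions represented in its preference pairs. If no code-related pairs are included, nothing constrains the reward head on the code axis, which is why the earlier question about including code data in RM training matters.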