
Improve logging for PPO + Docs page #243

Merged

lvwerra merged 7 commits into main from ppo_logging

Mar 24, 2023
Conversation

@natolambert
Contributor

I wanted to make the difference between the reward model score and the KL penalty clearer in experiments. This PR does that :).
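To illustrate the distinction this PR is about, here is a minimal sketch (not TRL's actual implementation; the function and stat-key names are illustrative) of how a PPO trainer for language models can compute the total per-token reward as the reward-model score minus a KL penalty, and log the two components separately:

```python
# Illustrative sketch, NOT TRL's real code: in PPO for language models the
# per-token reward is a KL penalty against the reference model on every
# token, plus the reward-model score added on the final token. Logging the
# components separately shows how much of the reward each part contributes.
def compute_rewards(score, logprobs, ref_logprobs, kl_coef):
    """Return per-token rewards plus separate components for logging."""
    # Per-token KL estimate: logprob under policy minus logprob under reference.
    kls = [lp - ref_lp for lp, ref_lp in zip(logprobs, ref_logprobs)]
    # KL penalty part of the reward (negative when the policy drifts).
    non_score_rewards = [-kl_coef * kl for kl in kls]
    rewards = list(non_score_rewards)
    rewards[-1] += score  # reward-model score only on the last token
    # Hypothetical stat keys, logged separately so the score and the
    # KL penalty are not conflated in experiment dashboards.
    stats = {
        "env/reward_score": score,
        "objective/kl": sum(kls),
        "ppo/mean_non_score_reward": sum(non_score_rewards) / len(kls),
    }
    return rewards, stats

rewards, stats = compute_rewards(
    score=1.0,
    logprobs=[-1.0, -0.5, -0.2],
    ref_logprobs=[-1.1, -0.6, -0.4],
    kl_coef=0.2,
)
```

With these toy numbers the KL penalty lowers every token's reward slightly, while the score shows up only on the final token, so a dashboard plotting both stats makes the trade-off visible.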

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Mar 22, 2023

The documentation is not available anymore as the PR was closed or merged.

Contributor

@younesbelkada younesbelkada left a comment


Thanks a lot for this!
The proposed changes should fix the failing tests.

Comment thread on trl/trainer/ppo_trainer.py (outdated)
@natolambert
Contributor Author

Ah @younesbelkada, the suggestion overwrote a needed variable. I'll fix it today; shouldn't be too bad :)

@natolambert
Contributor Author

feel free to merge when ready / happy @younesbelkada @lvwerra

Member

@lvwerra lvwerra left a comment


Thanks @natolambert!

@lvwerra lvwerra merged commit 404621f into main Mar 24, 2023
@lvwerra lvwerra deleted the ppo_logging branch March 24, 2023 08:34
yxliu-TAMU pushed a commit to mincheolseong/ECEN743-GRPO-Project-Proposal that referenced this pull request Apr 20, 2025
* init pr

* try and fix docpreview

* fix

* try to fix tests

* nit

* fix tests

* convert to tensor

4 participants