
Attention mask when calculating log ratio for PPO #582

Open
kmy17518 opened this issue Nov 23, 2023 · 0 comments
kmy17518 commented Nov 23, 2023

Hi, I have a question about calculating the log ratio for PPO. I'm very new to this area, and I would be really grateful if you could help me.

In accelerate_ppo_trainer.py, in def make_experience, line 457:
log_ratio = (logprobs - ref_logprobs) * attention_mask[:, :-1]

But according to the comment # NOTE: logprob[i] is (log)prob at which all_token[i+1] was sampled, shouldn't it be attention_mask[:, 1:]?
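
To make sure I'm reading the shift right, here is a tiny standalone example of what I mean (my own toy code with made-up shapes, not from trlx):

import torch
import torch.nn.functional as F

# Toy values: 0 is a pad token, the mask marks real tokens.
tokens = torch.tensor([[11, 12, 13, 0]])
attention_mask = torch.tensor([[1, 1, 1, 0]])
logits = torch.randn(1, 4, 32)  # (batch, seq, vocab), stand-in for model output

# logprobs[:, i] is the log-prob of tokens[:, i + 1], so it has length seq - 1.
logprobs = F.log_softmax(logits[:, :-1], dim=-1).gather(
    -1, tokens[:, 1:].unsqueeze(-1)
).squeeze(-1)

# Since logprobs[:, i] describes the sampled token tokens[:, i + 1], the
# matching mask entry should be attention_mask[:, i + 1]:
masked = logprobs * attention_mask[:, 1:]  # the shift I would expect
# whereas make_experience currently multiplies by attention_mask[:, :-1],
# i.e. the mask entry of the context position instead.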

In accelerate_ppo_trainer.py, in def loss, line 188:

logprobs, values_pred, mask = (
    logprobs[:, start:end],
    values_pred[:, start:end],
    attention_mask[:, start + 1 : end + 1],
)

Here I think the attention mask is shifted the correct way, so why is it different in def make_experience?
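
For what it's worth, this slice seems consistent with the [:, 1:] convention above, since (if I'm slicing correctly) attention_mask[:, start + 1 : end + 1] is just the shifted mask restricted to the response window:

import torch

# Toy check (my own code, not from trlx): use position indices as values so
# the slices are easy to compare by eye.
attention_mask = torch.arange(10).unsqueeze(0)
start, end = 3, 7

a = attention_mask[:, start + 1 : end + 1]  # slice used in def loss
b = attention_mask[:, 1:][:, start:end]     # shift by one, then take the window
assert torch.equal(a, b)                    # both give positions 4..7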

Thanks in advance for your help!
