Run multi_adapter_rl_v2.py with multiple GPUs error #820
Hi @ASY246, can you replace
raw_rewards = ppo_trainer.model.compute_reward_score(**inputs)
with
raw_rewards = ppo_trainer.accelerator.unwrap_model(model).compute_reward_score(**inputs)
and let me know if this works?
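For context, here is a minimal, torch-free sketch of why the unwrap is needed (this is not the actual trl/accelerate code; `DDPWrapper` and the local `unwrap_model` are illustrative stand-ins): when launched with accelerate on multiple GPUs, the model is wrapped in a DistributedDataParallel-style wrapper that forwards only the forward pass, so custom methods such as `compute_reward_score` are reachable only on the inner model.

```python
class InnerModel:
    """Stand-in for the PEFT model that defines a custom method."""
    def compute_reward_score(self, x):
        return 2 * x  # toy "reward"

class DDPWrapper:
    """Toy analogue of torch's DistributedDataParallel: it keeps the
    real model under .module and does not expose its custom methods."""
    def __init__(self, module):
        self.module = module

def unwrap_model(model):
    """Toy analogue of accelerator.unwrap_model: peel wrappers until
    the innermost model is reached."""
    while hasattr(model, "module"):
        model = model.module
    return model

wrapped = DDPWrapper(InnerModel())
print(hasattr(wrapped, "compute_reward_score"))       # False on the wrapper
print(unwrap_model(wrapped).compute_reward_score(3))  # 6 on the inner model
```

This is why a single-GPU run (no wrapper) finds the method while a multi-GPU run raises an attribute error on the wrapped model.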
@younesbelkada Thanks for your response. I replaced the line as suggested, but I still hit an error: it seems some modules are not included in the loss calculation. Why does the script work when I run it on a single GPU, with the error only appearing in multi-GPU mode?
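This symptom matches DistributedDataParallel's unused-parameter check: DDP's gradient reducer expects every registered parameter to receive a gradient each step so the cross-rank all-reduce can complete, whereas a single-process run has no such synchronization and never notices unused modules. A toy, torch-free sketch of that difference (`ddp_style_allreduce` is a hypothetical stand-in, not a real API):

```python
def ddp_style_allreduce(param_names, grads_produced):
    """Toy model of DDP's gradient reducer: every registered parameter
    must report a gradient before buckets can be synced across ranks."""
    missing = [p for p in param_names if p not in grads_produced]
    if missing:
        raise RuntimeError(
            "some parameters did not receive gradients: " + ", ".join(missing)
        )
    return "all-reduce ok"

params = ["base.weight", "policy_adapter.weight", "reward_adapter.weight"]

# Single-process run: no reducer is involved, so an unused adapter goes
# unnoticed.  Multi-GPU run: a module that gets no gradients during the
# PPO step makes the reducer raise, mirroring the real DDP error.
try:
    ddp_style_allreduce(params, {"base.weight", "policy_adapter.weight"})
except RuntimeError as err:
    print(err)  # some parameters did not receive gradients: reward_adapter.weight
```

In real torch code, the usual escape hatch for legitimately unused parameters is constructing DistributedDataParallel with `find_unused_parameters=True` (accelerate exposes this via `DistributedDataParallelKwargs`); whether that is the right fix here depends on how the trainer uses the adapters.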
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
In the examples folder, I run the script multi_adapter_rl_v2.py successfully on a single GPU with
python3 multi_adapter_rl_v2.py
But when I run it on multiple GPUs with
accelerate launch --config_file=../accelerate_configs/multi_gpu.yaml multi_adapter_rl_v2.py
I get the following errors:
So maybe the model wrapping is changed in multi-GPU mode?
Here is the Python script I used: