
Conversation

@ronghanghu ronghanghu commented May 8, 2022

Following #3511, we should also allow users to specify layout pinning in optimizer_step and reduce_gradients, e.g. when they want to use these two functions together with xm.all_gather and xm.reduce_scatter. This PR is a short-term solution for such use cases.

As a long-term solution, it would be great to allow pinning layout in all collective ops simultaneously.

cc: @JackCaoG
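
A minimal sketch of how the option might be used in a training step, assuming the new keyword is named pin_layout (mirroring xm.all_gather and xm.reduce_scatter); the parameter name and its default are assumptions for illustration, not taken from this PR's diff:

```python
# Sketch only: run one training step on an XLA device and disable layout
# pinning in the gradient all-reduce so it composes with unpinned
# xm.all_gather / xm.reduce_scatter calls elsewhere in the same step.
# The `pin_layout` keyword below is an assumed name, mirroring xm.all_gather.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

optimizer.zero_grad()
loss = model(torch.randn(8, 16, device=device)).sum()
loss.backward()

# Hypothetical usage: forward the layout-pinning flag to the all-reduce that
# optimizer_step / reduce_gradients perform on the gradients.
xm.optimizer_step(optimizer, pin_layout=False)
xm.mark_step()
```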

@JackCaoG self-requested a review May 9, 2022 17:39
@JackCaoG left a comment

Thanks!

@JackCaoG merged commit 6e3992a into pytorch:master May 9, 2022
@ronghanghu deleted the optimizer-layout-pinning branch May 9, 2022 19:40