This is a placeholder task to make distributed training work with TorchDynamo + AOT Autograd. The main work seems to be ensuring that the relevant collective ops can be traced by AOT Autograd and are properly added to the FX graph by TorchDynamo. A sketch of how one might verify the TorchDynamo side follows.
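A minimal sketch of that kind of check: run a function containing a collective under `torchdynamo.optimize` with a custom backend that prints the captured FX graph, then look for the collective in the output (or observe a graph break around it). The single-rank `gloo` process group, the env-var rendezvous, and the names `inspect_backend` / `allreduce_sum` are illustrative assumptions, not part of this issue.

```python
import os
import torch
import torch.distributed as dist
import torchdynamo  # the standalone package this repo ships


def inspect_backend(gm: torch.fx.GraphModule, example_inputs):
    # Print the captured FX graph; check whether all_reduce appears in it
    # or whether TorchDynamo graph-broke around the collective.
    print(gm.graph)
    return gm.forward  # run the captured graph unmodified


def allreduce_sum(x):
    y = x * 2
    # In-place collective; the op we want to see land in the FX graph.
    dist.all_reduce(y, op=dist.ReduceOp.SUM)
    return y


if __name__ == "__main__":
    # Single-process "world" so the example runs standalone.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    with torchdynamo.optimize(inspect_backend):
        out = allreduce_sum(torch.ones(4))
    print(out)
```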
I expect most of the issues to surface at the AOT Autograd level, because TorchDynamo treats most torch.* ops as black boxes. We should test and verify this, though; the sketch below shows one way to probe the AOT Autograd side in isolation.
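A hedged sketch of probing tracing at the AOT Autograd level, using functorch's `aot_function` with pass-through compilers that just print the forward and backward graphs (the standard "no-op compiler" pattern). It assumes functorch is installed alongside this era of TorchDynamo; `print_graph` and `f` are illustrative names. Swapping a collective into `f` is where we would expect tracing to break down.

```python
import torch
from functorch.compile import aot_function


def print_graph(gm: torch.fx.GraphModule, example_inputs):
    # Dump the traced graph (called once for forward, once for backward),
    # then execute it unchanged.
    print(gm.graph)
    return gm.forward


def f(x):
    return (x * 2).sum()


# Trace f through AOT Autograd; the compilers above let us inspect
# exactly which ops made it into the forward and backward graphs.
aot_f = aot_function(f, fw_compiler=print_graph, bw_compiler=print_graph)
out = aot_f(torch.randn(4, requires_grad=True))
out.backward()
```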