Fix Chakra Errors #185
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
#160 was closed accidentally; this PR was made instead. Looks good to me!
# In NCCL, all-to-all communication is implemented using point-to-point
# communications. More details can be found here:
# https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/p2p.html
if "nccl:all_to_all" in keyword:
This string name "nccl:all_to_all" is hardcoded below: there is a possibility that once this changes in PyTorch, this breaks.
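One way to reduce that risk, as a purely illustrative sketch (the constant and helper names below are hypothetical, not Chakra code), is to keep the literal in a single definition:

```python
# Purely illustrative; the constant and helper names are hypothetical, not Chakra code.
# Keeping the PyTorch/NCCL-specific keyword in one definition means a future rename on
# the PyTorch side requires only a one-line change here.
NCCL_ALL_TO_ALL_KEYWORD = "nccl:all_to_all"

def is_nccl_all_to_all(keyword: str) -> bool:
    """Return True if the recorded keyword marks an NCCL all-to-all communication."""
    return NCCL_ALL_TO_ALL_KEYWORD in keyword
```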
c10d issues multiple send & receive calls, but we believe these are coalesced into one ncclDevKernel_SendReceive kernel, although we are not yet sure exactly where:
c10d::allToall: https://github.com/pytorch/pytorch/blob/7c71ab1d40992ea2660bb124152e95b7a9a5119d/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L4912
c10d::collective::fn: https://github.com/pytorch/pytorch/blob/7c71ab1d40992ea2660bb124152e95b7a9a5119d/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L3353C8-L3353C32
torch::cuda::nccl::all2all: https://github.com/pytorch/pytorch/blob/7c71ab1d40992ea2660bb124152e95b7a9a5119d/torch/csrc/cuda/nccl.cpp#L965
tushar-krishna
left a comment
This first issue (All-to-All) seems to be caused by the coalescing of multiple send-recvs (for All-to-All) into a single ncclDevKernel_SendRecv kernel.
The confusion was that the name "SendRecv" implied a single src-dst pair. However, we confirmed that one "SendRecv" NCCL kernel can be the result of fusing multiple send and receive messages.
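As a hedged, self-contained illustration (not code from this PR; it assumes a single node with several NVIDIA GPUs, the NCCL backend, and a torchrun launch), the coalescing can be observed by profiling one all_to_all_single call and inspecting the CUDA kernel names, which typically show a single ncclDevKernel_SendReceive entry rather than one kernel per peer:

```python
# Hedged illustration, not code from this PR. Kernel names may differ across
# PyTorch/NCCL versions.
import torch
import torch.distributed as dist
from torch.profiler import profile, ProfilerActivity

def main() -> None:
    dist.init_process_group(backend="nccl")   # RANK/WORLD_SIZE come from torchrun
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    torch.cuda.set_device(rank)               # single-node assumption: global rank == local rank

    send = torch.full((world_size,), float(rank), device="cuda")
    recv = torch.empty_like(send)

    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        dist.all_to_all_single(recv, send)    # one element exchanged with every peer
        torch.cuda.synchronize()

    if rank == 0:
        # The GPU side typically shows a single coalesced ncclDevKernel_SendReceive
        # entry rather than one kernel per send/recv pair.
        print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # e.g. torchrun --nproc_per_node=<num_gpus> observe_all_to_all.py
```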
tushar-krishna
left a comment
Looks good to merge!
Summary
This PR addresses multiple issues in the Chakra converter:
1. Improper Handling of NCCL All-to-All Communication
Chakra incorrectly distinguishes between point-to-point and collective communication. In NCCL, all-to-all is implemented using point-to-point communications, but Chakra's current logic treats these as distinct, leading to an incorrect type for PyTorchNode. More details on NCCL point-to-point can be found here.
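A minimal, hedged sketch of the resulting classification idea; the constants and the function name are illustrative, and only the `"nccl:all_to_all" in keyword` check mirrors the actual change in this PR:

```python
# Illustrative sketch only; the constants and the function name are hypothetical. Only
# the `"nccl:all_to_all" in keyword` check mirrors the actual change in this PR.
COMM_ALL_TO_ALL = "ALL_TO_ALL"
COMM_SEND_RECV = "SEND_RECV"

def classify_nccl_comm(keyword: str) -> str:
    # NCCL implements all-to-all with point-to-point sends/receives, so the kernel
    # alone looks like p2p traffic; the recorded keyword disambiguates it:
    # https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/p2p.html
    if "nccl:all_to_all" in keyword:
        return COMM_ALL_TO_ALL
    return COMM_SEND_RECV
```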
2. Logging Inconsistency
There was a mismatch in logging levels: sync dependencies were logged via logging.info, while other dependencies used logging.debug. This PR resolves the inconsistency by standardizing the logging approach.
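As a tiny illustrative sketch (the logger name and message format are placeholders), sync dependencies would now go through the same debug-level call as every other dependency:

```python
import logging

# Illustrative only: logger name and message format are placeholders. Sync dependencies
# now use the same debug level as every other dependency.
logger = logging.getLogger("trace_linker")

def log_dependency(kind: str, parent_id: int, child_id: int) -> None:
    logger.debug("Adding %s dependency: %d -> %d", kind, parent_id, child_id)
```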
3. False Positive Dependencies from HTA
HTA returns false positives for sync dependencies, leading to invalid later op -> earlier op dependencies. This causes Chakra to fail on certain traces; the converter was found to encounter critical failures.
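One way such edges could be filtered, shown here as a hedged sketch; the (parent, child) tuple layout and the "ts" timestamp field are assumptions about the trace records, not Chakra's actual data structures:

```python
# Hedged sketch: the (parent, child) tuples and the "ts" timestamp field are assumptions
# about the trace records, not Chakra's actual data structures. The idea is to drop any
# HTA sync dependency whose parent starts after its child, i.e. a later op -> earlier op
# edge that can only be a false positive.
def filter_sync_deps(sync_deps: list[tuple[dict, dict]]) -> list[tuple[dict, dict]]:
    valid = []
    for parent, child in sync_deps:
        if parent["ts"] > child["ts"]:
            continue  # skip the backward (false-positive) edge
        valid.append((parent, child))
    return valid
```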
4. Update trace_linker to use external_id for finding GPU op's parent CPU op
Many GPU operations were matched with the wrong parent CPU operation during trace linking. This PR solves the problem by using external_id instead of ev_idx.
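A hedged sketch of the linking idea; the dict-based records and the "external_id"/"id" field names are assumptions for illustration, not Chakra's actual types:

```python
# Hedged sketch: the dict-based records and the "external_id"/"id" field names are
# assumptions for illustration, not Chakra's actual types. CPU ops are indexed by
# external_id and each GPU op's parent is resolved through that index instead of ev_idx.
def link_gpu_ops_to_cpu_ops(cpu_ops: list[dict], gpu_ops: list[dict]) -> dict[int, dict]:
    cpu_by_external_id = {op["external_id"]: op for op in cpu_ops}
    parent_of = {}
    for gpu_op in gpu_ops:
        parent = cpu_by_external_id.get(gpu_op["external_id"])
        if parent is not None:
            parent_of[gpu_op["id"]] = parent
    return parent_of
```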
5. Handling HTA Errors in Chakra
The trace linker was terminating unexpectedly due to errors in HTA. Although this may stem from trace inconsistencies, the issue does not occur when HTA is excluded. Chakra was updated to handle these errors by raising exceptions instead of terminating the trace linker.
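A minimal, self-contained sketch of the error-handling pattern; TraceLinkerError and run_optional_stage are hypothetical names, not identifiers from this PR:

```python
from typing import Any, Callable

# Minimal sketch; TraceLinkerError and run_optional_stage are hypothetical names, not
# identifiers from this PR. Any HTA failure is converted into a descriptive exception
# that the caller can catch, instead of an unrelated traceback terminating the linker.
class TraceLinkerError(Exception):
    """Raised when an optional analysis stage (such as HTA) fails."""

def run_optional_stage(stage: Callable[[], Any], stage_name: str) -> Any:
    try:
        return stage()
    except Exception as exc:
        raise TraceLinkerError(
            f"{stage_name} failed; the input trace may be inconsistent."
        ) from exc
```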
6. Proper Encoding of pg_name in Collective Operations
Identified an issue where SendRecv, Reduce-Scatter, and All-Gather operations do not correctly encode pg_name following updates on the PyTorch side. Modified Chakra to ensure proper encoding of pg_name in these collective operations.
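A hedged sketch of what the encoding might look like; the operation names and the "pg_name"/"collective_name" keys are assumptions for illustration, not Chakra's actual schema:

```python
# Hedged sketch: the operation names and the "pg_name"/"collective_name" keys are
# assumptions for illustration, not Chakra's actual schema. The point is simply that the
# process-group name is now carried through for SendRecv, Reduce-Scatter, and All-Gather
# as well, rather than only for the collectives that already encoded it.
PG_AWARE_COLLECTIVES = {"SendRecv", "Reduce-Scatter", "All-Gather"}

def encode_pg_name(node_attrs: dict, trace_event: dict) -> None:
    if trace_event.get("collective_name") in PG_AWARE_COLLECTIVES:
        pg_name = trace_event.get("pg_name")
        if pg_name is not None:
            node_attrs["pg_name"] = pg_name
```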
Test Plan
I tested the fixes using Mixtral 8x3B traces collected with the NeMo framework (NVIDIA).
traces_device_0.zip