[Bug] NCCL all_reduce failed with A800 when NCCL_ALGO uses Ring #1055
Some reference: https://github.com/NVIDIA/nccl/issues/446
There is no bug. Ring, by nature, has worse precision than, say, Tree, because of the order in which it performs the sums (see the issue referenced above). So if your use case is at the limit of convergence, using Ring may cause higher imprecision, and that could make a difference for you (when it doesn't for others).
Hi @sjeaugey, thanks for your kind reply! Sure. When we set
I still have a question. For the A100 platform (which has 24 NVLink channels), why does limiting From my observation, using A800 or just limiting
To maximize bandwidth, each channel goes through a different path, and therefore performs operations in a different order. Changing the number of channels will change which offset uses which path, and could make things better or worse depending on how lucky (or unlucky) you are.
I'm curious about this issue because we encountered the same problem before. Since the summation orders of Ring and Tree are different, I totally understand that the reduced results may differ. However, why do we think the precision of Ring is always worse than Tree's? Is there any theory to explain this?
Ring adds one value to the sum of n values. So if all values have the same order of magnitude, the later values will be very small compared to the sum so far, meaning part of the floating-point value will be lost. With a binary tree, we add two sums of equal weight together, so the precision is generally better.
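The effect of accumulation order can be simulated on the host. Below is a small illustrative Python sketch (my own toy code, not NCCL's implementation) comparing ring-style sequential accumulation against a pairwise tree-style reduction, with `math.fsum` as a correctly rounded reference:

```python
import math
import random

def ring_sum(values):
    # Ring-style: one running partial sum absorbs each new value in turn,
    # so late values are tiny relative to the accumulated sum.
    total = 0.0
    for v in values:
        total += v
    return total

def tree_sum(values):
    # Tree-style: pairwise reduction combines partial sums of similar
    # magnitude, which generally bounds the rounding error better.
    vals = list(values)
    while len(vals) > 1:
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]

random.seed(0)
data = [random.uniform(0.0, 1.0) for _ in range(1 << 20)]
exact = math.fsum(data)  # correctly rounded reference sum

ring_err = abs(ring_sum(data) - exact)
tree_err = abs(tree_sum(data) - exact)
print(f"ring error: {ring_err:.3e}, tree error: {tree_err:.3e}")
```

With float64 on the host the absolute difference is tiny; in fp16/bf16 mixed-precision training the same ordering effect is proportionally much larger.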
Hi Tuvie, I found some papers and slides discussing the error analysis of floating-point summation. FYI. To give a simple example, such as sum
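A concrete toy example of the magnitude effect (my own illustration, in double precision): `2**53` is the point where consecutive integers stop being representable, so a lone `1.0` added to it is lost, while pairing the small values first preserves them:

```python
# 2**53 is where double precision can no longer represent consecutive
# integers, so adding a single 1.0 to it has no effect.
big = 2.0 ** 53

chain = (big + 1.0) + 1.0   # ring/chain order: each 1.0 is absorbed
pair = big + (1.0 + 1.0)    # tree order: the small values combine first

print(chain == big)         # True: both 1.0s were lost
print(pair == big + 2.0)    # True: 1.0 + 1.0 = 2.0 survives the addition
```

The same inputs, summed in two different orders, differ by 2.0 — exactly the kind of divergence that shows up when the reduction order changes between platforms.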
That makes a lot of sense. Thank you both for your explanations. @sjeaugey @zigzagcai
But I still have another question about the tree algorithm in one DGX with 8 GPUs. According to my NCCL log, the Tree's topology looks like this:

```
 1/-1/-1->0->-1
 2/-1/-1->1->0
 3/-1/-1->2->1
 4/-1/-1->3->2
 5/-1/-1->4->3
 6/-1/-1->5->4
 7/-1/-1->6->5
-1/-1/-1->7->6
```

It seems the tree is equivalent to a ring (within one server), since the right child of each node is always -1.
I have the same question, since I have done the same experiment on one node with 8 GPUs and got the same result. To the best of my knowledge of NCCL (please correct me if there is any misunderstanding), the tree structure is only used inter-node. Intra-node, it's actually a chain.
Tree all_reduce is implemented with the computation pattern Reference issues:
So it seems the question becomes: why does reducescatter in a ring have lower precision than reduce in a chain?
Your understanding is correct. And reduce should not have better precision than reducescatter. If you see it work better, it could just be that the reversed order happens to work better by random chance.
I think reducescatter in a ring is also equivalent to a series of chain reduces over different chunks. For example, suppose there are 4 ranks doing reducescatter over 4 chunks of data (c1, c2, c3, c4). Then it is equivalent to doing a chain reduce for each of these 4 chunks separately. The only difference is that the chain order differs from chunk to chunk.
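This equivalence can be checked numerically. The sketch below is illustrative Python of my own (function names and the step schedule are a textbook ring reduce-scatter, not NCCL's actual code): it simulates the ring and verifies that each chunk's result is bitwise identical to a chain reduce over a rotated rank order.

```python
import random

def ring_reduce_scatter(rank_chunks):
    """Simulate a ring reduce-scatter; rank_chunks[r][c] is chunk c on rank r.

    Returns {chunk_index: fully reduced value}. This models only the order
    of additions, which is what determines the rounding behavior.
    """
    n = len(rank_chunks)
    partial = [list(chunks) for chunks in rank_chunks]
    for s in range(n - 1):
        # In step s, rank r forwards its partial sum of chunk (r - s) % n
        # to rank (r + 1) % n, which adds its own local contribution.
        sends = [(r, (r - s) % n, partial[r][(r - s) % n]) for r in range(n)]
        for r, c, v in sends:
            dst = (r + 1) % n
            partial[dst][c] = partial[dst][c] + v
    # After n - 1 steps, rank r owns the fully reduced chunk (r + 1) % n.
    return {(r + 1) % n: partial[r][(r + 1) % n] for r in range(n)}

def chain_reduce(values):
    # Chain reduce using the same local-plus-incoming addition order.
    total = values[0]
    for v in values[1:]:
        total = v + total
    return total

random.seed(1)
n = 4
rank_chunks = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
out = ring_reduce_scatter(rank_chunks)
for c in range(n):
    # Chunk c travels around the ring starting from rank c.
    chain = chain_reduce([rank_chunks[(c + k) % n][c] for k in range(n)])
    assert out[c] == chain  # bitwise identical: same additions, same order
print("ring reducescatter matches per-chunk chain reduces")
```

So the two primitives perform the same additions in the same order per chunk; only the starting rank of each chain differs.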
I don't think an experiment on one node with 8 GPU cards can support the conclusion that the tree algorithm has better precision than the ring algorithm. In fact, in the one-node, 8-GPU scenario, the summation order is similar between the two algorithms. The minimal reproduction steps we provided are just meant to show how the precision can be aligned between A100/A800, and ultimately to give some insight into the imprecision problem in multi-node LLaMA training on the A800 platform. A significant precision difference only appears in multi-node training scenarios (where tree all_reduce has better precision than ring all_reduce by nature), since the operands in tree all_reduce are more balanced than in ring all_reduce.
BTW, I don't know if there is any I guess (perhaps not right) that the reason why some models cannot converge normally on |
Hello
We found a bug in all_reduce on the A800 GPU when NCCL_ALGO uses Ring, and we can provide minimal reproduction steps.
We conducted comparative experiments on the A100 and A800 platforms, and found that the model converges on the A100 platform but not on the A800 platform.
The minimal reproduction steps are shown below:

As expected, the loss on A800 should be the same as on A100. However, while we obtain the same loss when the backend is set to `gloo`, the loss output is inconsistent when the backend is set to `nccl`.

Furthermore, we found that if `NCCL_ALGO=Tree` is set, the loss remains consistent. However, if `NCCL_ALGO=Ring` is set, or nothing is set, the loss is not consistent between A100/A800.

Additionally, when we use 8 nodes with an IB connection, one GPU card per node, and set `NCCL_ALGO=Ring`, the loss stays consistent.

Therefore, we suspect there might be a bug in the current all_reduce implementation when `NCCL_ALGO=Ring` on the A800 platform, and this bug might somehow be related to the number of NVLink channels.

Note: The A800 is a restricted version of the A100 GPU. The only difference between A100/A800 is the number of NVLink channels: the A100 has 24 channels; the A800 has 16.