[BUG]: RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. #995
Comments
Hi, thanks for your report. May I check whether you installed colossalai from our download page or built it from source?
Yes, I followed the documentation exactly: installed the dependencies with `pip install -r requirements/requirements.txt`, then installed colossalai with `pip install .`
I see, there are two problems.
Thanks, I recreated the env but the error remained. I think it is point 2.
Unfortunately, it is not enabled for now. However, you can use
@FrankLeeeee I tried to use the zero config, and the content of my config.py is:
and then initialized my model with:
Unfortunately, it threw an error again which said:
May I ask for your help again? (^_^)
This line is no longer needed when using zero.
But if I create my model with:
it stops running with this info:
Maybe I should give the machine two model weights, but I only have one.
This seems a bit tricky. I have asked our team to investigate this issue. For now, I will create a temporary branch to enable the torch DDP configuration.
OK, looking forward to using it!
Hi @480284856 , I was a bit busy yesterday and have just created a new branch here. You can install Colossal-AI via the following commands.

```
git clone https://github.com/FrankLeeeee/ColossalAI.git
cd ColossalAI
git checkout hotfix/support-torch-ddp-config
pip install -r requirements/requirements.txt
pip install -v .
```

If the CUDA extension is not installed, it will show logs in the first few lines when doing `pip install -v .`. You can configure torch DDP by adding the following to your config file:

```
torch_ddp = dict(
    find_unused_parameters=True
)
```
Meanwhile, you can try the NAIVE mode of AMP, as it does not use torch DDP either.
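For completeness, a sketch of what a NAIVE AMP section might look like in config.py, assuming the `AMP_TYPE` enum from `colossalai.amp`; field names may differ across Colossal-AI versions, so treat this as illustrative:

```python
# config.py -- illustrative sketch, assuming the AMP_TYPE enum from colossalai.amp;
# check the AMP documentation of your Colossal-AI version for the exact fields.
from colossalai.amp import AMP_TYPE

fp16 = dict(
    mode=AMP_TYPE.NAIVE,   # Colossal-AI's own FP16 implementation; does not go through torch DDP
)
```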
🐛 Describe the bug
After following the ResNet50 example in the tutorial as closely as I could, I got the error mentioned in the title. It is just like my last attempt with HF's accelerate: I can't figure out this complex problem on my first try. Of course I have tried my best to solve it, and the likely reason is the following. I ran:
```
colossalai check -i
```
and its output is:
```
Colossalai should be built with cuda extension to use the FP16 optimizer
If you want to activate cuda mode for MoE, please install with cuda_ext!
CUDA Version: N/A (CUDA_HOME is not set)
PyTorch Version: 1.11.0+cu102
CUDA Version in PyTorch Build: 10.2
PyTorch CUDA Version Match: x
CUDA Extension: x
```
But I tried on a machine with CUDA 11.3 and got the same error.
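As a side note, the mismatch reported by `colossalai check -i` can be reproduced with a few lines of plain PyTorch; nothing Colossal-AI specific, and the versions in the comments are the ones from this report:

```python
# Quick sanity check mirroring the `colossalai check -i` output above;
# uses only standard PyTorch and os APIs.
import os
import torch

print("PyTorch version:           ", torch.__version__)           # 1.11.0+cu102 in this report
print("CUDA version in the build: ", torch.version.cuda)           # 10.2
print("CUDA_HOME:                 ", os.environ.get("CUDA_HOME"))  # None -> CUDA extension cannot be built
print("GPU runtime available:     ", torch.cuda.is_available())
```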
Below is part of my code:
Code of model construction
Here is the error info:
/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py:981: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
torch.arange(indices.shape[0] * indices.shape[1] * num_indices_to_gather, device=indices.device)
/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py:981: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
torch.arange(indices.shape[0] * indices.shape[1] * num_indices_to_gather, device=indices.device)
Traceback (most recent call last):
File "test3_v3.3.py", line 138, in
logist = engine(batch)
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/colossalai/engine/_base_engine.py", line 183, in call
return self.model(*args, **kwargs)
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 947, in forward
Traceback (most recent call last):
File "test3_v3.3.py", line 138, in
logist = engine(batch)
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/colossalai/engine/_base_engine.py", line 183, in call
return self.model(*args, **kwargs)
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 947, in forward
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 197 198
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by making sure all `forward` function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 1: 197 198
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 44596) of binary: /home/guxj/anaconda3/envs/NLP_colossalai/bin/python
Traceback (most recent call last):
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in
main()
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/guxj/anaconda3/envs/NLP_colossalai/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
test3_v3.3.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2022-05-18_01:27:08
host : dlp01
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 44597)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-05-18_01:27:08
host : dlp01
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 44596)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
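The log above points at two parameters (indices 197 and 198) that never receive gradients. Below is a self-contained sketch of the same failure mode and of the `find_unused_parameters=True` workaround in plain PyTorch; the toy module and its dimensions are made up for illustration, and it is meant to be run with torchrun or torch.distributed.launch on 2 GPUs:

```python
# Self-contained illustration of the failure mode in the log above.
# The `unused` head never contributes to the loss, so DDP's reducer waits for
# gradients that never arrive unless find_unused_parameters=True is passed.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

class ToyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.used = torch.nn.Linear(8, 8)
        self.unused = torch.nn.Linear(8, 8)   # plays the role of parameters 197/198 in the real model

    def forward(self, x):
        return self.used(x)                   # self.unused takes no part in the loss

def main():
    dist.init_process_group("nccl")           # environment set up by torchrun / torch.distributed.launch
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    model = DDP(ToyModel().cuda(),
                device_ids=[local_rank],
                find_unused_parameters=True)  # remove this flag to reproduce the RuntimeError
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(2):                        # the error surfaces on the second iteration
        opt.zero_grad()
        loss = model(torch.randn(4, 8, device="cuda")).sum()
        loss.backward()
        opt.step()

if __name__ == "__main__":
    main()
```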
Environment
CUDA: 10.2
PyTorch: 1.11.0
Python: 3.8.13 (miniconda)