
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. #586

Open
18445864529 opened this issue Nov 16, 2023 · 1 comment


@18445864529

When trying to fine-tune BLIP-2 with caption_coco_ft.yaml, I got the following error:

  File "/data/a/bowenz/LAVIS/lavis/tasks/base_task.py", line 222, in _train_inner_loop                                      
loss, loss_dict = self.train_step(model=model, samples=samples)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                       
File "/data/a/bowenz/LAVIS/lavis/tasks/base_task.py", line 64, in train_step                                              
output = model(samples)
             ^^^^^^^^^^^^^^                                                                                               
File "/data/a/bowenz/anaconda3/envs/lavis/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl                                                                                                              
return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                               
File "/data/a/bowenz/anaconda3/envs/lavis/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl                                                                                                                      
return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                                  
File "/data/a/bowenz/anaconda3/envs/lavis/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1515, in forward                                                                                                                   
inputs, kwargs = self._pre_forward(*inputs, **kwargs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                 
File "/data/a/bowenz/anaconda3/envs/lavis/lib/python3.11/site-packages/torch/nn/parallel/distributed.py", line 1409, in _pre_forward
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one.  This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
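For context, find_unused_parameters is a constructor argument of the DDP wrapper. The sketch below is not the LAVIS code (model, device, and local_rank are assumed to come from the training script; LAVIS does the wrapping inside its runner), it only shows where the flag the message refers to normally goes:

import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# Sketch only: enabling unused-parameter detection when wrapping a model in DDP.
# `model`, `device`, and `local_rank` are assumed to be defined by the training script.
model = model.to(device)
ddp_model = DDP(
    model,
    device_ids=[local_rank],
    find_unused_parameters=True,  # detect parameters that never contribute to the loss
)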

After setting find_unused_parameters=True and TORCH_DISTRIBUTED_DEBUG=DETAIL, I got this traceback instead:

RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 511 with name visual_encoder.blocks.38.mlp.fc2.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
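For reference, the two workarounds the message suggests look roughly like this. This is a sketch only, not the LAVIS code; it assumes the "marked as ready twice" comes from reentrant gradient checkpointing inside the ViT blocks, and `model`, `local_rank`, `block`, and `x` are placeholders:

from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.checkpoint import checkpoint

# Option 1: tell DDP the autograd graph does not change between iterations.
# `_set_static_graph()` is the (private) API the error message refers to.
ddp_model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)
ddp_model._set_static_graph()

# Option 2: use non-reentrant activation checkpointing, which cooperates better
# with DDP than the default reentrant variant. `block` and `x` stand in for a
# ViT block and its input; in LAVIS the checkpoint call lives in eva_vit.py.
def checkpointed_block(block, x):
    return checkpoint(block, x, use_reentrant=False)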

Could someone please offer some ideas on how to solve this?

@18445864529
Author

This can be worked around temporarily by either disabling gradient checkpointing (though the memory requirement increases dramatically) or by training on a single card. However, single-card training then fails with another error:
RuntimeError: Function 'SoftmaxBackward0' returned nan values in its 0th output.
which comes from the line attn = attn.softmax(dim=-1) in the forward function of eva_vit.py.
It always happens after a certain number of iterations (after 450/17710 in the first epoch).
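One mitigation I have seen suggested for NaNs coming out of SoftmaxBackward0 under fp16 autocast is to compute that softmax in float32 and cast back afterwards. A sketch only, not the original eva_vit.py code:

import torch

def stable_attention_softmax(attn_scores: torch.Tensor) -> torch.Tensor:
    # Run the softmax in float32 even under fp16/bf16 autocast, then cast the
    # result back to the original dtype; this avoids overflow/underflow in the
    # half-precision softmax that can produce NaNs in the backward pass.
    return attn_scores.float().softmax(dim=-1).to(attn_scores.dtype)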

I simply used the provided code and script for COCO fine-tuning, so I don't understand why I am getting all these errors. Could someone please help? @LiJunnan1992
