
[BUG] AssertionError: Distributed backend is not initialized. #2168

@chinoll

Description

Describe the bug
AssertionError: Distributed backend is not initialized. Please set dist_init_required to True or initialize before calling deepspeed.initialize()

Expected behavior
The training process should run without errors.

ds_report output

--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
 [WARNING]  please install triton==1.0.0 if you want to use sparse attention
sparse_attn ............ [NO] ....... [NO]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
utils .................. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/xxx/anaconda3/envs/envs/lib/python3.10/site-packages/torch']
torch version .................... 1.12.0
torch cuda version ............... 11.3
torch hip version ................ None
nvcc version ..................... 11.3
deepspeed install path ........... ['/home/xxx/anaconda3/envs/envs/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.7.0, unknown, unknown
deepspeed wheel compiled w. ...... torch 1.12, cuda 11.3

System info (please complete the following information):

  • OS: Ubuntu 20.04.4 LTS x86_64
  • GPU count and types: 2xA100 80G PCIE
  • Interconnects (if applicable): one machine (single node)
  • Python version: 3.10.4

Launcher context
Are you launching your experiment with the deepspeed launcher, MPI, or something else?
deepspeed

Docker context
Are you using a specific docker image that you can share?
Not using docker

Additional context
Traceback (truncated):

  File "/home/xxx/yyy/training.py", line 297, in setup_model_and_optimizer
    model, optimizer, _, lr_scheduler = deepspeed.initialize(
  File "/home/xxx/anaconda3/envs/NLP/lib/python3.10/site-packages/deepspeed/__init__.py", line 121, in initialize
    engine = DeepSpeedEngine(args=args,
  File "/home/xxx/anaconda3/envs/NLP/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 241, in __init__
    dist.init_distributed(dist_backend=self.dist_backend,
  File "/home/xxx/anaconda3/envs/NLP/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 399, in init_distributed
    assert (
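The assertion fires inside `deepspeed.comm.comm.init_distributed` because `torch.distributed` has not been set up by the time `deepspeed.initialize()` runs. A minimal workaround sketch, following the error message's own suggestion (the `args`/`model` names below are placeholders, not from the original script). The `launched_by_deepspeed` helper is a hypothetical check, assuming the `deepspeed` launcher exports the usual `RANK`/`LOCAL_RANK`/`WORLD_SIZE` variables:

```python
import os

def launched_by_deepspeed(env=os.environ):
    # The deepspeed launcher exports these variables for each worker;
    # if they are missing, the distributed backend cannot initialize.
    return all(k in env for k in ("RANK", "LOCAL_RANK", "WORLD_SIZE"))

# Workaround sketch (hedged; assumes deepspeed 0.7.x API):
# either initialize the backend explicitly before deepspeed.initialize(),
# or ask deepspeed.initialize() to do it via dist_init_required=True.
#
#   import deepspeed
#   deepspeed.init_distributed()  # sets up torch.distributed first
#   model_engine, optimizer, _, lr_scheduler = deepspeed.initialize(
#       args=args,                      # placeholder argparse namespace
#       model=model,                    # placeholder nn.Module
#       model_parameters=model.parameters(),
#       dist_init_required=True,
#   )
```

If `launched_by_deepspeed()` returns False even under the `deepspeed` launcher, the rank environment variables are not reaching the worker processes, which would also produce this assertion.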

Metadata

Labels

    bug (Something isn't working)
