Process group init fails when training YOLOv8 after successful tuning [Databricks] [single node GPU] #13833

Closed
lbeaucourt opened this issue Jun 20, 2024 · 4 comments
Labels
bug Something isn't working


lbeaucourt commented Jun 20, 2024

Search before asking

  • I have searched the YOLOv8 issues and found no similar bug report.

YOLOv8 Component

Train

Bug

ValueError: Default process group has not been initialized, please make sure to call init_process_group.

File , line 1
----> 1 model.train(data=data_path.replace('dbfs:', '/dbfs') + 'data.yaml', name='yolov8m_seg_train_after_tune', epochs=3, optimizer="AdamW", device=0,
2 cfg="/best_hyperparameters.yaml")
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/ultralytics/engine/model.py:674, in Model.train(self, trainer, **kwargs)
671 pass
673 self.trainer.hub_session = self.session # attach optional HUB session
--> 674 self.trainer.train()
675 # Update model and cfg after training
676 if RANK in {-1, 0}:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/ultralytics/engine/trainer.py:199, in BaseTrainer.train(self)
196 ddp_cleanup(self, str(file))
198 else:
--> 199 self._do_train(world_size)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/ultralytics/engine/trainer.py:318, in BaseTrainer._do_train(self, world_size)
316 if world_size > 1:
317 self._setup_ddp(world_size)
--> 318 self._setup_train(world_size)
320 nb = len(self.train_loader) # number of batches
321 nw = max(round(self.args.warmup_epochs * nb), 100) if self.args.warmup_epochs > 0 else -1 # warmup iterations
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/ultralytics/engine/trainer.py:282, in BaseTrainer._setup_train(self, world_size)
280 # Dataloaders
281 batch_size = self.batch_size // max(world_size, 1)
--> 282 self.train_loader = self.get_dataloader(self.trainset, batch_size=batch_size, rank=RANK, mode="train")
283 if RANK in {-1, 0}:
284 # Note: When training DOTA dataset, double batch size could get OOM on images with >2000 objects.
285 self.test_loader = self.get_dataloader(
286 self.testset, batch_size=batch_size if self.args.task == "obb" else batch_size * 2, rank=-1, mode="val"
287 )
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/ultralytics/models/yolo/detect/train.py:55, in DetectionTrainer.get_dataloader(self, dataset_path, batch_size, rank, mode)
53 shuffle = False
54 workers = self.args.workers if mode == "train" else self.args.workers * 2
---> 55 return build_dataloader(dataset, batch_size, workers, shuffle, rank)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/ultralytics/data/build.py:132, in build_dataloader(dataset, batch, workers, shuffle, rank)
130 nd = torch.cuda.device_count() # number of CUDA devices
131 nw = min(os.cpu_count() // max(nd, 1), workers) # number of workers
--> 132 sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle)
133 generator = torch.Generator()
134 generator.manual_seed(6148914691236517205 + RANK)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/torch/utils/data/distributed.py:68, in DistributedSampler.__init__(self, dataset, num_replicas, rank, shuffle, seed, drop_last)
66 if not dist.is_available():
67 raise RuntimeError("Requires distributed package to be available")
---> 68 num_replicas = dist.get_world_size()
69 if rank is None:
70 if not dist.is_available():
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py:1769, in get_world_size(group)
1766 if _rank_not_in_group(group):
1767 return -1
-> 1769 return _get_group_size(group)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py:841, in _get_group_size(group)
839 """Get a given group's world size."""
840 if group is GroupMember.WORLD or group is None:
--> 841 default_pg = _get_default_group()
842 return default_pg.size()
843 return group.size()
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-e0a34767-666d-4acb-9308-27601297f4b0/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py:1008, in _get_default_group()
1006 """Get the default process group created by init_process_group."""
1007 if not is_initialized():
-> 1008 raise ValueError(
1009 "Default process group has not been initialized, "
1010 "please make sure to call init_process_group."
1011 )
1012 return not_none(GroupMember.WORLD)

Environment

Ultralytics YOLOv8.2.35 🚀 Python-3.11.0rc1 torch-2.3.1+cu121 CUDA:0 (Tesla V100-PCIE-16GB, 16151MiB)
Setup complete ✅ (6 CPUs, 104.0 GB RAM, 42.4/250.9 GB disk)

Minimal Reproducible Example

 %pip install -q -U ultralytics==8.2.35 mlflow torch
 dbutils.library.restartPython()
 
 import os
 from ultralytics import YOLO
 import torch.distributed as dist
 import torch
 
 os.environ["RANK"] = "-1"
 os.environ["WORLD_SIZE"] = "-1"
 
 token = dbutils.notebook.entry_point.getDbutils().notebook().getContext().apiToken().get()
 dbutils.fs.put("file:///root/.databrickscfg","[DEFAULT]\nhost=<host>\ntoken = "+token,overwrite=True)
 
 model = YOLO('yolov8m-seg.pt')
 data_path = "data_path"
 
 model.tune(data=data_path + 'data.yaml', device=0,
            epochs=5, iterations=1, optimizer="AdamW", plots=False, save=False, val=False)
 
 model.train(data=data_path + 'data.yaml', name='yolov8m_seg_train_after_tune', epochs=3, optimizer="AdamW", device=0,
             cfg="<path>/best_hyperparameters.yaml")

Additional

Hello, I'm working on Databricks with a single node GPU cluster (Standard_NC6s_v3).

I am trying to re-train YOLOv8 on a custom dataset after adjusting the hyperparameters. I run all the commands in the same notebook/session. While the tuning works fine, the training raises an error related to the initialization of the process group (as far as I can see).

The error occurs right after the training data are scanned:

New https://pypi.org/project/ultralytics/8.2.36 available 😃 Update with 'pip install -U ultralytics'
Ultralytics YOLOv8.2.35 🚀 Python-3.11.0rc1 torch-2.3.1+cu121 CUDA:0 (Tesla V100-PCIE-16GB, 16151MiB)
Overriding model.yaml nc=80 with nc=1
Transferred 531/537 items from pretrained weights
TensorBoard: Start with 'tensorboard --logdir runs/segment/yolov8m_seg_train_after_tune6', view at http://localhost:6006/
Freezing layer 'model.22.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks with YOLOv8n...
AMP: checks passed ✅
train: Scanning /train/labels.cache... 480 images, 41 backgrounds, 0 corrupt: 100%|██████████| 480/480 [00:00<?, ?it/s]

I've tried manually initialising the process group with 'torch.distributed.init_process_group('nccl')', but it doesn't work.
I don't understand how 'model.tune()' (where the model is also trained) can work successfully while 'model.train()' fails with the same system configuration.
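
For reference, here is a quick check that could be run between model.tune() and model.train() to see what distributed state the trainer will pick up (just an illustrative sketch using the public torch.distributed API, not the failing code itself):

 import os
 import torch.distributed as dist

 # Print the distributed state the trainer will see when it builds the dataloader.
 print("RANK =", os.environ.get("RANK"), "| WORLD_SIZE =", os.environ.get("WORLD_SIZE"))
 print("dist.is_available():", dist.is_available())
 print("dist.is_initialized():", dist.is_initialized())  # True only after init_process_group()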

Thanks.

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
lbeaucourt added the bug label Jun 20, 2024

github-actions bot commented Jun 20, 2024

👋 Hello @lbeaucourt, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of our up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status

If the Ultralytics CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

@lbeaucourt hi there,

Thank you for providing a detailed report and the minimal reproducible example. This is very helpful! 😊

The error you're encountering, ValueError: Default process group has not been initialized, please make sure to call init_process_group, typically arises when the distributed training setup is not properly initialized.

Here are a few steps to help troubleshoot and resolve this issue:

  1. Ensure Latest Versions: First, please make sure you are using the latest versions of torch and ultralytics. You can upgrade them using:

    %pip install -U torch ultralytics
  2. Environment Variables: It seems you are setting RANK and WORLD_SIZE to -1, which indicates single-node training. However, the error suggests that the code is attempting to use distributed training. Ensure that these environment variables are correctly set before running the training:

    os.environ["RANK"] = "0"
    os.environ["WORLD_SIZE"] = "1"
  3. Manual Initialization: If you still encounter issues, try manually initializing the process group before calling model.train(). This can be done as follows:

    import torch.distributed as dist
    
    if torch.cuda.device_count() > 1:
        dist.init_process_group(backend='nccl', init_method='env://')
  4. Training Code: Here is an updated version of your code snippet incorporating the above suggestions:

    %pip install -q -U ultralytics mlflow torch
    dbutils.library.restartPython()
    
    import os
    from ultralytics import YOLO
    import torch.distributed as dist
    import torch
    
    os.environ["RANK"] = "0"
    os.environ["WORLD_SIZE"] = "1"
    
    token = dbutils.notebook.entry_point.getDbutils().notebook().getContext().apiToken().get()
    dbutils.fs.put("file:///root/.databrickscfg","[DEFAULT]\nhost=<host>\ntoken = "+token,overwrite=True)
    
    model = YOLO('yolov8m-seg.pt')
    data_path = "data_path"
    
    model.tune(data=data_path + 'data.yaml', device=0,
               epochs=5, iterations=1, optimizer="AdamW", plots=False, save=False, val=False)
    
    if torch.cuda.device_count() > 1:
        dist.init_process_group(backend='nccl', init_method='env://')
    
    model.train(data=data_path + 'data.yaml', name='yolov8m_seg_train_after_tune', epochs=3, optimizer="AdamW", device=0,
                cfg="<path>/best_hyperparameters.yaml")

Please try these steps and let us know if the issue persists. Your feedback is invaluable to us, and we appreciate your patience as we work to resolve this.

@lbeaucourt
Author

Hi @glenn-jocher, thank you very much for this clear reply!

I tested your solution and it works fine, BUT only for model.train(). Let me explain a bit: if I set the environment variables BEFORE model.tune() as follows:

os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"

model.tune(...)

Then tuning fails with the error: "Default process group has not been initialized, please make sure to call init_process_group".

But if I keep the previous environment variable settings for tuning and only change them before training, it works!

So thanks for your answer, it solves my problem. I'm still not sure I understand why the behaviour differs between model.tune() and model.train(), but it's not a pain point.

The final version of the code that works for me is:

%pip install -q -U ultralytics mlflow torch
dbutils.library.restartPython()

import os
from ultralytics import YOLO
import torch.distributed as dist
import torch

os.environ["RANK"] = "-1"
os.environ["WORLD_SIZE"] = "-1"

token = dbutils.notebook.entry_point.getDbutils().notebook().getContext().apiToken().get()
dbutils.fs.put("file:///root/.databrickscfg","[DEFAULT]\nhost=<host>\ntoken = "+token,overwrite=True)

model = YOLO('yolov8m-seg.pt')
data_path = "data_path"

model.tune(data=data_path + 'data.yaml', device=0,
           epochs=5, iterations=1, optimizer="AdamW", plots=False, save=False, val=False)

os.environ["RANK"] = "0"
os.environ["WORLD_SIZE"] = "1"
if torch.cuda.device_count() > 1:
    dist.init_process_group(backend='nccl', init_method='env://')

model.train(data=data_path + 'data.yaml', name='yolov8m_seg_train_after_tune', epochs=3, optimizer="AdamW", device=0,
            cfg="<path>/best_hyperparameters.yaml")

@glenn-jocher
Member

Hi @lbeaucourt,

Thank you for the detailed follow-up and for sharing your working solution! 😊

It's great to hear that the provided solution works for model.train(). The difference in behavior between model.tune() and model.train() regarding the environment variables and process group initialization is indeed intriguing. This could be due to differences in how these methods handle distributed training under the hood.
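
As a side note, if you want the explicit initialization to be safe to re-run in the same notebook session, you could guard it roughly like this (hypothetical helpers sketched with the public torch.distributed API, not Ultralytics functions):

import torch
import torch.distributed as dist

def init_group_if_needed():
    # Only create the default process group when distributed training is actually
    # possible and no group exists yet, so re-running the cell does not raise
    # "trying to initialize the default process group twice".
    if dist.is_available() and torch.cuda.device_count() > 1 and not dist.is_initialized():
        dist.init_process_group(backend="nccl", init_method="env://")

def destroy_group_if_needed():
    # Tear the default group down once distributed training is finished.
    if dist.is_available() and dist.is_initialized():
        dist.destroy_process_group()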

For now, your approach of setting the environment variables before model.train() and keeping the previous settings for model.tune() seems to be a practical workaround. If you encounter any further issues or have more questions, feel free to reach out.

Happy training! 🚀
