
The memory usage of GPU 0 will increase until it is out of memory #13014

Open · 2 tasks done
all-for-code opened this issue May 15, 2024 · 4 comments
Labels: bug Something isn't working

@all-for-code

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

No response

Bug

When resume=True is set and training is resumed from a previous weights checkpoint, GPU 0's memory usage keeps increasing every time validation runs, until it runs out of memory.
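
For reference, resuming a YOLOv5 run is typically started along these lines (the checkpoint path and GPU indices here are illustrative):

python train.py --resume runs/train/exp/weights/last.pt  # single GPU
python -m torch.distributed.run --nproc_per_node 2 train.py --resume runs/train/exp/weights/last.pt --device 0,1  # multi-GPU DDP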

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes, I'd like to help by submitting a PR!
@all-for-code all-for-code added the bug Something isn't working label May 15, 2024
Contributor

👋 Hello @all-for-code, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of our up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status

[YOLOv5 CI status badge]

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics

@glenn-jocher
Member

@all-for-code hello! Thank you for raising this issue and offering to help with a PR! 😊

It seems like you're encountering a memory leak when resuming training with the resume=True setting. This issue could potentially involve PyTorch's caching mechanism or improper release of resources during validation.

To help isolate the problem, you might try clearing the cache manually by calling torch.cuda.empty_cache() at the end of each validation cycle. Alternatively, periodically resetting the DataLoader for the validation data might also help control memory usage.
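
As a rough illustration, here is a minimal sketch of that idea in a generic PyTorch validation loop (the function and variable names are illustrative, not YOLOv5's actual val.py internals):

import gc
import torch

def run_validation(model, val_loader, device):
    # Run one validation pass without building autograd graphs.
    model.eval()
    with torch.no_grad():
        for images, targets in val_loader:
            images = images.to(device, non_blocking=True)
            _ = model(images)  # metric accumulation omitted for brevity
    # Drop Python-side references, then release cached CUDA blocks so
    # GPU 0's reserved memory does not keep growing between epochs.
    gc.collect()
    torch.cuda.empty_cache()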

If these suggestions alleviate the memory issue, feel free to initiate a PR with your findings or other insights that might fix the problem. Your contributions are invaluable, and we look forward to seeing your solution! 🌟

@all-for-code
Author

@glenn-jocher hello! It seems there is another problem here.
When I train with multiple GPUs, I noticed that during the validation phase after each training epoch, the memory usage of the first GPU fluctuates, while the memory usage of the other GPUs remains constant. This differs from YOLOv8, where only the first GPU uses memory during this phase and the memory of the other GPUs is released.

@glenn-jocher
Member

Hello @all-for-code! Thanks for your observation. 🌟

In YOLOv5, GPU 0 handles additional tasks like maintaining the Exponential Moving Average (EMA) and managing checkpoints, which can lead to higher memory usage compared to other GPUs. This behavior differs from YOLOv8 as you noted.

If the fluctuating memory usage on GPU 0 during validation is concerning, you might consider manually managing memory by invoking torch.cuda.empty_cache() after validation to help stabilize it. This can be particularly useful if you're observing out-of-memory errors.
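
As a rough sketch of how that could look in a DDP setup (illustrative only, not the actual YOLOv5 train.py code; the helper name is hypothetical):

import torch
import torch.distributed as dist

def end_of_epoch_cleanup(rank: int) -> None:
    # Hypothetical helper: call once per epoch, after the rank-0 validation pass.
    if rank == 0 and torch.cuda.is_available():
        # Rank 0 also holds the EMA weights, checkpoints and validation tensors,
        # so releasing cached blocks here is what stabilizes its usage.
        torch.cuda.empty_cache()
    if dist.is_initialized():
        dist.barrier()  # keep the other ranks in step before the next epoch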

Let us know if this helps or if the issue persists!
