how to reduce GPU memory cost when inference on one camera? #89

Closed
XiangjiBU opened this issue Jun 16, 2020 · 5 comments
Labels
question (Further information is requested) · Stale (Stale and scheduled for closing soon)

Comments

@XiangjiBU

Hi, I ran the demo with "python detect.py --source 0 --weights weights/yolov5s.pt" and the performance is good, but I found the GPU memory cost is more than 4 GB.
I replaced line 40 in detect.py with "dataset = LoadWebcam(source, img_size=imgsz)", but the GPU memory cost didn't go down. Could you suggest a way to reduce GPU memory usage when only one camera is used? Thanks!
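For context, a minimal sketch of the substitution described above, assuming detect.py's existing source and imgsz variables and the utils.datasets module layout of that time. The loader itself keeps frames in host (CPU) memory, so a quick check of what PyTorch actually holds on the GPU can help narrow down where the ~4 GB figure comes from:

```python
# Sketch only, not the exact detect.py code.
import torch
from utils.datasets import LoadWebcam  # LoadStreams is what --source 0 uses by default

dataset = LoadWebcam(source, img_size=imgsz)  # single-camera loader; frames stay on the CPU here

# nvidia-smi reports the CUDA context plus everything PyTorch has reserved;
# these two numbers show how much is actually held by tensors vs. cached by the allocator.
print(f'allocated by tensors: {torch.cuda.memory_allocated() / 1e9:.2f} GB')
print(f'reserved by PyTorch:  {torch.cuda.memory_reserved() / 1e9:.2f} GB')
```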

@XiangjiBU XiangjiBU added the bug (Something isn't working) label Jun 16, 2020
@github-actions
Contributor

github-actions bot commented Jun 16, 2020

Hello @noahbu, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook (Open in Colab), Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue; otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in real time.
  • Edge AI integrated into custom iOS and Android apps for real-time 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model export to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher
Member

@noahbu I've transitioned the repo to use FP16 inference by default when running test.py or detect.py. This might help. git pull to get the latest.

And don't add bug labels to issues that are not bugs in the future.
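For anyone finding this later, here is a minimal sketch of what FP16 inference looks like in PyTorch. It is illustrative rather than the exact detect.py code, and the checkpoint layout (a dict with a 'model' entry) and the 640×640 dummy frame are assumptions for this example:

```python
# Illustrative FP16 inference sketch, not the exact detect.py implementation.
import torch

device = torch.device('cuda:0')

# Assumption: the yolov5s.pt checkpoint of this era stores the model under the 'model' key.
model = torch.load('weights/yolov5s.pt', map_location=device)['model'].float().eval()

half = device.type != 'cpu'  # FP16 inference only makes sense on CUDA
if half:
    model.half()             # cast weights to FP16, roughly halving their memory footprint

with torch.no_grad():                                 # no autograd buffers during inference
    img = torch.zeros(1, 3, 640, 640, device=device)  # stand-in for a preprocessed webcam frame
    img = img.half() if half else img.float()
    img /= 255.0                                      # scale 0-255 pixel values to 0.0-1.0
    pred = model(img)[0]
```

Halving the precision mainly reduces weight and activation memory; the CUDA context overhead that nvidia-smi also reports will still be there.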

@glenn-jocher glenn-jocher added the question (Further information is requested) label and removed the bug (Something isn't working) label Jun 16, 2020
@github-actions
Contributor

github-actions bot commented Aug 1, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the Stale (Stale and scheduled for closing soon) label Aug 1, 2020
@github-actions github-actions bot closed this as completed Aug 9, 2020
@zhou-huan-1

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@glenn-jocher
Member

Hello! 👋 It looks like this issue has been marked as stale due to inactivity. If there are still concerns or additional details to add, please update the thread to keep it open. Otherwise, it will be automatically closed. Thanks for your understanding and contributions! 😊
