
6 Gb of GPU. Pose Detection using cudNN and low net resolution #1946

Open
ppueyor opened this issue May 17, 2021 · 8 comments


ppueyor commented May 17, 2021

Issue Summary

When using OpenPose for pose detection with cuDNN and the lowest available net resolution (320x176), my GPU memory usage is 5968 MiB, which leaves little room for anything else on the GPU.

I believe I installed cuDNN and CUDA properly (both nvidia-smi and the OpenPose build output confirm them).

Is this a normal amount of memory for OpenPose to use?
Are there other things I can do to reduce it?

I'm using the C++ wrapper; nothing else in my program uses the GPU.
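
For reference, the setup I am describing corresponds roughly to the following standalone invocation (a sketch only; I actually pass these options through the C++ wrapper, and I am assuming the default BODY_25 model and the bundled example video):

# Roughly the configuration described above, run via the demo binary
./build/examples/openpose/openpose.bin --video examples/media/video.avi --net_resolution "320x176"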

Type of Issue

  • Question

Your System Configuration

  1. Whole console output: https://pastebin.com/U24Wk5qb

  2. OpenPose version: 1.7.0

  3. General configuration:

    • Installation mode: CMake (Ubuntu)
    • Operating system: Ubuntu 20
    • Release or Debug mode:
    • Compiler:
  4. CUDA

    • CUDA version: 11.2
    • cuDNN version: 8.2
    • GPU model: GeForce GTX 1070

ravijo (Contributor) commented May 20, 2021

@ppueyor

The console output provided on Pastebin shows the output of the nvidia-smi command. I cannot find OpenPose there; instead, a custom project called hello_drone is consuming almost all of the memory.

To simplify the debugging process, I suggest you run standalone OpenPose with an example video file, as shown below:

# Ubuntu
./build/examples/openpose/openpose.bin --video examples/media/video.avi

Please monitor the GPU usage during the above execution. It should consume only a small amount of memory.
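
To monitor it, one option (just a suggestion) is to sample the GPU memory from a second terminal while OpenPose is running:

# Print GPU memory usage once per second
nvidia-smi --query-gpu=memory.used --format=csv -l 1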

Furthermore, I suspect the OpenPose wrapper may not be used correctly, which could be what is eating so much memory. If so, I highly recommend checking out ros_openpose, a ROS wrapper for OpenPose.

ppueyor (Author) commented May 24, 2021

Hi, thanks for the response.

I ran the given command, and it consumes even more memory: 7 GB.

https://pastebin.com/FmUFRxrD

I think I installed it properly, following the tutorial and using CUDA.

ravijo (Contributor) commented May 25, 2021

@ppueyor

Thank you for sharing the information. At this stage, I am not sure what is causing the huge memory consumption. I have used OpenPose with GeForce GTX 10XX series GPUs and found the memory usage normal.

Maybe the support team from OpenPose can help you out further.

ppueyor (Author) commented Jun 9, 2021

How much memory does it consume in your case?

ravijo (Contributor) commented Jun 10, 2021

@ppueyor

I don't remember the exact value. I was using GeForce GTX 1080 at that time. OpenPose ran smoothly (with face + hand tracking enabled). In fact, I was also using OpenPose inside ROS without any memory issues.

The OpenPose documentation recommends having a GPU with at least 4 GB of memory, so something suspicious is happening in your case.

OpenPose uses the Caffe framework internally. When you have time, you can check whether the culprit is Caffe. Please make sure you are using CMU-Perceptual-Computing-Lab/caffe, a fork tweaked for OpenPose.
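
For example, one quick sanity check (just a sketch, assuming the default CMake build directory) is to see which Caffe library the demo binary actually links against:

# Show which libcaffe the OpenPose demo binary is linked against
ldd ./build/examples/openpose/openpose.bin | grep -i caffe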

@zhuwei-jim

On my PC with Ubuntu 20 and CUDA 11.4, running OpenPose 1.7 also uses 6 GB of GPU memory. But on Ubuntu 16 with CUDA 10.1, the GPU memory used is 3 GB. My GPU is a GTX 1080.

stale bot commented Jan 9, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale/old label Jan 9, 2022

sswam commented Apr 12, 2023

Mine is using 18 to 19 GB of VRAM on my 3090, even when running on low-resolution images. I built it against cuDNN 8.9.0.131-1+cuda11.8 on Debian bookworm. Not sure how to troubleshoot this.
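
One thing I can check (a sketch, in case it helps others) is whether the memory is really attributed to the OpenPose process rather than something else running on the GPU:

# Per-process GPU memory usage
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv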
