# C++ Prediction Library Installation

Note that the C++ prediction library requires NVIDIA GPU acceleration. HyperPose is developed and frequently tested on Linux (e.g., Ubuntu 18.04), so we recommend building HyperPose on Linux.

## Container Installation (RECOMMENDED)

To ease installation, you can use the HyperPose library in our Docker image, where the environment (including pretrained models) is pre-installed.

### Prerequisites

To test your Docker environment compatibility and get related instructions:

```bash
wget https://raw.githubusercontent.com/tensorlayer/hyperpose/master/scripts/test_docker.py -qO- | python
```
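
If you prefer a manual check, the sketch below verifies that containers can see the GPU. It assumes the NVIDIA Container Toolkit is installed and that the chosen CUDA base image tag matches your local driver; adjust the tag as needed.

```bash
# Check the host driver first.
nvidia-smi

# Then confirm that Docker containers can access the GPU.
# Replace the image tag with one that matches your CUDA/driver setup.
docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu18.04 nvidia-smi
```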

### Official Docker Image

NVIDIA Docker support (the NVIDIA Container Toolkit) is required to run our Docker image.

The official image is available on Docker Hub.

```bash
# Pull the latest image.
docker pull tensorlayer/hyperpose

# Dive into the image's interactive terminal (connects the local camera and enables the imshow window).
xhost +; docker run --rm --gpus all -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --device=/dev/video0:/dev/video0 --entrypoint /bin/bash tensorlayer/hyperpose

# For users without a camera or an X11 server, simply run without camera and imshow support:
# docker run --rm --gpus all -it --entrypoint /bin/bash tensorlayer/hyperpose
```

Note that the default entry point is the `hyperpose-cli` binary in the build directory (i.e., `/hyperpose/build/hyperpose-cli`).
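
Because the default entry point is the CLI itself, you can also pass flags to it directly instead of opening a shell. A minimal sketch, assuming the gflags-based `hyperpose-cli` accepts the standard `--help` flag:

```bash
# Run the default entry point (hyperpose-cli) and print its available flags.
docker run --rm --gpus all tensorlayer/hyperpose --help
```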

### Build Docker Image from Source

```bash
# Clone the repository and enter its folder.
git clone https://github.com/tensorlayer/hyperpose.git
cd hyperpose
# Build and run the image.
USER_DEF_NAME=my_hyperpose
docker build -t ${USER_DEF_NAME} .
docker run --rm --gpus all ${USER_DEF_NAME}
```

## Build From Source

### Prerequisites

Packages of other versions might also work but are not tested.
For Linux users, we highly recommend installing TensorRT 7 system-wide. You can install it via the [Debian packages](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-debian) or the [NVIDIA network repo](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#maclearn-net-repo-install) (the CUDA and CuDNN dependencies will be installed automatically).
**Warning:** Each TensorRT version requires specific CUDA and CuDNN versions. For the exact CUDA and CuDNN requirements of TensorRT 7, please refer to [the support matrix](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#platform-matrix).
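
To double-check what is installed before building, the hedged sketch below prints the relevant versions, assuming a Debian-based system where CUDA, CuDNN, and TensorRT were installed from NVIDIA's packages:

```bash
# CUDA toolkit version.
nvcc --version

# CuDNN and TensorRT versions as reported by the package manager.
dpkg -l | grep -E "cudnn|nvinfer"
```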

### Build on Ubuntu 18.04

```bash
# >>> Install OpenCV (3+) and other dependencies.
sudo apt -y install cmake libopencv-dev libgflags-dev
# Note: the APT version of OpenCV (3.2) on Ubuntu 18.04 has trouble with cameras; a newer version is suggested.
# For better performance (and camera support), you are highly recommended to build OpenCV 4+ from source.

# >>> Install dependencies to run the scripts in `${REPO}/scripts`.
sudo apt install python3-dev python3-pip

# >>> Install CUDA/CuDNN/TensorRT: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-debian

# >>> Build HyperPose.
git clone https://github.com/tensorlayer/hyperpose.git
cd hyperpose
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release && cmake --build .
```
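
After the build finishes, you can sanity-check the resulting binary from the build directory. A minimal sketch, assuming the gflags-based `hyperpose-cli` exposes the standard `--help` flag:

```bash
# Still inside hyperpose/build: confirm the CLI was built and print its flags.
ls hyperpose-cli
./hyperpose-cli --help
```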

### Build User Codes

You can write your own code and build and run it directly within the hyperpose repository.

- Step 1: Write your own code in `hyperpose/examples/user_codes` with the suffix `.cpp`.
- Step 2: Build it from the repository root:

  ```bash
  mkdir -p build && cd build
  cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_USER_CODES=ON   # BUILD_USER_CODES is "ON" by default.
  cmake --build .
  ```

- Step 3: Execute your code!

Go to Quick Start to test your installation.