PoseNet PyTorch

This repository contains a PyTorch implementation (multi-pose only) of the Google TensorFlow.js PoseNet model.

This port is based on my TensorFlow Python conversion of the same model. In this implementation, an additional step of the algorithm is performed on the GPU, so it is faster and consumes less CPU (but more GPU). On a GTX 1080 Ti (or better) it can run at over 130 fps.

Further optimization is possible, as the MobileNet base models have a throughput of 200-300 fps.


Install

A suitable Python 3.x environment with a recent version of PyTorch is required. Development and testing was done with Python 3.7.1 and PyTorch 1.0 w/ CUDA 10 from Conda.

If you want to use the webcam demo, a pip version of OpenCV (`pip install opencv-python==3.4.*`) is required instead of the conda version. Anaconda's default OpenCV does not include ffmpeg/VideoCapture support. The Python bindings for OpenCV 4.0 currently have a broken implementation of drawKeypoints, so please force install a 3.4.x version.

A fresh conda Python 3.6/3.7 environment with the following installs should suffice:

conda install -c pytorch pytorch cudatoolkit
pip install requests opencv-python==3.4.*
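
After installing, a quick sanity check of the environment can catch the common issues above (a minimal sketch; the printed versions are informational only):

import cv2
import torch

# CUDA is used for the GPU decoding step; CPU-only will still run, just slower.
print('PyTorch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())

# Should report a 3.4.x pip build (see the note above about OpenCV 4.0 drawKeypoints).
print('OpenCV:', cv2.__version__)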


Usage

There are three demo apps in the root that utilize the PoseNet model. They are very basic and could definitely be improved.

The first time these apps are run (or the library is used) model weights will be downloaded from the TensorFlow.js version and converted on the fly.

For all demos, the model can be specified with the `--model` argument by using its integer depth multiplier (50, 75, 100, 101). The default is the 101 model.
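
Used as a library, model selection mirrors the `--model` argument. A minimal sketch, assuming the posenet.load_model helper and output_stride attribute that the demo scripts appear to use:

import posenet

# Load by depth multiplier (50, 75, 100, or 101); on first use the TF.js
# weights are downloaded and converted automatically, as noted above.
model = posenet.load_model(101)
model = model.cuda().eval()

# The output stride sets the resolution of the predicted heatmaps/offsets.
output_stride = model.output_stride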

image_demo.py

Image demo runs inference on an input folder of images and outputs those images with the keypoints and skeleton overlaid.

python image_demo.py --model 101 --image_dir ./images --output_dir ./output

A folder of suitable test images can be downloaded by first running the get_test_images.py script.
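
Continuing the sketch above, single-image inference inside the demo looks roughly like the following. read_imgfile, decode_multiple_poses, and the four output tensors are assumptions about the package's internal helpers rather than a documented API:

import torch
import posenet

model = posenet.load_model(101).cuda().eval()
output_stride = model.output_stride

# Hypothetical helper: loads and rescales an image to a stride-compatible size.
input_image, draw_image, output_scale = posenet.read_imgfile(
    './images/example.jpg', scale_factor=1.0, output_stride=output_stride)

with torch.no_grad():
    input_image = torch.Tensor(input_image).cuda()
    # The network outputs keypoint heatmaps plus offset and displacement fields.
    heatmaps, offsets, displacement_fwd, displacement_bwd = model(input_image)

# Hypothetical decoder: greedy multi-pose decoding of the raw network outputs.
pose_scores, keypoint_scores, keypoint_coords = posenet.decode_multiple_poses(
    heatmaps.squeeze(0), offsets.squeeze(0),
    displacement_fwd.squeeze(0), displacement_bwd.squeeze(0),
    output_stride=output_stride, max_pose_detections=10, min_pose_score=0.25)
keypoint_coords *= output_scale  # map keypoints back to original image coordinates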

benchmark.py

A minimal performance benchmark based on image_demo. Images in --image_dir are pre-loaded and inference is run --num_images times, with no drawing and no text output.
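
For example (the flag values here are arbitrary):

python benchmark.py --model 101 --image_dir ./images --num_images 1000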

webcam_demo.py

The webcam demo uses OpenCV to capture images from a connected webcam. The result is overlaid with the keypoints and skeletons and rendered to the screen. The default args for the webcam_demo assume device_id=0 for the camera and that 1280x720 resolution is possible.
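
With those defaults, it can be launched without extra arguments; a non-default camera or resolution would need the corresponding flags:

python webcam_demo.py --model 101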


Credits

The original model, weights, code, etc. were created by Google and can be found at https://github.com/tensorflow/tfjs-models/tree/master/posenet

This port and my work are in no way related to Google.

The Python conversion code that started me on my way was adapted from the CoreML port at https://github.com/infocom-tpo/PoseNet-CoreML

TODO (someday, maybe)

  • More stringent verification of correctness against the original implementation
  • Performance improvements (especially the edge loops in the pose decoding code)
  • OpenGL rendering/drawing
  • Comment interfaces, tensor dimensions, etc.
  • Implement batch inference for image_demo
  • Create a training routine and add models with more advanced CNN backbones