
TensorFlow Lite Pose Estimation Python Implementation




TFLitePoseEstimation

This code snippet is heavily based on the TensorFlow Lite Pose Estimation example.
The detection model can be downloaded from the link above.
For a real-time implementation on Android, see the Android Pose Estimation Example.
Follow pose estimation.ipynb for information on how to use the TFLite model in your Python environment.

Details

The posenet_mobilenet_v1_100_257x257_multi_kpt_stripped.tflite model takes a normalized 257x257x3 image as input and produces 4 outputs: the 1st contains the heatmaps, the 2nd the offsets, the 3rd the forward displacements, and the 4th the backward displacements.

For model inference, we need to load, resize, and typecast the image.
In my case, for convenience I used the Pillow library to load the image, divided all pixel values by 255, and cast the resulting NumPy array to float32.
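As a minimal sketch of that preprocessing (the helper name `preprocess` and its `size` parameter are my own, not from the repo):

```python
import numpy as np
from PIL import Image


def preprocess(path, size=257):
    # Load the image, resize it to the model's 257x257 input, and convert it
    # to a float32 array scaled to [0, 1] with a leading batch dimension.
    im = Image.open(path).convert("RGB").resize((size, size))
    arr = np.asarray(im, dtype=np.float32) / 255.0
    return np.expand_dims(arr, axis=0)  # shape (1, 257, 257, 3)
```

The resulting array can be fed directly to the interpreter's input tensor.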

Then, if you follow the instructions provided by Google in load_and_run_a_model_in_python, you will get the four output tensors described above.

Now we need to process this output to use it for pose estimation.
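For reference, invoking the TFLite interpreter and collecting the four outputs might look like the following sketch (the function name `run_pose_model` is mine, not from the repo; it assumes TensorFlow is installed):

```python
def run_pose_model(model_path, input_tensor):
    # Imported inside the function so this sketch only needs TensorFlow when called.
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # input_tensor: float32 array of shape (1, 257, 257, 3)
    interpreter.set_tensor(input_details[0]["index"], input_tensor)
    interpreter.invoke()

    # Four outputs: heatmaps, offsets, forward and backward displacements
    return [interpreter.get_tensor(d["index"]) for d in output_details]
```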

Extract Key points
```python
import math

import numpy as np


def sigmoid(x):
    return 1 / (1 + math.exp(-x))


# heatmaps has shape (1, height, width, numKeypoints)
_, height, width, numKeypoints = heatmaps.shape

keypointPositions = []

# For each keypoint, find the heatmap cell with the highest activation
for keypoint in range(numKeypoints):
    maxVal = heatmaps[0][0][0][keypoint]
    maxRow = 0
    maxCol = 0
    for row in range(height):
        for col in range(width):
            if heatmaps[0][row][col][keypoint] > maxVal:
                maxVal = heatmaps[0][row][col][keypoint]
                maxRow = row
                maxCol = col
    keypointPositions.append([maxRow, maxCol])

confidenceScores = []
yCoords = []
xCoords = []
for idx, position in enumerate(keypointPositions):
    positionY, positionX = position
    # Map the grid cell to 257x257 image coordinates, then add the offset;
    # y offsets occupy the first numKeypoints channels, x offsets the rest
    yCoords.append(positionY / (height - 1) * 257 + offsets[0][positionY][positionX][idx])
    xCoords.append(positionX / (width - 1) * 257 + offsets[0][positionY][positionX][idx + numKeypoints])
    confidenceScores.append(sigmoid(heatmaps[0][positionY][positionX][idx]))

score = np.average(confidenceScores)
score
```
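The nested argmax loops above can also be written with vectorized NumPy. This is my own equivalent sketch (`extract_keypoints` is not part of the repo):

```python
import numpy as np


def extract_keypoints(heatmaps, offsets, input_size=257):
    """heatmaps: (1, H, W, K); offsets: (1, H, W, 2K)."""
    _, h, w, k = heatmaps.shape
    flat = heatmaps[0].reshape(-1, k)             # (H*W, K)
    # Index of the strongest cell per keypoint, converted back to (row, col)
    rows, cols = np.unravel_index(flat.argmax(axis=0), (h, w))
    kp = np.arange(k)
    # Map grid cells to input-image coordinates, then add the offsets
    y = rows / (h - 1) * input_size + offsets[0, rows, cols, kp]
    x = cols / (w - 1) * input_size + offsets[0, rows, cols, kp + k]
    conf = 1.0 / (1.0 + np.exp(-heatmaps[0, rows, cols, kp]))
    return y, x, conf
```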
Visualize Key points and Body joints
```python
from enum import Enum

import matplotlib.pyplot as plt


class BodyPart(Enum):
    NOSE = 0
    LEFT_EYE = 1
    RIGHT_EYE = 2
    LEFT_EAR = 3
    RIGHT_EAR = 4
    LEFT_SHOULDER = 5
    RIGHT_SHOULDER = 6
    LEFT_ELBOW = 7
    RIGHT_ELBOW = 8
    LEFT_WRIST = 9
    RIGHT_WRIST = 10
    LEFT_HIP = 11
    RIGHT_HIP = 12
    LEFT_KNEE = 13
    RIGHT_KNEE = 14
    LEFT_ANKLE = 15
    RIGHT_ANKLE = 16


# Pairs of keypoints connected by a line in the skeleton drawing
bodyJoints = [
    (BodyPart.LEFT_WRIST, BodyPart.LEFT_ELBOW),
    (BodyPart.LEFT_ELBOW, BodyPart.LEFT_SHOULDER),
    (BodyPart.LEFT_SHOULDER, BodyPart.RIGHT_SHOULDER),
    (BodyPart.RIGHT_SHOULDER, BodyPart.RIGHT_ELBOW),
    (BodyPart.RIGHT_ELBOW, BodyPart.RIGHT_WRIST),
    (BodyPart.LEFT_SHOULDER, BodyPart.LEFT_HIP),
    (BodyPart.LEFT_HIP, BodyPart.RIGHT_HIP),
    (BodyPart.RIGHT_HIP, BodyPart.RIGHT_SHOULDER),
    (BodyPart.LEFT_HIP, BodyPart.LEFT_KNEE),
    (BodyPart.LEFT_KNEE, BodyPart.LEFT_ANKLE),
    (BodyPart.RIGHT_HIP, BodyPart.RIGHT_KNEE),
    (BodyPart.RIGHT_KNEE, BodyPart.RIGHT_ANKLE),
]

minConfidence = 0.5

fig, ax = plt.subplots(figsize=(10, 10))

if score > minConfidence:
    ax.imshow(res_im)  # res_im is the resized 257x257 input image
    for joint in bodyJoints:
        ax.plot(
            [xCoords[joint[0].value], xCoords[joint[1].value]],
            [yCoords[joint[0].value], yCoords[joint[1].value]],
            'k-',
        )
    ax.scatter(xCoords, yCoords, s=30, color='r')
    plt.show()
```


I believe you can modify the rest of the code as you see fit.
Thank you!