Commit 28b93fe

support pose_templates

liruilong940607 committed Apr 8, 2019 (parent: 1262f9b)

Showing 5 changed files with 484 additions and 35 deletions.
58 changes: 27 additions & 31 deletions README.md
@@ -1,6 +1,6 @@
# Pose2Seg

*Official* code for the paper "Pose2Seg: Detection Free Human Instance Segmentation" [ProjectPage](http://www.liruilong.cn/Pose2Seg/index.html) [arXiv](https://arxiv.org/abs/1803.10683) @ CVPR2019.

The *OCHuman dataset* proposed in our paper is released [here](https://github.com/liruilong940607/OCHumanApi)

@@ -9,7 +9,6 @@ The *OCHuman dataset* proposed in our paper is released [here](https://github.com/liruilong940607/OCHumanApi)
<p> Pipeline of our pose-based instance segmentation framework.</p>
</div>


## Setup environment

@@ -21,39 +20,28 @@

``` bash
python setup.py build_ext install
cd -
```

## Download data

- COCO 2017
- [COCO 2017 Train images [118K/18GB]](http://images.cocodataset.org/zips/train2017.zip)
- [COCO 2017 Val images [5K/1GB]](http://images.cocodataset.org/zips/val2017.zip)
- [COCOPersons Train Annotation (person_keypoints_train2017_pose2seg.json) [166MB]](https://github.com/liruilong940607/Pose2Seg/releases/download/data/person_keypoints_train2017_pose2seg.json)
- [COCOPersons Val Annotation (person_keypoints_val2017_pose2seg.json) [7MB]](https://github.com/liruilong940607/Pose2Seg/releases/download/data/person_keypoints_val2017_pose2seg.json)

- OCHuman
- [images [667MB] & annotations](https://cg.cs.tsinghua.edu.cn/dataset/form.html?dataset=ochuman)

**Note**:
`person_keypoints_(train/val)2017_pose2seg.json` is a subset of `person_keypoints_(train/val)2017.json` (from the [COCO2017 Train/Val annotations](http://images.cocodataset.org/annotations/annotations_trainval2017.zip)); it keeps only the instances that have both keypoint and segmentation annotations, which are the ones we use in our experiments.
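
For reference, such a subset could be rebuilt with `pycocotools` along these lines (a sketch of the selection rule above; the released json files are the ones actually used in the paper):

``` python
# Sketch: keep only person instances that carry both keypoint and
# segmentation annotations (and are not crowd regions).
import json
from pycocotools.coco import COCO

coco = COCO('annotations/person_keypoints_val2017.json')
ann_ids = coco.getAnnIds(catIds=coco.getCatIds(catNms=['person']))
kept = [ann for ann in coco.loadAnns(ann_ids)
        if ann['num_keypoints'] > 0 and ann['segmentation'] and not ann['iscrowd']]

subset = dict(coco.dataset, annotations=kept)
with open('person_keypoints_val2017_pose2seg.json', 'w') as f:
    json.dump(subset, f)
```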

## Setup data

The `data` folder should be like this:

data
├── coco2017 ([link](http://cocodataset.org/))
│   ├── annotations
│   │   ├── person_keypoints_train2017_pose2seg.json ([download](https://github.com/liruilong940607/Pose2Seg/releases/download/data/person_keypoints_train2017_pose2seg.json))
│   │   ├── person_keypoints_val2017_pose2seg.json ([download](https://github.com/liruilong940607/Pose2Seg/releases/download/data/person_keypoints_val2017_pose2seg.json))
│   ├── train2017
│   │   ├── ####.jpg
│   ├── val2017
│   │   ├── ####.jpg
├── OCHuman ([link](https://github.com/liruilong940607/OCHumanApi)) ([download](https://cg.cs.tsinghua.edu.cn/dataset/form.html?dataset=ochuman))
│   ├── annotations
│   │   ├── ochuman_coco_format_test_range_0.00_1.00.json
│   │   ├── ochuman_coco_format_val_range_0.00_1.00.json
│   ├── images
│   │   ├── ####.jpg

## How to train

@@ -71,6 +59,14 @@

## How to test

This allows you to test the model on (1) the COCOPersons val set and (2) the OCHuman val and test sets:

``` bash
python test.py --weights last.pkl --coco --OCHuman
```

## About Pose-templates

<div align="center">
<img src="figures/pose_templates.png" width="500px"/>
<p> Pose templates clustered using K-means on COCO.</p>
</div>

This repo already contains the template file `modeling/templates.json` used in our paper, but you are free to explore different cluster parameters, as discussed in the paper. See [visualize_cluster.ipynb](visualize_cluster.ipynb) for an example.
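
To experiment with your own templates, a call like the following should work (a sketch based on `cluster_pose.py` below; it assumes the data layout above and writes to `./modeling/templates2.json`, so the shipped template file is not overwritten):

``` python
from cluster_pose import cluster

# Cluster the normalized COCO poses into 3 templates and visualize the centers.
res = cluster(dataset='coco', cat_num=3, vis_threshold=0.4,
              minpoints=8, save_file='./modeling/templates2.json', visualize=True)
```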



160 changes: 160 additions & 0 deletions cluster_pose.py
@@ -0,0 +1,160 @@
import json
import numpy as np
from tqdm import tqdm
from scipy.cluster import vq
import cv2
import matplotlib.pyplot as plt

from datasets.CocoDatasetInfo import CocoDatasetInfo
from lib.transforms import get_cropalign_matrix, warpAffinePoints

def draw_skeleton(normed_kpts, h=200, w=200, vis_threshold=0, is_normed=True, returnimg=False):
    # 1-based COCO keypoint connections (limbs) used for drawing.
    origin_connections = [[16,14],[14,12],[17,15],[15,13],[12,13],
                          [6,12],[7,13],[6,7],[6,8],[7,9],[8,10],
                          [9,11],[2,3],[1,2],[1,3],[2,4],[3,5],[4,6],[5,7]]
    img = np.zeros((int(h), int(w)), dtype=np.float32)
    kptsv = normed_kpts.copy()
    if is_normed:
        # Scale normalized [0, 1] coordinates up to pixel coordinates.
        kptsv[:, 0] *= w
        kptsv[:, 1] *= h
    kptsv = np.int32(kptsv)

    for kptv in kptsv:
        if kptv[-1] > vis_threshold:
            cv2.circle(img, (kptv[0], kptv[1]), 4, (255, 0, 0), -1)
    # Highlight keypoint index 15 (left_ankle) with a larger circle.
    idx = 15
    cv2.circle(img, (kptsv[idx][0], kptsv[idx][1]), 10, (0, 0, 255), -1)
    for conn in origin_connections:
        if kptsv[conn[0] - 1][-1] > vis_threshold and kptsv[conn[1] - 1][-1] > vis_threshold:
            p1, p2 = kptsv[conn[0] - 1], kptsv[conn[1] - 1]
            cv2.line(img, (p1[0], p1[1]), (p2[0], p2[1]), (255, 0, 0), 2)

    if returnimg:
        return img
    else:
        plt.imshow(img)
        plt.show()
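
# (Added note) Usage sketch: render a single normalized pose onto a 200x200
# canvas, e.g.:
#   canvas = draw_skeleton(kpts, 200, 200, vis_threshold=0.4, returnimg=True)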

def norm_kpt_by_box(kpts, boxes, keep_ratio=True):
    # Map each instance's keypoints into its bounding box, i.e. into the
    # unit square [0, 1] x [0, 1].
    normed_kpts = np.array(kpts).copy()
    normed_kpts = np.float32(normed_kpts)

    for i, (kpt, box) in enumerate(zip(kpts, boxes)):
        H = get_cropalign_matrix(box, 1.0, 1.0, keep_ratio)
        normed_kpts[i, :, 0:2] = warpAffinePoints(kpt[:, 0:2], H)

    # Keep the COCO convention: invisible keypoints (v == 0) are zeroed out.
    inds = np.where(normed_kpts[:, :, 2] == 0)
    normed_kpts[inds[0], inds[1], :] = 0
    return normed_kpts
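
# (Added note) After norm_kpt_by_box, every visible keypoint lies in the unit
# square: x and y are in [0, 1] relative to the instance's box.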

def cluster_zixi(kpts, cat_num):
    # kpts: center-normalized (N, 17, 3)
    datas = np.array(kpts)
    # Place invisible keypoints at the box center so they stay neutral in the
    # distance computation.
    inds = np.where(datas[:, :, 2] == 0)
    datas[inds[0], inds[1], 0:2] = 0.5

    # k-means on the flattened (N, 51) poses; returns (centroids, labels).
    datas = datas.reshape(len(datas), -1)
    res = vq.kmeans2(datas, cat_num, minit='points', iter=100)
    return res

def cluster(dataset='coco', cat_num=3, vis_threshold=0.4,
            minpoints=8, save_file='./modeling/templates2.json', visualize=False):
    # We tried `cat_num` from 1 to 6 several times to see what the cluster
    # centers look like as the number of groups varies. Although k-means
    # depends heavily on its initialization, it produced nearly the same
    # cluster centers every run when `cat_num` = 3, so we assume the COCO
    # dataset naturally forms 3 clusters. (A TODO is to visualize this
    # dataset.) The visualized cluster centers also look reasonable:
    # (1) a full body, (2) a full body without the head, and (3) an upper
    # body. Note that (2) seems to represent the back view of a person.

    if dataset == 'coco':
        datainfos = CocoDatasetInfo('./data/coco2017/train2017',
                                    './data/coco2017/annotations/person_keypoints_train2017_pose2seg.json',
                                    loadimg=False)

        # 1-based COCO skeleton connections, keypoint names, and the
        # left/right flip mapping (all saved into the template file below).
        connections = [[16,14],[14,12],[17,15],[15,13],[12,13],
                       [6,12],[7,13],[6,7],[6,8],[7,9],[8,10],
                       [9,11],[2,3],[1,2],[1,3],[2,4],[3,5],[4,6],[5,7]]

        names = ["nose",
                 "left_eye", "right_eye",
                 "left_ear", "right_ear",
                 "left_shoulder", "right_shoulder",
                 "left_elbow", "right_elbow",
                 "left_wrist", "right_wrist",
                 "left_hip", "right_hip",
                 "left_knee", "right_knee",
                 "left_ankle", "right_ankle"]

        flip_map = {'left_eye': 'right_eye',
                    'left_ear': 'right_ear',
                    'left_shoulder': 'right_shoulder',
                    'left_elbow': 'right_elbow',
                    'left_wrist': 'right_wrist',
                    'left_hip': 'right_hip',
                    'left_knee': 'right_knee',
                    'left_ankle': 'right_ankle'}

        def flip_keypoints(keypoints, keypoint_flip_map, keypoint_coords, width):
            """Left/right flip keypoint_coords. keypoints and keypoint_flip_map are
            accessible from get_keypoints().
            """
            flipped_kps = keypoint_coords.copy()
            for lkp, rkp in keypoint_flip_map.items():
                lid = keypoints.index(lkp)
                rid = keypoints.index(rkp)
                flipped_kps[:, :, lid] = keypoint_coords[:, :, rid]
                flipped_kps[:, :, rid] = keypoint_coords[:, :, lid]

            # Flip x coordinates
            flipped_kps[:, 0, :] = width - flipped_kps[:, 0, :]
            # Maintain COCO convention that if visibility == 0, then x, y = 0
            inds = np.where(flipped_kps[:, 2, :] == 0)
            flipped_kps[inds[0], 0, inds[1]] = 0
            return flipped_kps

        all_kpts = []
        for idx in tqdm(range(len(datainfos))):
            rawdata = datainfos[idx]
            gt_boxes = rawdata['boxes']
            gt_kpts = rawdata['gt_keypoints'].transpose(0, 2, 1)  # (N, 17, 3)
            gt_ignores = rawdata['is_crowd']  # unused here
            normed_kpts = norm_kpt_by_box(gt_kpts, gt_boxes)
            # Augment with the left/right-flipped poses (width = 1.0 because
            # the keypoints are already normalized to the unit box).
            normed_kpts_flipped = flip_keypoints(names, flip_map,
                                                 normed_kpts.transpose(0, 2, 1), 1.0).transpose(0, 2, 1)
            normed_kpts = np.vstack((normed_kpts, normed_kpts_flipped))
            for kpt in normed_kpts:
                if np.sum(kpt) == 0:
                    # No annotated keypoints at all.
                    continue
                elif np.sum(kpt[:, 2] > 0) < minpoints:
                    # Too few visible keypoints to make a useful template sample.
                    continue
                else:
                    all_kpts.append(kpt)
        all_kpts = np.array(all_kpts)
        print('data to be clustered:', all_kpts.shape)

        res = cluster_zixi(all_kpts, cat_num)

        save_dict = {}
        save_dict['connections'] = connections
        save_dict['names'] = names
        save_dict['flip_map'] = flip_map
        save_dict['vis_threshold'] = vis_threshold
        save_dict['minpoints'] = minpoints
        save_dict['templates'] = [item.tolist() for item in res[0]]
        if save_file is not None:
            with open(save_file, 'w') as result_file:
                json.dump(save_dict, result_file)

        if visualize:
            for center in res[0]:
                center = center.reshape(-1, 3)
                draw_skeleton(center, 200, 200, vis_threshold)

        print('cluster() done.')
        return res

    else:
        raise NotImplementedError
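
# (Added sketch, not part of the original commit) A minimal driver; cat_num=3
# matches the released modeling/templates.json, as discussed in cluster() above.
if __name__ == '__main__':
    cluster(dataset='coco', cat_num=3, vis_threshold=0.4,
            minpoints=8, save_file='./modeling/templates2.json', visualize=True)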
Binary file added figures/pose_templates.png
7 changes: 3 additions & 4 deletions test.py
@@ -66,7 +66,6 @@ def do_eval_coco(image_ids, coco, results, flag):
        _str += '%.3f ' % value
    logger(_str)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Pose2Seg Testing")
    parser.add_argument(
@@ -93,7 +92,7 @@ def do_eval_coco(image_ids, coco, results, flag):

    print('===========> testing <===========')
    if args.coco:
        test(model, dataset='cocoVal')
    if args.OCHuman:
        test(model, dataset='OCHumanVal')
        test(model, dataset='OCHumanTest')
