
COCO OKS Metrics Usage #27

Closed
Naman-ntc opened this issue Sep 22, 2018 · 6 comments

Comments

@Naman-ntc

Hi, I am unable to understand how OKS is calculated in the experiments using the COCO dataset.
In the train function in lib/core/function.py you seem to call accuracy from lib/core/evaluate.py, but that accuracy is PCKh, right? So how do you calculate OKS?

Could you please explain the steps to calculate OKS given that I use your dataloader? Thanks a lot in advance!

@Naman-ntc Naman-ntc changed the title COCO Metrics Usage COCO OKS Metrics Usage Sep 22, 2018
@leoxiaobin
Contributor

We use a simple PCKh metric to track the training procedure; OKS is only used for validation. You can look at the code at https://github.com/Microsoft/human-pose-estimation.pytorch/blob/d69ed56bdbc1f16a288921e302c87fcb33554e37/lib/dataset/coco.py#L273.
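For reference, COCO defines OKS per object as a visibility-weighted mean of Gaussian keypoint similarities: OKS = Σᵢ exp(−dᵢ² / (2 s² kᵢ²)) · δ(vᵢ > 0) / Σᵢ δ(vᵢ > 0), where dᵢ is the Euclidean distance between predicted and ground-truth keypoint i, s² is the object segment area, and kᵢ is a per-keypoint constant. A minimal NumPy sketch along the lines of pycocotools' `computeOks` (the function and variable names here are illustrative, not from this repository):

```python
import numpy as np

# Per-keypoint constants published for the 17 COCO keypoints.
K = np.array([.026, .025, .025, .035, .035, .079, .079, .072, .072,
              .062, .062, .107, .107, .087, .087, .089, .089])

def oks(pred, gt, vis, area, k=K):
    """Compute OKS for one (prediction, ground-truth) pair.

    pred, gt: (17, 2) keypoint coordinates
    vis:      (17,) visibility flags (> 0 means labeled/visible)
    area:     ground-truth object segment area (the s^2 scale term)
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)        # squared distances d_i^2
    var = (2 * k) ** 2                           # per-keypoint variance term
    e = d2 / (var * (area + np.spacing(1)) * 2)  # normalized error exponent
    mask = vis > 0
    if mask.sum() == 0:
        return 0.0
    return float(np.mean(np.exp(-e[mask])))      # average over labeled keypoints
```

Because it only needs one prediction and one ground-truth annotation, this can be evaluated per sample; the repository instead batches all predictions and hands them to the COCO API at the end of validation.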

@Naman-ntc
Author

Naman-ntc commented Sep 23, 2018

Does this piece of code perform the validation:
https://github.com/Microsoft/human-pose-estimation.pytorch/blob/d69ed56bdbc1f16a288921e302c87fcb33554e37/lib/core/function.py#L180-L182
I see that you perform validation on the whole dataset at once, after concatenating all_boxes and all_preds for the complete dataset.
Is it not possible to compute OKS individually per sample? (Sorry if this is a dumb question, I haven't really understood OKS properly yet.)

@leoxiaobin
Contributor

Yes, that code is for OKS evaluation. You can also compute OKS for any number of samples. I designed it like this because I wanted a simple PCKh metric to track the training procedure for any dataset. Each dataset then has its own evaluation metric: PCKh@0.5 for MPII, OKS for COCO.

@Naman-ntc
Author

Yeah, I finally got my hands dirty with it and was able to implement and train a stacked hourglass with it. Thanks for the wonderful code and such great responses!

@hengck23

hengck23 commented Jan 7, 2019

It seems that for PCKh, you are using reference_size = [0.1*H, 0.1*W] in your code:

distance_x / (0.1*H)
distance_y / (0.1*W)

The reference sizes for x and y seem to be flipped by mistake?

See evaluate.py:

```python
def accuracy(output, target, hm_type='gaussian', thr=0.5):
    ...
    norm = np.ones((pred.shape[0], 2)) * np.array([h, w]) / 10
```
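To make the suspected swap concrete, here is a small illustrative sketch (the example values and variable names are hypothetical, not from the repository) comparing the normalization as written with the pairing one would expect if coordinates are stored as (x, y):

```python
import numpy as np

h, w = 64, 48     # heatmap height and width (example values)
n_joints = 17

# As written in evaluate.py: the first coordinate is normalized by 0.1*h
# and the second by 0.1*w.
norm_as_written = np.ones((n_joints, 2)) * np.array([h, w]) / 10

# If the coordinate order is (x, y), pairing x with the width and y with
# the height would instead give:
norm_intended = np.ones((n_joints, 2)) * np.array([w, h]) / 10
```

Note that when the heatmap is square (h == w), the two are identical, so the swap would only matter for non-square heatmaps.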

@YuQi9797

Is there a handwritten version of OKS here, instead of calling the COCO API?
