
AP value for Crowd Human #35

Closed · lazrak-mouad opened this issue May 26, 2020 · 5 comments
@lazrak-mouad commented May 26, 2020

Hi, thank you for this great work.

Sorry to bother you again, but I have an issue when calculating the AP value for the CrowdHuman dataset using the epoch_19.pth.stu pretrained model (CrowdHuman 2). The AP value I got is 12.40, which is far from the 84.1 reported in the repo.

Would you share with us, if possible, the program or procedure used to obtain this value?

Thank you in advance.

@hasanirtiza (Owner)

Can you paste the full command you used to test this model?

@lazrak-mouad (Author)

Yes, of course.

The full command: python tools/test_crowdhuman.py configs/elephant/crowdhuman/cascade_hrnet.py models_pretrained/epoch_ 19 20 --out result.json

PS-1: To keep the program from sleeping, I created epoch_20.pth.stu as a copy of epoch_19.pth.stu.
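A minimal sketch of that workaround, assuming the script polls for a checkpoint file per epoch in the 19–20 range given above (paths taken from the command):

```python
# Duplicate the last checkpoint so the script also finds a file for epoch 20.
import shutil

shutil.copy("models_pretrained/epoch_19.pth.stu",
            "models_pretrained/epoch_20.pth.stu")
```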

PS-2: I reduced the number of dataloader workers to 0 to avoid overloading shared memory (shm).
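As a sketch, assuming an mmdetection-style config (Pedestron builds on mmdetection), that change sits in the config's data dict; the other value here is illustrative, not taken from the actual config:

```python
# Hypothetical fragment of configs/elephant/crowdhuman/cascade_hrnet.py;
# only workers_per_gpu is the setting discussed above.
data = dict(
    imgs_per_gpu=1,     # illustrative value
    workers_per_gpu=0,  # 0 = load in the main process; avoids /dev/shm pressure
)
```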

Results:
fpp: 0.01, score: 0.9979423880577087
fpp: 0.0178, score: 0.9969077706336975
fpp: 0.0316, score: 0.9949955940246582
fpp: 0.0562, score: 0.9920675754547119
fpp: 0.1, score: 0.9861095547676086
fpp: 0.1778, score: 0.9754815697669983
fpp: 0.3162, score: 0.9545246362686157
fpp: 0.5623, score: 0.9163613319396973
fpp: 1.0, score: 0.8399631381034851
ori mean [0.79573628 0.75040029 0.69687723 0.64711036 0.58707189 0.52618753
0.46099312 0.40200195 0.34641451]
mean [-0.22848745 -0.28714849 -0.36114602 -0.43523843 -0.53260799 -0.64209761
-0.77437216 -0.91129833 -1.06011922]
real mean -0.5813906340696805
ori mean [0.79573628 0.75040029 0.69687723 0.64711036 0.58707189 0.52618753
0.46099312 0.40200195 0.34641451]
mean [-0.22848745 -0.28714849 -0.36114602 -0.43523843 -0.53260799 -0.64209761
-0.77437216 -0.91129833 -1.06011922]
real mean -0.5813906340696805
ori mean [0.79573628 0.75040029 0.69687723 0.64711036 0.58707189 0.52618753
0.46099312 0.40200195 0.34641451]
mean [-0.22848745 -0.28714849 -0.36114602 -0.43523843 -0.53260799 -0.64209761
-0.77437216 -0.91129833 -1.06011922]
real mean -0.5813906340696805
ori mean [0.79573628 0.75040029 0.69687723 0.64711036 0.58707189 0.52618753
0.46099312 0.40200195 0.34641451]
mean [-0.22848745 -0.28714849 -0.36114602 -0.43523843 -0.53260799 -0.64209761
-0.77437216 -0.91129833 -1.06011922]
real mean -0.5813906340696805
[0.5591202939538293, 0.5591202939538293, 0.5591202939538293, 0.5591202939538293]
Checkpoint 19: [Reasonable: 55.91%], [Bare: 55.91%], [Partial: 55.91%], [Heavy: 55.91%]

PS-3: I obtained the AP = 12.4 using another repo, not the official test file.
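For readers puzzling over that log: the numbers are consistent with the log-average miss rate (the MR^-2 metric commonly used for pedestrian detection), i.e. the geometric mean of the miss rate over nine FPPI points between 10^-2 and 10^0, not COCO AP. A minimal sketch reproducing the printed values, with the miss rates copied from the "ori mean" row above:

```python
import numpy as np

# Miss rates at the nine reference FPPI points (from the log above).
miss_rates = np.array([0.79573628, 0.75040029, 0.69687723, 0.64711036,
                       0.58707189, 0.52618753, 0.46099312, 0.40200195,
                       0.34641451])

log_mr = np.log(miss_rates)  # matches the "mean" row (-0.2285, -0.2871, ...)
real_mean = log_mr.mean()    # matches "real mean" (-0.5814...)
print(np.exp(real_mean))     # 0.5591 -> reported as 55.91%
```

So the 55.91% is a miss rate (lower is better), which is why it cannot be compared directly with an AP figure like 84.1.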

@hasanirtiza (Owner)

You need to run test.py; see the end of README.md for how to run the test for CrowdHuman:

./tools/test.py configs/elephant/crowdhuman/cascade_hrnet.py ./models_pretrained/epoch_19.pth.stu 8 --out CrowdHuman12.pkl --eval bbox

or this for multiple GPUs:

./tools/dist_test.sh configs/elephant/crowdhuman/cascade_hrnet.py ./models_pretrained/epoch_19.pth.stu 8 --out CrowdHuman12.pkl --eval bbox

@lazrak-mouad (Author)

Thank you so much for the guidelines.

After running the following command: Pedestron# ./tools/dist_test.sh configs/elephant/crowdhuman/cascade_hrnet.py ./models_pretrained/epoch_19.pth.stu 1 --out CrowdHuman12.pkl --eval bbox, I got the following results:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.536
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.840
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.575
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.421
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.534
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.561
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.035
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.278
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.627
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.560
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.621
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.645

Is there an explanation for the multiple values of AP and AR?

Thank you in advance.

@hasanirtiza (Owner)

Read the COCO evaluation protocol in detail.
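For completeness: those twelve lines are the standard COCO summary. The first AP line averages over IoU thresholds 0.50:0.95 (step 0.05); the next two report the single thresholds IoU=0.50 and IoU=0.75; the small/medium/large rows restrict evaluation by object area; and the AR rows vary maxDets, the cap on detections kept per image. The 0.840 at IoU=0.50 is presumably the figure reported as 84.1 in the repo. A minimal sketch of how such a summary is produced with pycocotools (the JSON file names are placeholders, not files from this repo):

```python
# Hedged sketch of COCO-style bbox evaluation; file names are hypothetical.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("crowdhuman_val_coco.json")    # ground truth in COCO format
coco_dt = coco_gt.loadRes("detections.json")  # detections to evaluate

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # per-image, per-category matching
coco_eval.accumulate()  # precision/recall curves over thresholds
coco_eval.summarize()   # prints the twelve AP/AR lines shown above
```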
