
How to get the bounding box? #14

Closed
panda-lab opened this issue Nov 11, 2018 · 3 comments

@panda-lab commented Nov 11, 2018

Hi,
Thanks for your work. Recently I tested your code on the 300W test set, including HELEN, AFLW, and iBUG. When I input the test images with face bounding boxes from MTCNN, I cannot get high accuracy. Could you tell me which detector you adopt in this project? Thank you.
Siyuan

@D-X-Y (Owner) commented Nov 12, 2018

Hi,

I use https://github.com/D-X-Y/SAN/blob/master/cache_data/generate_300W.py#L38 to generate the bounding boxes on 300W. Specifically, two kinds of bounding box are used. One is "OD", which uses the official bounding boxes provided by the 300W website (https://ibug.doc.ic.ac.uk/media/uploads/competitions/bounding_boxes.zip). The other is "GT", which uses the tight bounding box around the facial landmarks.
I'm not surprised that using other bounding boxes decreases performance, because our framework is not designed to be robust w.r.t. different detectors. For SAN, the detector used for training should be the same as the detector used at test time.
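The "GT" box described above is just the min/max of the landmark coordinates. A minimal sketch (the function name `tight_bbox` is our own, not from the repo):

```python
import numpy as np

def tight_bbox(landmarks):
    """Tight bounding box (x1, y1, x2, y2) around facial landmarks.

    `landmarks` is an (N, 2) array of (x, y) points, e.g. the 68
    points of a 300W annotation.
    """
    pts = np.asarray(landmarks, dtype=float)
    x1, y1 = pts.min(axis=0)   # smallest x and y over all points
    x2, y2 = pts.max(axis=0)   # largest x and y over all points
    return (x1, y1, x2, y2)
```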

D-X-Y closed this as completed Nov 12, 2018

@CHELSEA234 commented

Hi @D-X-Y 👏:

I am new to this topic, so some questions may sound silly; thanks for your patience and guidance 👍.

I have seen your answer in issue #14. How did you get the bounding box in the linked script? Is it the "GT" tight bounding box? It looks like a predefined bounding box imported from the .mat data.

Now I need to run your code on a single image of my own. How should I locate the tight bounding box? Can you share an instruction link if one exists, or tell me how you obtained the bounding box for ../cache_data/cache/test_1.png? That one looks pretty good.

Best,
XG

@cmburgul mentioned this issue Sep 21, 2019

@cmburgul commented

The facial bounding box format used by the 300W dataset is (top-left x, top-left y, bottom-right x, bottom-right y). Other detectors such as MTCNN output (top-left x, top-left y, width, height). I took an out-of-dataset image, used MTCNN to detect the bounding box, converted the format, and the predicted facial landmarks are reasonably accurate.
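The conversion described above is a one-liner; a sketch (the helper name `xywh_to_xyxy` is ours, for illustration):

```python
def xywh_to_xyxy(box):
    """Convert an MTCNN-style box (x, y, width, height) to the
    300W-style corner format (x1, y1, x2, y2)."""
    x, y, w, h = box
    return (x, y, x + w, y + h)
```

After this conversion, the box can be fed to the model the same way as the 300W "OD"/"GT" boxes.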
