
Getting landmark coordinates #2

Open
richliao opened this issue Feb 28, 2017 · 4 comments

Comments

@richliao

Dear Ziwei,

Awesome and interesting work!

Would you mind shedding some light on the purpose of this statement?

get_orig_coordinate = @(p)((p+0.5)*224-repmat([offset(2),offset(1)]',[pipline.num_points,1]))/scale;

I can't relate it to the paper, particularly the (p+0.5)*224 part. I don't have MATLAB, so I won't be able to debug the values, but when I run pyCaffe, the landmark values coming out of the stage-1 forward pass are very small, the same as the pseudo labels (all below 0.01). Any explanation would be greatly appreciated! Thanks.

@yysijie
Collaborator

yysijie commented Mar 1, 2017

Dear richliao,
Thanks for your interest in our work.
We normalized landmarks to [-0.5, 0.5] during training, and trained the model on 224×224 images.
This operation simply projects the normalized landmarks back into the absolute coordinate frame.
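For readers working in pyCaffe rather than MATLAB, here is a minimal NumPy sketch of the same back-projection. It assumes, as in the MATLAB one-liner, that `offset` is the (row, col) origin of the crop in the original image and `scale` is the resize factor from the crop to the 224×224 network input; the function and argument names are illustrative, not from the codebase.

```python
import numpy as np

def get_orig_coordinate(p, offset, scale, num_points):
    """Project normalized landmarks back to original-image coordinates.

    p       : flat array [x1, y1, x2, y2, ...] normalized to [-0.5, 0.5]
    offset  : (row, col) origin of the crop in the original image (assumed)
    scale   : resize factor from the original crop to the 224x224 input
    """
    p = np.asarray(p, dtype=float)
    # Map [-0.5, 0.5] back to [0, 224] pixel coordinates in the network input.
    pix = (p + 0.5) * 224.0
    # Tile the per-landmark (x, y) offset. MATLAB's [offset(2), offset(1)]
    # swaps (row, col) into (x, y) order, mirrored here as offset[1], offset[0].
    off = np.tile([offset[1], offset[0]], num_points)
    # Undo the crop offset and the resize scale.
    return (pix - off) / scale
```

With no crop and unit scale, p = 0 maps to pixel 112 (the image center) and p = ±0.5 maps to the image borders, which is why stage-1 outputs near 0.01 are perfectly normal: they are normalized coordinates, not pixels.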

@richliao
Author

richliao commented Mar 3, 2017

Thanks for the explanation, Ziwei!

Can you elaborate a bit on how to convert the output from a 1-D vector into 2-D (x, y) coordinates? Also, I don't see any input bounding box; are you assuming the box is the full size of the image (224×224)? Thanks much.

@yysijie
Collaborator

yysijie commented Mar 4, 2017

The output vector has the form [x1, y1, x2, y2, ...].

Yes, and the MATLAB script resizes images to 224×224 before testing.
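Since the x and y values are interleaved in the flat output, splitting them into (x, y) pairs is a single reshape in NumPy (the variable name `out` below is illustrative):

```python
import numpy as np

# Assumed: `out` is the flat network output [x1, y1, x2, y2, ...],
# normalized to [-0.5, 0.5].
out = np.array([0.1, -0.2, 0.3, 0.05, -0.4, 0.25])

# Row-major reshape: row i holds landmark i as (x_i, y_i).
landmarks = out.reshape(-1, 2)
```

This works because `reshape` fills rows in C (row-major) order, so consecutive pairs in the flat vector become consecutive rows.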

@liuziwei7
Owner

@richliao Yes, the inputs to our Deep Fashion Alignment (DFA) are clothes bounding boxes. We treat this detection and cropping procedure as pre-processing and don't include it in this codebase. But you can definitely find bounding box annotations in the DeepFashion dataset.
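The cropping step also produces the `offset` and `scale` used later to map predictions back to the original image. A rough sketch of that pre-processing, assuming a (x1, y1, x2, y2) bounding-box format and using a simple nearest-neighbor resize for illustration (the actual pipeline's resize method is not specified here):

```python
import numpy as np

def crop_and_resize(image, bbox, size=224):
    """Crop a clothes bounding box and resize it to the network input size.

    image : H x W (x C) array
    bbox  : (x1, y1, x2, y2) box in image coordinates (assumed format)
    Returns the resized crop plus the (offset, scale) needed to project
    predictions back into original-image coordinates.
    """
    x1, y1, x2, y2 = bbox
    crop = image[y1:y2, x1:x2]
    h, w = crop.shape[:2]
    # Single uniform scale, matching the lone `scale` in the MATLAB snippet.
    scale = size / max(h, w)
    # Nearest-neighbor resample via integer index arrays, clamped to bounds.
    rows = np.minimum((np.arange(size) / scale).astype(int), h - 1)
    cols = np.minimum((np.arange(size) / scale).astype(int), w - 1)
    resized = crop[rows[:, None], cols[None, :]]
    offset = (y1, x1)  # (row, col) crop origin, as consumed by the projection
    return resized, offset, scale
```

With uniform scaling, the shorter side simply repeats its edge pixels here; a real pipeline would more likely pad or resize each axis separately.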
