Getting landmark coordinates #2
Dear richliao,
Thanks for the explanation, Ziwei! Can you elaborate a bit on how to convert from a vector (1-dimensional) to x and y coordinates, which are 2-dimensional? Also, I don't see any input box; are you assuming the box is the full size of the image (224x224)? Thanks much.
The meaning of the output vector is [x1, y1, x2, y2, ...]. Yes, and the MATLAB script resizes images to 224x224 before testing.
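Based on that layout, unpacking the flat vector into (x, y) pairs is a simple reshape. A minimal sketch, assuming a hypothetical 4-landmark output (the values below are made up for illustration):

```python
import numpy as np

# Hypothetical stage output: 8 values for 4 landmarks,
# ordered [x1, y1, x2, y2, ...] as described above.
pred = np.array([0.1, -0.2, 0.3, 0.05, -0.1, 0.4, 0.2, -0.3])

# Reshape into (num_points, 2) so each row is one (x, y) landmark.
coords = pred.reshape(-1, 2)
print(coords.shape)  # (4, 2)
print(coords[0])     # first landmark: [ 0.1 -0.2]
```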
@richliao Yes, the inputs to our Deep Fashion Alignment (DFA) are clothes bounding boxes. We treat this detection and cropping procedure as pre-processing and don't include it in this codebase. But you can definitely find bounding box annotations in the DeepFashion dataset. |
Dear Ziwei,
Awesome and interesting work!
Do you mind shedding some light on the purpose of the statement get_orig_coordinate = @(p)((p+0.5)*224-repmat([offset(2),offset(1)]',[pipline.num_points,1]))/scale;?
I cannot relate this to the paper, particularly (p+0.5)*224. I don't have MATLAB, so I won't be able to debug the values, but when I run pyCaffe the landmark values that come out of the stage 1 forward pass are very small, the same as the pseudo labels (all lower than 0.01). Any explanation would be greatly appreciated! Thanks.
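For reference, what the MATLAB one-liner appears to compute can be sketched in Python. This is an interpretation, not the authors' code: it assumes `p` is the flat [x1, y1, x2, y2, ...] vector normalized roughly to [-0.5, 0.5] (which would explain the small raw values), `offset` is the (row, col) padding applied before resizing, and `scale` is the resize factor from the original crop to 224x224:

```python
import numpy as np

def get_orig_coordinate(p, offset, scale, num_points):
    """Sketch of the MATLAB mapping from normalized predictions
    back to original-image pixel coordinates (assumptions as above)."""
    p = np.asarray(p, dtype=float)
    # (p + 0.5) * 224: shift from [-0.5, 0.5] to [0, 1], then to 224-pixel units.
    pix = (p + 0.5) * 224.0
    # repmat([offset(2), offset(1)]', [num_points, 1]) tiles (x_off, y_off)
    # once per landmark; subtract it, then undo the resize scale.
    off = np.tile([offset[1], offset[0]], num_points)
    return (pix - off) / scale

# Toy check: zero offset, unit scale -> p = 0 lands at the crop centre (112).
print(get_orig_coordinate([0.0, 0.0], offset=(0, 0), scale=1.0, num_points=1))
# [112. 112.]
```

Under this reading, (p+0.5)*224 is just a denormalization step, so predictions and pseudo labels well below 0.01 would correspond to points near the image centre.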