
How to get predicted target point from result and source point #11

Open
binhmuc opened this issue Jul 1, 2019 · 9 comments

Comments


binhmuc commented Jul 1, 2019

Thanks for your source code!
Could you tell me how to get the target point from the pretrained model and a source point?
I looked at "eval_pf.py", but it looks like you get the source point from the target point...


binhmuc commented Jul 1, 2019

Also, your paper says that "A keypoint is considered to be matched correctly if its predicted location is within a distance of α · max(h, w) of the target keypoint position". So I don't understand why your code compares against source points instead of target points.


ignacio-rocco commented Jul 1, 2019 via email


binhmuc commented Jul 2, 2019

Thanks for your reply :)
So, does that mean I can just swap "source points" and "target points" in the code and get the correct result?
That seems strange to me...
Because in the code you warp the source images -> the target images, and use the resulting theta to perform inverse warping...
So, could you tell me how to get the target point from the source points and the estimated theta?
Thank you!

@ignacio-rocco

Please see the explanations about inverse warping here:

https://www.cs.unc.edu/~lazebnik/research/fall08/lec08_faces.pdf

This should help you understand!
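In case the slides become unavailable: the idea of inverse warping is that, to synthesize the warped image, you iterate over pixels of the *output* grid and apply the estimated transform to find where each one should be sampled from in the *source* image. So the estimated parameters encode the output-to-source mapping, not source-to-output. A minimal sketch of this, assuming a 2x3 affine matrix, nearest-neighbor sampling, and NumPy (this is an illustration, not the repository's actual code):

```python
import numpy as np

def inverse_warp(src, theta):
    """Warp src using inverse warping with a 2x3 affine matrix.

    theta maps OUTPUT pixel coordinates (x, y, 1) to SOURCE pixel
    coordinates: for every output pixel we look up where it comes
    from in the source image (nearest-neighbor sampling).
    """
    h, w = src.shape[:2]
    out = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            xs, ys = theta @ np.array([x, y, 1.0])  # output -> source
            xs, ys = int(round(xs)), int(round(ys))
            if 0 <= xs < w and 0 <= ys < h:       # skip out-of-bounds
                out[y, x] = src[ys, xs]
    return out

# Identity transform leaves the image unchanged.
img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
assert np.array_equal(inverse_warp(img, identity), img)
```

This loop is why the code appears "backwards": iterating over the target grid and mapping into the source avoids holes in the output, which forward warping would produce.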

@lixiaolusunshine

So, do you understand what he means? I'm also confused by this point.


binhmuc commented Sep 17, 2019

@lixiaolusunshine Yes, I understood him. To be clear: the paper says the mapping goes from source points to target points, but in the source code it is exactly the inverse.

@lixiaolusunshine

So in his paper, he estimates the inverse affine parameters with the feature regression layer, and then uses this inverse mapping to warp the source image into the target image?


binhmuc commented Sep 18, 2019

@lixiaolusunshine Sorry, I don't follow. The paper is very clear: use the GMM to estimate a list of parameters, then parameters => warp => loss.
The only difference is in how he compares the results. He compares against the target points, but in the code we never compute target points from the parameters; we compute source points instead.
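Since the estimated theta encodes the target-to-source mapping (the inverse-warping convention discussed above), the predicted target point for a given source keypoint can be recovered by inverting the transform. A hedged sketch for the affine case only, assuming theta is a 2x3 matrix mapping target coordinates to source coordinates (an illustration of the idea, not code from this repository):

```python
import numpy as np

def predict_target_point(src_pt, theta):
    """Recover the predicted target point for a source keypoint.

    theta is a 2x3 affine matrix mapping TARGET coords to SOURCE
    coords (inverse-warping convention); inverting it gives the
    source-to-target direction the paper's evaluation describes.
    """
    A = np.vstack([theta, [0.0, 0.0, 1.0]])  # lift to 3x3 homogeneous
    A_inv = np.linalg.inv(A)                 # now maps source -> target
    x, y, _ = A_inv @ np.array([src_pt[0], src_pt[1], 1.0])
    return np.array([x, y])

# Example: this theta translates target coords by (+5, -2) to reach
# source coords, so a source point maps back by (-5, +2).
theta = np.array([[1.0, 0.0, 5.0],
                  [0.0, 1.0, -2.0]])
pt = predict_target_point(np.array([10.0, 10.0]), theta)
assert np.allclose(pt, [5.0, 12.0])
```

For non-invertible warps such as thin-plate splines there is no closed-form inverse, which may be why the evaluation code measures error in the source frame instead.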


tkrtr commented Jan 26, 2024

@binhmuc
Thanks for raising this issue.

The link explaining the inverse warping method is broken.
If you know how to apply the inverse mapping to points, I would like to know.
The owner of this repository does not appear to be replying at this time.

