How do I try on a custom test dataset? #4

Closed · jinwonkim93 opened this issue Mar 8, 2023 · 17 comments
@jinwonkim93

Thanks for the great work! It works perfectly. The model seems to need some additional inputs (ldmk, theta); how can I generate them for a custom test dataset?

@ForeverFancy
Collaborator

Thank you for your kind words. I'm glad to hear that the work meets your expectations and works perfectly. Regarding your question about the ldmks for a custom test dataset: the ldmks we used are predicted by a pre-trained face tracker from https://microsoft.github.io/DenseLandmarks/, which is maintained by another group in MSR and is not publicly available at the moment.
Therefore, an alternative way to run our model on a custom dataset is to use public sparse ldmks. You can use any face landmark detector and connect the predicted ldmks using colored lines, as in https://arxiv.org/abs/2011.04439. We will consider releasing a publicly available version if possible. I apologize for the inconvenience and hope you understand.
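
In case it helps, here is a minimal sketch of that idea: it assumes you already have a standard 68-point (iBUG-style) landmark array from any public detector and simply draws each facial region as a colored polyline with OpenCV. The grouping, colors, and canvas size below are my own illustrative choices, not the authors' exact rendering.

```python
import numpy as np
import cv2

# 68-point groups in the standard iBUG annotation; each group is drawn as
# one polyline in its own color, roughly in the spirit of
# https://arxiv.org/abs/2011.04439.
GROUPS = {
    "jaw":        (list(range(0, 17)),  (255, 0, 0)),
    "right_brow": (list(range(17, 22)), (0, 255, 0)),
    "left_brow":  (list(range(22, 27)), (0, 0, 255)),
    "nose":       (list(range(27, 36)), (255, 255, 0)),
    "right_eye":  (list(range(36, 42)), (255, 0, 255)),
    "left_eye":   (list(range(42, 48)), (0, 255, 255)),
    "outer_lip":  (list(range(48, 60)), (128, 255, 0)),
    "inner_lip":  (list(range(60, 68)), (0, 128, 255)),
}

def draw_landmark_image(ldmks, size=256):
    """Render a (68, 2) landmark array as colored connected lines."""
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    for idx, color in GROUPS.values():
        pts = ldmks[idx].round().astype(np.int32).reshape(-1, 1, 2)
        cv2.polylines(canvas, [pts], isClosed=False, color=color, thickness=2)
    return canvas
```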

@jinwonkim93
Author

Thank you!

@m-pektas

Thanks for the great work, @ForeverFancy!! You explained "ldmks" in your comment above, but what about thetas? How can I obtain them?

@ForeverFancy
Collaborator

It's actually the transformation matrix used to align the face to the center. For example, you could refer to this blog; the difference is that we use 5 keypoints instead of the 2 used in the blog to align the face.
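
For anyone looking for a starting point, below is a minimal sketch of how such an alignment matrix can be estimated from 5 keypoints (eyes, nose tip, mouth corners) with a similarity transform. The canonical template values are the ones commonly used for 112x112 ArcFace-style alignment, rescaled to the output size; the authors' actual template and resolution may differ, so treat this as an assumption.

```python
import numpy as np
from skimage.transform import SimilarityTransform

# Canonical 5-point template (left eye, right eye, nose tip, mouth corners)
# commonly used for 112x112 ArcFace-style alignment. Placeholder values:
# the authors may use a different template/resolution.
TEMPLATE_112 = np.array([
    [38.2946, 51.6963],
    [73.5318, 51.5014],
    [56.0252, 71.7366],
    [41.5493, 92.3655],
    [70.7299, 92.2041],
], dtype=np.float32)

def estimate_theta(kps5, out_size=256):
    """Estimate a 2x3 similarity transform that maps 5 detected keypoints
    onto a centered canonical template of side `out_size`."""
    dst = TEMPLATE_112 * (out_size / 112.0)   # rescale template to output size
    tform = SimilarityTransform()
    tform.estimate(np.asarray(kps5, dtype=np.float32), dst)
    return tform.params[:2, :]                # theta as a 2x3 matrix

# Usage sketch: theta = estimate_theta(five_keypoints)
# aligned = cv2.warpAffine(img, theta, (256, 256))
```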

@psurya1994

@ForeverFancy Can you describe in more detail what you mean by "connect the predicted ldmks using color lines"? I wasn't able to find anything related to "color lines" in the paper.

It seems to me like you're suggesting we train the landmark transformer from the paper; did I get that right?

@Thraick

Thraick commented Apr 26, 2023

I followed the instructions and generated the imgs, ldmks, and thetas. What is src_0_id.npy, and how can I generate it?

@ForeverFancy
Collaborator

I followed the instructions and generated the imgs, ldmks, and thetas. What is src_0_id.npy, and how can I generate it?

Hi, you could refer to #10 for more detail.

@ForeverFancy
Collaborator

@ForeverFancy Can you describe in more detail what you mean by "connect the predicted ldmks using color lines"? I wasn't able to find anything related to "color lines" in the paper.

It seems to me like you're suggesting we train the landmark transformer from the paper; did I get that right?

Hi, the code for connecting ldmks with color lines is here.

@qiuyuzhao

In dataset.py, src_ldmk_norm.shape is (58, 2). Are these the facial keypoints? In face_alignment the facial keypoints have shape (68, 2). Does this difference affect the results?

@liliya-imasheva

liliya-imasheva commented Dec 6, 2023

Did anyone manage to try it on a custom dataset (custom source image + custom driving video)? I have created the landmarks (ldmk), transformation matrices (theta), facial embeddings (id), and connectivity.tsv, and I've tried it in different ways, but it didn't produce results anywhere close to the ones published in the paper. The landmarks from the source suggested by @ForeverFancy are not as dense as the ones used in the paper, and I haven't found any open-source tool that produces similar landmarks. If someone knows how to make it work, please let me know; I would appreciate further directions on where to look for a solution.

@Hujiazeng

@liliya-imasheva Did your solution solve the problem?

@Hujiazeng

I followed the instructions and generated the imgs, ldmks, and thetas. What is src_0_id.npy, and how can I generate it?

Hi, is the project working for you? Could you share your pipeline?

@liliya-imasheva

@liliya-imasheva Did your solution solve the problem?

No, I also tried denser landmarks, very similar to what they have in the paper, but it didn't help either.
I ended up using another model, the Thin Plate Spline Motion Model (https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model?tab=readme-ov-file), which gave rather good results.

@Hujiazeng

@liliya-imasheva Did your solution solve the problem?

No, I also tried denser landmarks, very similar to what they have in the paper, but it didn't help either. I ended up using another model, the Thin Plate Spline Motion Model (https://github.com/yoyo-nb/Thin-Plate-Spline-Motion-Model?tab=readme-ov-file), which gave rather good results.

Thank you! Did you use a pre-trained face landmark detector instead of the Keypoint Detector in that work?

@liliya-imasheva

@Hujiazeng, for that one the Keypoint Detector was working well, so I didn't try any other landmarks.

@alasokolova

Hi, @liliya-imasheva
Could you please explain how you computed theta?

@liliya-imasheva

@alasokolova Honestly, I don't remember and can't find it right now, but I followed information given here in the issues, something like #4 (comment), and I think there is more in other issues. But as I said, I wasn't able to get acceptable results with a custom dataset for this model, so the way I computed it may not actually be the best :D
