
Question about the cross-domain video discriminator #1

Closed
Mofafa opened this issue Jun 29, 2021 · 1 comment
Mofafa commented Jun 29, 2021

Hi, thanks for your great work!

I have a question about the cross-domain video discriminator.

According to your paper, you can learn to synthesize video content from one dataset A (such as Anime-Face) while learning the motion from another dataset B (such as VoxCeleb). In this mode, I think the video discriminator will first learn to classify anime content versus real-person content, rather than to distinguish meaningful motions. How do you ensure that the video discriminator is helpful during training in this mode?


bluer555 commented Aug 2, 2021

Hi Mofafa,

We also noticed this problem. The discriminator can easily reject a synthesized video based on its content rather than its motion. We tried to solve this with some motion-sensitive designs (e.g. a discriminator with optical flow as input), but the results were not good. So the motion in the cross-domain setting is not as good as in the in-domain case; I think it's an interesting direction for future work.
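For readers following along: one cheap way to make a discriminator's input motion-sensitive, short of a full optical-flow network, is to feed it temporal frame differences instead of raw frames, so static appearance cancels out. This is only a minimal NumPy sketch of that idea (the `motion_input` helper is hypothetical, not the code used in the paper):

```python
import numpy as np

def motion_input(video):
    """Frame-to-frame differences as a crude motion signal.

    video: float array of shape (T, H, W, C).
    Returns an array of shape (T-1, H, W, C) in which static
    content cancels out, so a discriminator fed this input is
    pushed to judge motion rather than appearance.
    """
    return video[1:] - video[:-1]

# Sanity check: a perfectly static video yields an all-zero
# motion input, regardless of what the single frame looks like.
T, H, W, C = 8, 16, 16, 3
static = np.tile(np.random.rand(1, H, W, C), (T, 1, 1, 1))
print(np.abs(motion_input(static)).max())  # 0.0
```

As the thread notes, even designs along these lines did not work well for them in practice, since the appearance gap between the two domains still leaks through.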
