Can we train the model on a single video and can we add our own driving video #9

Open
vinay345 opened this issue May 1, 2021 · 12 comments


@vinay345 commented May 1, 2021

No description provided.

@AliaksandrSiarohin (Collaborator)

Why do you need to train on a single video?

@ExponentialML

Yes, but the results won't be satisfactory. I'm guessing you're looking to do some kind of quick, supervised motion transfer and get around training the model, since training takes a long time (I don't blame you; it's resource-heavy). It's best to create a dataset containing the specific thing you want, train on it, and infer on that. In a scenario like this, you could create a synthetic dataset (for example, 3D animations of what you want, where you can control the scene) and then train on that.

@AliaksandrSiarohin I think the reason is that it can be hard to build in-the-wild datasets for something specific. For example, if you wanted to train a model on people doing backflips, it would be hard to find sufficient data because of the varying camera angles. Datasets like Taichi are easier because of their stationary cameras.

To answer the second question, yes.
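
As a rough sketch of what such a custom dataset could look like: repos in this family (e.g. first-order-model) typically expect train/test folders of short video clips, pointed to by the config's dataset parameters. The data/my-dataset path and layout below are assumptions to illustrate the idea, not this repo's documented interface; verify against its data loading code.

mkdir -p data/my-dataset/train data/my-dataset/test
# data/my-dataset/train -> many short clips of the target motion, square-cropped around the subject (hypothetical path)
# data/my-dataset/test  -> a few held-out clips for evaluation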

@vinay345 (Author) commented May 2, 2021

When I add my own driving video and source image, it doesn't work. What could be the reason?
Second, can the driving video be anything other than the dataset videos that you used?

@AliaksandrSiarohin (Collaborator)

I can't say; show an example.

@Adorablepet

When I use my own driving video and source image, the face in the generated video is different from the face in the source image. What is causing this? The quality of the generated video is poor. Thanks.

@AliaksandrSiarohin (Collaborator)

Send an example.

@Adorablepet

@AliaksandrSiarohin
test_demo.zip
I ran the command as follows:

python demo.py --config config/ted384.yaml --driving_video driving_video.mp4 --source_image source_image.png --checkpoint checkpoints/ted384.pth

Thanks.

@AliaksandrSiarohin (Collaborator)

Your video is not cropped. You should crop it so that it is square around the person. Also, you can try the ted-youtube config and checkpoint.

@Adorablepet

> Your video is not cropped. You should crop it so that it is square around the person. Also, you can try the ted-youtube config and checkpoint.

I ran this command with ffmpeg:

ffmpeg -i driving_video.mp4 -vf scale=384:384,setdar=1:1 driving_video_crop.mp4

But result.mp4 is still not good.
test_demo_crop.zip

Is there a problem with my cropping method? Could you share reference code? Thanks again.
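
One likely issue here: scale=384:384 resizes the entire frame, distorting its aspect ratio, rather than cutting a square region around the person. A minimal sketch using ffmpeg's crop filter (crop=out_w:out_h:x:y) followed by a resize; the 720x720 region and the 300:40 offset are hypothetical values that must be adjusted per video so the person fills the square:

# crop a square region around the subject first (hypothetical coordinates), then resize to the model's input
ffmpeg -i driving_video.mp4 -vf "crop=720:720:300:40,scale=384:384" driving_video_crop.mp4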

@zhaoloulou

Thanks for your great work! I'm running into the same thing. Is there something wrong with my data?
Desktop.zip

@zhaoloulou

Hello, I would like to ask whether the model you provided is your best model, and whether the training time and data were sufficient.

@AliaksandrSiarohin (Collaborator)

Image and video crops should include the upper part of the legs; see the examples in the Readme.
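
A quick way to sanity-check a crop against the Readme examples is to dump a single frame of the cropped video and confirm the framing includes the upper legs; check_frame.png is just an arbitrary output name:

# extract one frame from the cropped video for visual inspection
ffmpeg -i driving_video_crop.mp4 -vframes 1 check_frame.png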
