
Code not working #39

Open
samyak0210 opened this issue Feb 27, 2021 · 12 comments

Comments

@samyak0210

Hello,

I was using your code on a video, but it gives an error while running the demo_syncnet.py file. It runs fine for example.avi but not for my video. Can you help me?

[screenshot of the error]

@hrzisme

hrzisme commented Feb 28, 2021

Make sure the lengths of your video and audio are the same.
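
For example (paths are placeholders, and this check is only a suggestion, not part of the original comment), you can compare the video and audio stream durations with ffprobe:

# print the video stream duration in seconds
ffprobe -v error -select_streams v:0 -show_entries stream=duration -of csv=p=0 /path/to/video.mp4
# print the audio stream duration in seconds
ffprobe -v error -select_streams a:0 -show_entries stream=duration -of csv=p=0 /path/to/video.mp4

If the two numbers differ, trim the streams to the same length before running the code.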

@samyak0210
Author

Hey,
Thank you for pointing that out. I downloaded the video from YouTube, so I didn't expect this error.
I tried to fix it using the DaVinci video editing tool, but since the difference in lengths is very small, it isn't able to trim to that precision. Can you suggest a better tool? Or, if I send you the video, could you give me the detected active speakers?

@hrzisme

hrzisme commented Mar 13, 2021

You can solve your problem by using ffmpeg.
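
For example (a minimal sketch, not from the original comment; paths are placeholders), ffmpeg's -shortest option trims the output to the shorter of the two streams so the audio and video lengths match:

ffmpeg -i /path/to/video.mp4 -c:v copy -c:a copy -shortest /path/to/video_trimmed.mp4

With stream copy the cut may only be approximate; dropping the copy options and letting ffmpeg re-encode gives a more precise cut.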

@hanbaobao950123

I solved the problem by resizing the frames of the video to 224x224.
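
For example (a sketch only; the exact command wasn't given in the comment, and paths are placeholders), the whole video can be rescaled with ffmpeg's scale filter before running the demo:

ffmpeg -i /path/to/video.mp4 -vf scale=224:224 -c:a copy /path/to/video_224.mp4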

@EhsanRusta

Hello, I have the same issue. Would you tell me how you fixed it? @samyak0210

@Momotyust

@EhsanRusta maybe you should resize your video frames to 224x224, just like the example.avi

@hannarud

Actually, demo_syncnet.py has a pretty restricted usage. It will only work for videos that are similar in format (i.e. size, number of faces) to example.avi. In order to obtain results for an arbitrary video, you need to pass it through the whole pipeline, as pointed out later in the README:

Full pipeline:

sh download_model.sh
python run_pipeline.py --videofile /path/to/video.mp4 --reference name_of_video --data_dir /path/to/output
python run_syncnet.py --videofile /path/to/video.mp4 --reference name_of_video --data_dir /path/to/output
python run_visualise.py --videofile /path/to/video.mp4 --reference name_of_video --data_dir /path/to/output

Here run_pipeline.py will preprocess the video in the proper way (dividing it by scenes, detecting faces, cropping, etc.) so that run_syncnet.py is able to do its job.

@wllps1988315

(quoting @hannarud's full-pipeline explanation above)

And how do you filter the dataset for wav2lip?

@ThetaRgo

(quoting @hannarud's full-pipeline explanation above)

Can you share some methods for preprocessing wav2lip datasets with this project? Thank you.

@MisterCapi

just change

for fname in flist:
    images.append(cv2.imread(fname))

to

for fname in flist:
    # resize each frame to the 224x224 input size the model expects
    images.append(cv2.resize(cv2.imread(fname), (224, 224)))

in SyncNetInstance.py

The model was not meant to work with other shapes.

@guo-king666

(quoting @wllps1988315's reply above, which repeats @hannarud's full-pipeline explanation and asks how to filter the dataset for wav2lip)

Do you know how to filter the dataset yet?

@kashishnaqvi10

(quoting @wllps1988315's reply above, which asks how to filter the dataset for wav2lip)

Hey, were you able to filter the dataset?
