Issue with the output when evaluating with the pretrained model #7
That is weird. The channel-wise mean is subtracted from the frames before they are passed as input and added back to the output. Maybe something went wrong with that. Can you post your output after running the script? Also mention the PyTorch version you have installed.
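The normalization described above can be sketched in a few lines. This is a hedged illustration, not the repository's actual code: the mean values and function names here are assumptions for demonstration only.

```python
import numpy as np

# Assumed per-channel mean (placeholder values, not the ones the model was
# trained with). Subtracted before inference, added back to the output.
mean = np.array([0.429, 0.431, 0.397])

def normalize(frame):
    # frame: float array of shape (H, W, 3), values in [0, 1]
    return frame - mean

def denormalize(output):
    # reverse the shift so output pixels are back in the original range
    return output + mean

frame = np.random.rand(4, 4, 3)
restored = denormalize(normalize(frame))
assert np.allclose(restored, frame)  # round-trip recovers the original frame
```

If the add-back step is skipped (or runs on a tensor with different values than the one that was shifted, as can happen when CPU and GPU paths diverge), the output colors come out visibly wrong.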
My PyTorch version is 1.0.0. This is my output after running the script:
I am getting the same issue when running on CPU. I am working on it.
Removed channel-wise mean subtraction from preprocessing of the input. Output quality might degrade. Problem: some image pixel values are loaded differently on CPU vs GPU.
I have added a temporary fix to the script. You can try running it now.
That fixed it, thanks.
Hi, @avinashpaliwal @InterestingWalrus
Can you post more details, like the cmd/terminal output of the script? Post some example input and output videos as well. Just to clarify: you are getting output where every interpolated frame is a copy of the original frame before it?
This is my output after running the script:
[root@7a382a96-08ca-454d-b42c-d1a554518421 Super-SloMo]# python video_to_slomo.py --video misc/original.gif --sf 2 --checkpoint ./SuperSloMo.ckpt --fps 30 --output videos/out_gif.mp4
The input video is your original gif; this is my output video: out_gif.zip
I am getting correct results on my end. The video you have posted has no interpolated frames; they look like exact copies of the previous frames. I haven't tested the whole thing on Linux (I am using Windows), so I will try that. In the meantime, try running it again with PyTorch 0.4.1 and Python 3.6. Comment out this line in the script and check the hidden folder tmpSuperSloMo for the generated output frames. There will be a resolution mismatch which I am fixing, so ignore that.
@YaoooLiang @avinashpaliwal, if I don't specify the ffmpeg directory, the script doesn't run, so I'm not sure why it's running for you. Also, you seem to be using an old version of ffmpeg. Try updating ffmpeg, maybe?
@InterestingWalrus @avinashpaliwal
Because I had changed the extract_frames and create_video code, it worked for me.
Like I said, I am using Windows. The ffmpeg comes in a zip which you have to extract. That's why you have to provide the path for ffmpeg.exe if it's not in your project folder. I have not tested the project on Linux, so I don't know the problems yet. I will work on that, but it will take some time.
I don't know why you are getting such results; I will have to test the project on Linux. Also, since you said you modified the code, did you also modify the section where the frame is interpolated? Post your modified script so I can run it on my end.
@avinashpaliwal
I was able to get this working just fine on Linux myself. I'm on Peppermint (Ubuntu-based), using PyTorch 0.4.1 with this conda build: I had to uninstall ffmpeg from conda because the included package doesn't include libx264, so I used apt to install it plus the My test video is here and the results can be found here. They seem to line up with the poor-results examples, but you can see it working well on the man's pants, the background environment, and the smaller figures walking in the background. My assumption is that the model needs more data from overlapping biped movement and limb rotation perpendicular to the camera (potentially more non-Caucasian data as well?), but it is otherwise working as expected.
Hi, @avinashpaliwal |
I am currently busy with other tasks and unable to find time to test the script on Linux. Try following Godatplay's approach and see if it works. I will post an update about Linux after some time.
Specifically, I'm on Peppermint 9, which is based on Ubuntu 18.04 LTS, but my guess is that this is an ffmpeg issue. I wanted to maintain the same 25 FPS as the input video, but 4x longer, so I used
I don't think this is an ffmpeg issue. The input frames in |
Right. It is most likely something to do with the PyTorch library on Linux. I also used conda, so maybe installing PyTorch from some other source is the issue.
I used conda too. I've tried PyTorch 0.4.1 and the current version, both on CPU, but the results were the same.
Ah, ok. I guess I was using the GPU on Linux, so it's likely a PyTorch CPU-specific issue.
That CPU result shouldn't look like this after the temporary fix. Are you using the latest commit?
Yes. I was using commit e8508bd |
Were you running it with Python 2 on Linux?
Yes |
I ran into the same problem.
Hi, the fix for the CPU color issues is the following:
diff --git a/video_to_slomo.py b/video_to_slomo.py
If you debug, you'll see that it always falls into the GPU path, because the if statement is not correct.
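As a hedged illustration of the kind of bug described here (the actual diff is not shown above, so this is an assumption, not the repository's fix): selecting the device once from real CUDA availability, and branching on that, avoids a condition that always takes the GPU path.

```python
import torch

# Pick the device once, based on actual CUDA availability, and reuse it
# everywhere. A condition that never evaluates to the CPU branch would
# send CPU-only runs down the GPU code path, as described in the thread.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.zeros(3, device=device)  # tensor is created on the chosen device
print(x.device.type)  # "cuda" on a GPU machine, "cpu" otherwise
```

The CPU-specific mean add-back (or any other per-device preprocessing) can then be gated on `device.type == "cpu"` rather than on a condition that silently never fires.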
The problem with Python 2 was integer division. It was fixed in one of the pull requests.
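The Python 2 pitfall mentioned here fits in a few lines. The timestep computation below is an assumed illustration (variable `sf` for the slow-motion factor is my naming, not necessarily the script's): in Python 2, `/` on two ints floors the result, so every intermediate timestep collapses to 0 and each "interpolated" frame becomes a copy of the original.

```python
# Python 3: `/` is true division; `//` reproduces Python 2's int `/`.
sf = 4  # slow-motion factor: 3 intermediate frames between each pair

timesteps = [i / sf for i in range(1, sf)]        # true division
py2_style = [i // sf for i in range(1, sf)]       # what Python 2's `/` gave

print(timesteps)   # [0.25, 0.5, 0.75] -> distinct interpolation times
print(py2_style)   # [0, 0, 0]         -> every frame interpolated at t=0
```

A timestep of 0 asks the model for the frame at the start of the interval, which is exactly the duplicated-frame symptom reported earlier in this thread.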
Can this be pushed to the repository? It should fix the bug.
Hi, forgive me if this sounds a bit dense; I don't really know anything about machine learning, but I have a passing interest in making slo-mo videos. I'm currently evaluating on a CPU (AMD Threadripper 1920X) as I don't have a CUDA device, but I'm having trouble with the output from the video_to_slomo.py script. As a reference, I used ffmpeg to convert your original gif to mp4 and tried to create a video, but the output looks a bit off. This is what the converted video looks like: Link here. Another video I tried also looks like this. Any ideas why this is?
Thanks