Converting numpy array to video #246
Not sure if this is what you were asking, but here is some code to save frames from memory straight to a video file. If you chop this up a little you could hack it into your initial code and avoid writing the JPGs to disk:

```python
import ffmpeg
import numpy as np

def vidwrite(fn, images, framerate=60, vcodec='libx264'):
    if not isinstance(images, np.ndarray):
        images = np.asarray(images)
    n, height, width, channels = images.shape
    process = (
        ffmpeg
        .input('pipe:', format='rawvideo', pix_fmt='rgb24',
               s='{}x{}'.format(width, height))
        .output(fn, pix_fmt='yuv420p', vcodec=vcodec, r=framerate)
        .overwrite_output()
        .run_async(pipe_stdin=True)
    )
    for frame in images:
        process.stdin.write(
            frame
            .astype(np.uint8)
            .tobytes()
        )
    process.stdin.close()
    process.wait()
```

Edit 2020-01-28: My working version of this function is backed by a small class, implemented in my python-utils/ffmpeg.py
@kylemcdonald Thank you, it worked. How can I alter
Is @kylemcdonald's code example still the preferred way to stream frames from in-memory numpy arrays to an
When I try to run @kylemcdonald's function on an image array of 228×2048×2048×3 np.uint8, only 65 frames are saved, and it looks like a bunch of them are skipped. Am I missing something here?
@jblugagne I encountered a related problem: I was getting duplicated frames in my stream. I had to pass the framerate (`r`) option to the input as well, not just the output. The `r` option apparently means different things as an input option versus an output option.

Since our "input" is a stream of raw video frames over a pipe, it should not contain any timestamps at all, so it makes sense that we would need some mechanism for specifying timestamps, like the "input" option. I don't fully understand the behavior of the "output" option. If our input stream has no timestamps, how did it decide to drop frames for you, but duplicate them for me? Are the timestamps generated implicitly from the wall-clock time when the frames arrive over the pipe? Regardless, dropping and duplicating frames are both bad for this application.
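To make the input-vs-output distinction concrete, here is a hand-written sketch of the two command lines (general ffmpeg knowledge, not output captured from this thread): options placed before `-i` apply to the input, options placed after it apply to the output. Passing `r=framerate` to `.input()` in ffmpeg-python corresponds to the first form.

```python
width, height, framerate = 640, 480, 30

# -r BEFORE -i: an input option. Tells ffmpeg the rate at which the raw
# frames arrive, so it assigns timestamps instead of guessing.
input_side = ['ffmpeg',
              '-f', 'rawvideo', '-pix_fmt', 'rgb24',
              '-s', f'{width}x{height}', '-r', str(framerate),
              '-i', 'pipe:',
              'out.mp4']

# -r AFTER -i: an output option. Resamples (drops or duplicates frames)
# to hit the requested output rate.
output_side = ['ffmpeg',
               '-f', 'rawvideo', '-pix_fmt', 'rgb24',
               '-s', f'{width}x{height}',
               '-i', 'pipe:',
               '-r', str(framerate),
               'out.mp4']

print(input_side.index('-r') < input_side.index('-i'))    # True
print(output_side.index('-r') > output_side.index('-i'))  # True
```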
@jpreiss thank you! That solved my problem. Not sure what is going on with the `r` output option either.
Is there a way to do this where you pass in a numpy array (audio in this case) and get a numpy array in return?
When trying to run @kylemcdonald's function written above with the frame-rate modification given by @jpreiss, I am running into a Broken Pipe error. The input is a 15000×241×369×3 np.uint8 array. The error is as follows:

It seems that this error is raised while trying to write the 2nd frame. Did anyone encounter a similar issue or know of a fix? Thank you in advance.
@jaehobang Were you able to figure out this problem? Because I am having the same problem with a
`vidwrite('test', ...)` will produce a broken pipe error, but `vidwrite('test.mp4', frames)` will be fine.
@kylemcdonald Thank you so much. I was able to implement your provided lines of code in my use case. However, I need help with one part: is there a way to get the separate H.264-encoded frames instead of one .h264 file? This is what I am doing exactly:

```python
def start_streaming(self, channel_name):
```
@jpreiss
@jpreiss @kylemcdonald I am facing similar issues because the original input video (from which the frames need to be extracted) has a variable FPS. The frames are extracted using OpenCV's
@ayushjn20 Sorry, I have no idea how to work with variable frame rates.
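For future readers, one general-knowledge workaround (not something confirmed by anyone in this thread): let ffmpeg resample the variable-rate source to a constant rate with its `fps` filter before extracting frames, so frame indices map linearly to timestamps. A hand-written command-line sketch, where `input.mp4` and the rate are illustrative:

```python
# The fps filter sits between input and output: every downstream frame
# then lands on a uniform 30 fps grid (frames are duplicated or dropped
# as needed to fill it).
rate = 30
cmd = ['ffmpeg', '-i', 'input.mp4',
       '-vf', f'fps={rate}',
       'constant_rate.mp4']
print(' '.join(cmd))
```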
```python
import io

import ffmpeg
import numpy as np

def vidwrite(fn, images, framerate=60, vcodec='libx264'):
    if not isinstance(images, np.ndarray):
        images = np.asarray(images)
    _, height, width, channels = images.shape
    process = (
        ffmpeg
        .input('pipe:', format='rawvideo', pix_fmt='rgb24',
               s='{}x{}'.format(width, height), r=framerate)
        .output(fn, pix_fmt='yuv420p', vcodec=vcodec, r=framerate)
        .overwrite_output()
        .run_async(pipe_stdin=True, pipe_stderr=True)
    )
    for frame in images:
        try:
            process.stdin.write(frame.astype(np.uint8).tobytes())
        except BrokenPipeError:
            # ffmpeg died: print everything on its stderr so the real
            # error becomes visible, then bail out
            for line in io.TextIOWrapper(process.stderr, encoding='utf-8'):
                print(line, end='')
            process.stdin.close()
            process.wait()
            return  # can't write any more, so end the loop and the function
    process.stdin.close()
    process.wait()
```

In my case it was just `Unknown encoder 'libx264'`, because I hadn't installed that library.
@kylemcdonald Do you know how to achieve this with RGBA? My numpy array shape is (m, n, 4), with the 4th value being the opacity between 0 and 1. I want to overlay my video on a map, so I need some parts to be transparent.
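One possible approach (general knowledge, not confirmed in the thread): scale the float alpha into the 0-255 range and feed rawvideo with `pix_fmt='rgba'`. Note that `yuv420p`/H.264 output discards alpha, so keeping transparency requires a codec with alpha support (e.g. `qtrle`, or VP9 with `yuva420p`). A numpy-only sketch of the per-frame conversion, where `to_rgba_bytes` is a hypothetical helper:

```python
import numpy as np

def to_rgba_bytes(frame):
    """frame: (h, w, 4) float array, RGB in 0-255 and alpha in 0-1.

    Returns raw bytes suitable for an ffmpeg rawvideo pipe with
    pix_fmt='rgba'.
    """
    rgba = frame.copy()
    rgba[..., 3] *= 255.0  # scale alpha from 0-1 to 0-255
    return np.clip(rgba, 0, 255).astype(np.uint8).tobytes()

# half-transparent black 2x2 test frame
frame = np.zeros((2, 2, 4), dtype=np.float64)
frame[..., 3] = 0.5
print(len(to_rgba_bytes(frame)))  # 2*2*4 = 16 bytes
```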
Is it possible to implement CUDA support in this python wrapper? I have made a small repository where I write a constant random frame to video using ffmpeg with CUDA support, but I am not getting the performance I expected. Maybe my ffmpeg flags are not correct? Any help would be very appreciated :) PS: I built ffmpeg with CUDA support enabled.
@jaehobang Have you figured it out? I have the same problem...
I'm following as well because I have a similar issue. My input buffer comes from a stream from YouTube. In my case, the conversion works out well, but I still get the BrokenPipe exception at the end. Any idea why this happens?

```python
from io import BytesIO

import ffmpeg
import pytube

buff = BytesIO()
streams = pytube.YouTube('https://www.youtube.com/watch?v=xxxxx').streams
streams.filter(only_audio=True).first().stream_to_buffer(buff)
buff.seek(0)

process = (
    ffmpeg
    .input('pipe:', ss=420, to=430, f='mp4')
    .output('out.wav', ac=1, ar=16000, acodec='pcm_s16le')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)
process.stdin.write(buff.read())  # <-- BrokenPipe here
process.stdin.close()
process.wait()
```
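A likely explanation (general knowledge, not confirmed in the thread): with `ss`/`to` set, ffmpeg closes its stdin as soon as it has read enough of the stream, so writing the remainder of the buffer raises BrokenPipeError even though `out.wav` is already complete; catching the exception is a reasonable remedy. A minimal simulation with an early-exiting consumer, assuming a POSIX system with the `head` utility standing in for ffmpeg:

```python
import subprocess

# `head -c 10` reads 10 bytes and exits, like ffmpeg once -to is reached.
proc = subprocess.Popen(['head', '-c', '10'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.DEVNULL)
caught = False
try:
    # far more data than the consumer will ever read
    proc.stdin.write(b'x' * 1_000_000)
    proc.stdin.close()
except BrokenPipeError:
    caught = True  # expected: consumer exited early, its output is fine
proc.wait()
print(caught, proc.returncode)
```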
I'm using OpenCV for processing a video and saving the processed video.

Example:
- Source file: Full HD, 2 minutes, clip in .avi format with a data rate of 7468 kbps
- Saved file: Full HD, 2 minutes, clip in .avi format with a data rate of 99532 kbps

This is confusing. If I save each frame and give it to the input, I get an error in the `.output` call saying there is no such file. How do I save the video at the same size as the source using ffmpeg-python?