Not able to generate stable video stream with python #32

Harriebo opened this Issue Dec 13, 2012 · 13 comments



I am working on a stream generator for my video mapping set, but I am not able to get the image steady.
I open a v4l2loopback device with python-v4l2 and generate a video stream through it based on PNG images, so I can generate live videos in my VJ set and still video-map them and apply effects.

Test case:
1) load v4l2loopback module
2) run python:

import fcntl, numpy
from v4l2 import *
from PIL import Image
height = 600
width = 634
device = open('/dev/video4', 'wr')
capability = v4l2_capability()
print(fcntl.ioctl(device, VIDIOC_QUERYCAP, capability))
print("v4l2 driver: " + capability.driver)
format = v4l2_format()
format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
format.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB32
format.fmt.pix.width = width
format.fmt.pix.height = height
format.fmt.pix.field = V4L2_FIELD_NONE
format.fmt.pix.bytesperline = format.fmt.pix.width * 4
format.fmt.pix.sizeimage = format.fmt.pix.width * format.fmt.pix.height * 4
format.fmt.pix.colorspace = V4L2_COLORSPACE_SRGB
print(fcntl.ioctl(device, VIDIOC_S_FMT, format))
img ='img/0.png')
img = img.convert('RGBA')
while True:
    # write one RGBA frame per iteration (old PIL used img.tostring())
    device.write(img.tobytes())

3) run Cheese or other v4l2 stream viewer.

The result is a properly colored and sized image, but it jumps from left to right every frame, and drifts a little further to the left each time, so you get a sliding, jumpy video.
What am I doing wrong?

Best regards,


ps: if you would like to see the results, check: So far the LiVES, PureData, Gem video mapping setup is working great with the v4l2 streams.


could you illustrate the problem with some screenshots of a few consecutive frames (or a very short screencast)?
i'm having trouble envisioning your exact problem.

does it work when you use gstreamer to provide the feed? something like:

gst-launch \
  uridecodebin uri=file:///tmp/v4l2/img/0.png \
  ! ffmpegcolorspace \
  ! videoscale \
  ! imagefreeze \
  ! identity error-after=2 \
  ! v4l2sink show-preroll-frame=false device=/dev/video5

(using gstreamer from within python is simple)

PS: i don't know about fakebook


Thank you for the quick response :)

The video loopback devices work fine in all other cases. I cannot use gstreamer because I want to manipulate/draw images in memory and cut them into pieces to send to different video streams, to map them onto different objects. The I/O of writing the files to disk first would cost too much performance.
I will run the test tonight as soon as I am home and send a video of the results and my issue.

PS: each triangle, square or circle is a different video stream / instance running from LiVES through v4l2 to PureData / GEM mapped in a 3D environment, controlled live with MIDI and OSCP.


I have tried your command, but I did not get it to work. I get: WARNING: erroneous pipeline: no element "v4l2sink", even though I do have the gst v4l2 plugin installed.
The result of the code I posted before looks like this:
Properly messed up, as you can see. Any idea why the image gets so unstable?


I did some more tests; it seems to be related to the image size.

If I render the stream at a standard resolution of 640x480, the image at least stays in the same place: but still not in the right place.

If I render the stream at 1024x768, it even stays in the right place, but the image still distorts every so many frames:


So I got it sort of working, but I'm not sure if it's the right way. What I need to do for a stable video stream:
1) don't use custom resolutions, they get messy.
2) send every frame twice. I think this has to do with interlacing / top / bottom fields.
3) for 640x480, shift all pixels 260 places to the left in the array, otherwise the image is not straight. Not for 1024x768 though...
4) play it at a slightly lower frame rate than the program can generate.

After all that it is 99% stable: every 10 seconds or so there is one buggy frame. I think that is because the frame rate the program generates is not 100% stable.
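A plausible explanation for the shifted 640x480 image is stride alignment: the driver may round bytesperline up past width * 4, in which case each row has to be padded out to the driver-reported stride before writing. A minimal numpy sketch of that idea (the function name `pack_with_stride` and the shapes are mine, not from this thread):

```python
import numpy as np

def pack_with_stride(frame, bytesperline):
    # frame: (height, width, 4) uint8 RGB32 pixels.
    # bytesperline: the stride the driver reports back after VIDIOC_S_FMT,
    # which may be larger than width * 4 if the driver aligned it.
    height, width, channels = frame.shape
    row_bytes = width * channels
    if bytesperline < row_bytes:
        raise ValueError("driver stride smaller than row size")
    padded = np.zeros((height, bytesperline), dtype=np.uint8)
    padded[:, :row_bytes] = frame.reshape(height, row_bytes)
    return padded.tobytes()
```

If width * 4 already matches bytesperline the padding is a no-op, which would explain why some resolutions behave and others drift sideways by a fixed amount per row.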


hmm, i have forgotten why, but in Gem i convert to UYVY when writing RGBA images to a v4l2 device.
i guess this is a workaround related to your problem, but it has the drawbacks that you have to convert and there's no more alpha channel (though in practice most video devices create a broken alpha channel anyhow, so most apps probably ignore it).


I will try this tonight, hope it works. I also use the UYVY color space between LiVES and GEM because I had issues with RGB. Hope the conversion does not slow down the generator process too much.

Can you maybe also tell me where I can find more on how the TOP / BOTTOM / INTERLACE fields work?

I made a small try-out this weekend btw, at a private party of friends, with my 9 fps Maya calendar generated live:


@Harriebo I have been trying to do something like this in python and I just discovered a problem in the above python code. When fcntl.ioctl(device, VIDIOC_S_FMT, format) is called, the driver writes back into the v4l2_format structure, changing the values of format.fmt.pix.bytesperline and format.fmt.pix.sizeimage. I am still getting weird offsets, but at least it is stable at arbitrary image sizes.


I've been playing with this some more and making slow progress. For some reason, I need to add 4096 bytes of padding at the start of each frame. There is also some corruption of the frame at the bottom. However, this code seems to work at most resolutions. It displays the Lenna test image in black and white via the v4l2loopback.

#Send image data to v4l2loopback using python
#Remember to do sudo modprobe v4l2loopback first!
#Released under CC0 by Tim Sheerman-Chase, 2013

import fcntl, sys, os
from v4l2 import *
import time
import scipy.misc as misc
import numpy as np

def ConvertToYUYV(sizeimage, bytesperline, im):
    padding = 4096
    buff = np.zeros((sizeimage+padding, ), dtype=np.uint8)
    imgrey = im[:,:,0] * 0.299 + im[:,:,1] * 0.587 + im[:,:,2] * 0.114
    Pb = im[:,:,0] * -0.168736 + im[:,:,1] * -0.331264 + im[:,:,2] * 0.5
    Pr = im[:,:,0] * 0.5 + im[:,:,1] * -0.418688 + im[:,:,2] * -0.081312

    for y in range(imgrey.shape[0]):
        #Set luminance
        cursor = y * bytesperline + padding
        for x in range(imgrey.shape[1]):
            try:
                buff[cursor] = imgrey[y, x]
            except IndexError:
                pass
            cursor += 2

        #Set color information for Cb
        cursor = y * bytesperline + padding
        for x in range(0, imgrey.shape[1], 2):
            try:
                buff[cursor+1] = 0.5 * (Pb[y, x] + Pb[y, x+1]) + 128
            except IndexError:
                pass
            cursor += 4

        #Set color information for Cr
        cursor = y * bytesperline + padding
        for x in range(0, imgrey.shape[1], 2):
            try:
                buff[cursor+3] = 0.5 * (Pr[y, x] + Pr[y, x+1]) + 128
            except IndexError:
                pass
            cursor += 4

    return buff.tostring()

if __name__=="__main__":
    devName = '/dev/video2'
    if len(sys.argv) >= 2:
        devName = sys.argv[1]
    width = 640
    height = 512
    if not os.path.exists(devName):
        print "Warning: device does not exist",devName
    device = open(devName, 'wr')

    capability = v4l2_capability()
    print "get capabilities result", (fcntl.ioctl(device, VIDIOC_QUERYCAP, capability))
    print "capabilities", hex(capability.capabilities)

    fmt = V4L2_PIX_FMT_YUYV
    #fmt = V4L2_PIX_FMT_YVU420

    print("v4l2 driver: " + capability.driver)
    format = v4l2_format()
    format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
    format.fmt.pix.pixelformat = fmt
    format.fmt.pix.width = width
    format.fmt.pix.height = height
    format.fmt.pix.field = V4L2_FIELD_NONE
    format.fmt.pix.bytesperline = width * 2
    format.fmt.pix.sizeimage = width * height * 2
    format.fmt.pix.colorspace = V4L2_COLORSPACE_JPEG

    print "set format result", (fcntl.ioctl(device, VIDIOC_S_FMT, format))
    #Note that format.fmt.pix.sizeimage and format.fmt.pix.bytesperline 
    #may have changed at this point

    #Create image buffer
    im = misc.imread("Lenna.png")
    buff = ConvertToYUYV(format.fmt.pix.sizeimage, format.fmt.pix.bytesperline, im)

    while True:
        device.write(buff)
        time.sleep(1.0/30)  # pace the frames; the loopback device has no clock of its own

UPDATE: code changed to fix most of the remaining problems. Is the use of 4096 bytes of padding correct?


Unbuffered I/O did the trick for me:


while True:
    os.write(device.fileno(), buff)

Or (this might be even clearer):

device = open(devName, 'wrb', 0)   # 0 for unbuffered
# --snip---
while True:
    device.write(buff)
You can compare the system calls with strace: with buffered I/O, each write is split into a small 4 kB chunk plus a big chunk.


assuming this fixes it

@umlaeute umlaeute closed this Apr 27, 2014

Thank you, TimSC! I've been stuck on this for a few weeks. Here is my implementation of ConvertToYUYV(), using OpenCV and numpy, to convert a BGR image to YUYV. It works much faster, but I'm not sure about the image size.

def ConvertToYUYV(image):
    imsize = image.shape[0] * image.shape[1] * 2
    buff = np.zeros((imsize), dtype=np.uint8)

    img = cv2.cvtColor(image, cv2.COLOR_BGR2YUV).ravel()

    Ys = np.arange(0, img.shape[0], 3)
    Vs = np.arange(1, img.shape[0], 6)
    Us = np.arange(2, img.shape[0], 6)

    BYs = np.arange(0, buff.shape[0], 2)
    BUs = np.arange(1, buff.shape[0], 4)
    BVs = np.arange(3, buff.shape[0], 4)

    buff[BYs] = img[Ys]
    buff[BUs] = img[Us]
    buff[BVs] = img[Vs]

    return buff
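For comparison, the same packing can also be done with plain numpy slicing and the BT.601 coefficients from TimSC's loop-based version, without OpenCV. This is my sketch (the function name `bgr_to_yuyv` is mine; like the loop version, it averages each horizontal chroma pair):

```python
import numpy as np

def bgr_to_yuyv(image):
    # Pack an (h, w, 3) uint8 BGR image into YUYV (YUY2) bytes.
    # Width must be even, because YUYV stores chroma once per pixel pair.
    b = image[:, :, 0].astype(np.float32)
    g = image[:, :, 1].astype(np.float32)
    r = image[:, :, 2].astype(np.float32)
    # BT.601 coefficients, as in the loop-based version above
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    v = 0.5 * r - 0.418688 * g - 0.081312 * b + 128
    h, w = y.shape
    yuyv = np.empty((h, w, 2), dtype=np.uint8)
    yuyv[:, :, 0] = np.clip(np.round(y), 0, 255)
    # average each horizontal chroma pair, as TimSC's code does
    yuyv[:, 0::2, 1] = np.clip(np.round((u[:, 0::2] + u[:, 1::2]) / 2), 0, 255)
    yuyv[:, 1::2, 1] = np.clip(np.round((v[:, 0::2] + v[:, 1::2]) / 2), 0, 255)
    return yuyv.tobytes()
```

Being fully vectorized, this avoids both the per-pixel Python loop and the cv2 dependency.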

The solution of @PStanS did the trick.

Instead of specifying 'wr' for the file mode, I had to use 'w' only; otherwise "IOError: [Errno 9] Bad file descriptor" would be raised when calling device.write later.

If you write frames slowly, you can see that the first frame stays in the right position and all subsequent frames have a fixed offset, so it must be some kind of buffering problem, as PStanS pointed out. Although it is strange that all subsequent frames stay at the same (offset) position.
