example of custom PiVideoEncoder fails #128

eggnot opened this issue Jul 21, 2014 · 5 comments

@eggnot commented Jul 21, 2014

The example of creating a custom PiVideoEncoder from the docs fails with this traceback:

Traceback (most recent call last):
  File "", line 41, in <module>
  File "/usr/lib/python2.7/dist-packages/picamera/", line 976, in start_recording
   encoder.start(output, options.get('motion_output'))
TypeError: start() takes exactly 2 arguments (3 given)
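For what it's worth, this TypeError is the usual Python signature mismatch: the caller passes motion_output as well as output, but the override only accepts output. A minimal reproduction (names are made up for illustration; nothing here is picamera-specific):

```python
# Minimal reproduction of the signature mismatch (illustrative names,
# nothing picamera-specific)
class Base(object):
    def start(self, output, motion_output=None):
        pass

class Sub(Base):
    # Bug: the override doesn't accept motion_output, so calling it
    # with two arguments raises TypeError
    def start(self, output):
        super(Sub, self).start(output)

try:
    Sub().start('out.h264', 'motion.dat')
except TypeError as e:
    # Python 2 phrases this "start() takes exactly 2 arguments (3 given)";
    # Python 3's wording differs slightly
    print(e)
```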

@eggnot commented Jul 21, 2014

I guess the correct code is:

import picamera
import picamera.mmal as mmal

# Override PiVideoEncoder to keep track of the number of each type of frame
class MyEncoder(picamera.PiVideoEncoder):
    def start(self, output, motion_output=None):
        self.parent.i_frames = 0
        self.parent.p_frames = 0
        super(MyEncoder, self).start(output, motion_output)

    def _callback_write(self, buf):
        # Only count when buffer indicates it's the end of a frame, and
        # it's not an SPS/PPS header (..._CONFIG)
        if (
                (buf[0].flags & mmal.MMAL_BUFFER_HEADER_FLAG_FRAME_END) and
                not (buf[0].flags & mmal.MMAL_BUFFER_HEADER_FLAG_CONFIG)
            ):
            if buf[0].flags & mmal.MMAL_BUFFER_HEADER_FLAG_KEYFRAME:
                self.parent.i_frames += 1
            else:
                self.parent.p_frames += 1
        # Remember to return the result of the parent method!
        return super(MyEncoder, self)._callback_write(buf)

# Override PiCamera to use our custom encoder for video recording
class MyCamera(picamera.PiCamera):
    def __init__(self):
        super(MyCamera, self).__init__()
        self.i_frames = 0
        self.p_frames = 0

    def _get_video_encoder(
            self, camera_port, output_port, format, resize, **options):
        return MyEncoder(
                self, camera_port, output_port, format, resize, **options)

with MyCamera() as camera:
    # Record for a short while so there are frames to count
    # (the filename is just a placeholder)
    camera.start_recording('video.h264')
    camera.wait_recording(10)
    camera.stop_recording()
    print('Recording contains %d I-frames and %d P-frames' % (
            camera.i_frames, camera.p_frames))
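The frame-counting logic in _callback_write is just bitwise tests on the buffer header flags. A self-contained sketch of that logic (the flag values below are illustrative stand-ins, not the real mmal constants):

```python
# Illustrative stand-ins for the mmal flag constants (arbitrary bit
# positions, NOT the real MMAL values)
FLAG_FRAME_END = 1 << 2
FLAG_KEYFRAME = 1 << 3
FLAG_CONFIG = 1 << 5

def count_frames(flag_sequence):
    """Count I- and P-frames from a sequence of buffer flag words."""
    i_frames = p_frames = 0
    for flags in flag_sequence:
        # Only count buffers that end a frame and aren't SPS/PPS headers
        if (flags & FLAG_FRAME_END) and not (flags & FLAG_CONFIG):
            if flags & FLAG_KEYFRAME:
                i_frames += 1
            else:
                p_frames += 1
    return i_frames, p_frames

# One config header, one keyframe, two ordinary frames
print(count_frames([
    FLAG_CONFIG | FLAG_FRAME_END,
    FLAG_FRAME_END | FLAG_KEYFRAME,
    FLAG_FRAME_END,
    FLAG_FRAME_END,
]))  # → (1, 2)
```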

@waveform80 commented Jul 23, 2014

Argh - you're absolutely correct, both about the bug and the fix. I'll get that into the docs for 1.7 (it'll appear under "latest" until then).

@waveform80 added the bug label Jul 23, 2014
@waveform80 added this to the 1.7 milestone Jul 23, 2014
@waveform80 self-assigned this Jul 23, 2014

@waveform80 commented Aug 2, 2014

Fixed in 2ab001d

@waveform80 closed this Aug 2, 2014

@eggnot commented Aug 4, 2014

Great! Can you provide an example of using the splitter with a custom encoder? The idea is to have one H.264 HD stream for recording and another YUV/RGB stream in low resolution for analysis.


@waveform80 commented Aug 4, 2014

If you're performing analysis on the YUV data, there's not much point in using a custom encoder implementation; a custom output implementation would be much simpler and give you almost all the same benefits. As a rough rule of thumb, a custom encoder is only really useful with the H.264 format as that's the only one that includes extra info via the buffer header flags that are passed to the encoder. For MJPEG and all unencoded formats, the buffer header flags are all quite boring (especially in the case of unencoded formats where they just tell you that every callback is a frame-end and all frames are the same size). I should probably add all this to the docs in the custom encoder section...
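If it helps to see the idea outside picamera: a custom output is just a file-like object whose write() receives one unencoded frame per call. A rough sketch of that pattern (the class name, frame geometry, and grayscale interpretation are all illustrative assumptions, not picamera API):

```python
import numpy as np

class FrameAnalyser(object):
    """File-like output: each write() is assumed to carry one complete
    unencoded frame, which is converted to an array and analysed."""
    def __init__(self, width, height):
        self.shape = (height, width)
        self.maxima = []

    def writable(self):
        return True

    def write(self, b):
        # Interpret the bytes as a single 8-bit grayscale frame
        frame = np.frombuffer(b, dtype=np.uint8).reshape(self.shape)
        self.analyse(frame)
        return len(b)

    def analyse(self, frame):
        # Example analysis: record the maximum luminance per frame
        self.maxima.append(int(frame.max()))

out = FrameAnalyser(4, 2)
out.write(bytes(bytearray(range(8))))  # one synthetic 4x2 frame
print(out.maxima)  # → [7]
```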

Anyway, here's a quick example of recording a high-res H.264 stream, and a resized low-res YUV stream which is analyzed via a custom output (obviously this won't work until I release 1.7 as there's no support for YUV video output in 1.6). The custom output uses a bit of a dirty hack on PiYUVArray to make things easier:

import picamera
import picamera.array
import numpy as np

class AnalyseOutput(picamera.array.PiYUVArray):
    def write(self, b):
        result = super(AnalyseOutput, self).write(b)
        # Each write() delivers one complete YUV frame; convert the
        # buffered bytes to an array, analyse it, then reset the
        # buffer ready for the next frame
        self.flush()
        self.analyse(self.array)
        self.seek(0)
        self.truncate()
        return result

    def flush(self):
        # Ignore flush when the buffer is empty (as it will be when
        # the output is closed)
        if self.getvalue():
            super(AnalyseOutput, self).flush()

    def analyse(self, a):
        # Do something with the numpy array here for analysis.
        # As an example, we'll calculate the maximum luminance
        # value:
        print(a[..., 0].max())

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    # Start the high-res recording (the filename is just a placeholder)
    camera.start_recording('highres.h264')
    # Start the low-res recording to the custom output
    camera.start_recording(
            AnalyseOutput(camera, size=(320, 180)),
            'yuv', resize=(320, 180), splitter_port=2)
    camera.wait_recording(30)
    camera.stop_recording(splitter_port=2)
    camera.stop_recording()

Hmm ... having written that I should think about adding equivalents to PiYUVArray and PiRGBArray to picamera.array for analysis ... perhaps PiYUVAnalysis or something ...
