The array classes (PiMotionArray, PiRGBArray, ...) use camera.resolution to determine width and height, but they should use the video stream's resolution when camera.start_recording() is called with the resize parameter.
```python
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiMotionArray(camera) as output:
        camera.resolution = camera.MAX_RESOLUTION
        camera.framerate = 4
        camera.start_recording('foo.h264', resize=(1296, 972),
                               motion_output=output)
        camera.wait_recording(5)
        camera.stop_recording()
```
The PiMotionArray class fetches the image width and height from self.camera.resolution (in this case camera.MAX_RESOLUTION), but the actual stream may have been resized (here to (1296, 972)), so the array shape no longer matches the motion data.
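To illustrate the mismatch, the motion-vector array dimensions follow from the 16x16 macroblock layout (one cell per macroblock, plus one extra column), so the shape derived from the full sensor resolution differs from the shape of the resized stream. The 2592x1944 figure below is the V1 camera module's maximum resolution and is used only as an example:

```python
def motion_array_shape(width, height):
    # One 16x16 macroblock per cell, plus one extra column of vectors
    cols = (width + 15) // 16 + 1
    rows = (height + 15) // 16
    return rows, cols

# Shape for the resized stream vs. the full sensor resolution
# (2592x1944 is the V1 camera's MAX_RESOLUTION)
print(motion_array_shape(1296, 972))   # (61, 82)
print(motion_array_shape(2592, 1944))  # (122, 163)
```

The two shapes disagree, which is why indexing the motion data with dimensions taken from camera.resolution fails when resize is in effect.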
As a workaround I defined a DetectMotion class and inserted the resolution manually:
```python
class DetectMotion(picamera.array.PiMotionAnalysis):
    def __init__(self, camera):
        super(DetectMotion, self).__init__(camera)
        # Hard-code the resized stream resolution instead of
        # relying on camera.resolution
        width, height = (1296, 972)
        self.cols = (width + 15) // 16 + 1
        self.rows = (height + 15) // 16
```
and it works correctly, but I think these classes should pick up the right resolution automatically.
Ahh, I was wondering when this would come up. Unfortunately, while I can detect the resolution of the camera trivially in the array classes, I can't detect the value of the resize parameter easily. I guess the best solution for now is to allow the resolution to be specified in the array constructor.
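A rough sketch of what that constructor change might look like, independent of the picamera internals (the `size` parameter name and the `FakeCamera` stand-in are my assumptions, not the library's API):

```python
class PiMotionArrayLike:
    """Hypothetical sketch: an analysis class that accepts an explicit
    size, falling back to camera.resolution only when none is given."""
    def __init__(self, camera, size=None):
        # size would be set by the caller to match the resize value
        # passed to start_recording, e.g. size=(1296, 972)
        width, height = size if size is not None else camera.resolution
        self.cols = (width + 15) // 16 + 1
        self.rows = (height + 15) // 16

class FakeCamera:
    # Minimal stand-in for demonstration; 2592x1944 is the V1
    # camera module's maximum resolution
    resolution = (2592, 1944)

a = PiMotionArrayLike(FakeCamera(), size=(1296, 972))
print(a.rows, a.cols)  # 61 82
```

With an explicit size the array shape matches the resized stream, while omitting it preserves the current behaviour of reading camera.resolution.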