
Unable to create MMALISPResizer object #582

Open
dickh768 opened this issue Aug 10, 2019 · 14 comments

@dickh768

I am trying to create the more efficient MMALISPResizer object, which uses the VPU. The documentation indicates this is only available on more recent firmware, but since that note is dated 2017 and my Pi 3B is fairly recent and fully updated, I assume firmware is not the problem.

The initial problem is that creating the object fails with an error saying it expected 1 output but found 2 in the hardware. This looks like an inconsistency between mmalobj.py (which declares one output) and the underlying OMX.broadcom.isp component (which reports two). I can bypass the initial error by simply changing the port count in mmalobj.py, but I then get further errors when negotiating the port format.

This seems to be a bug - are there any updates in the pipeline to address this?

@tmm1

tmm1 commented Aug 21, 2019

What errors are you getting related to the port format?

@tmm1

tmm1 commented Aug 22, 2019

Have you tried OMX.broadcom.resize, which only has one output?

@dickh768
Author

Sorry for the delay - just got back from holiday.

I now have a clean install of Raspbian Stretch, with update/upgrade done and 160 MB of video memory configured.

As a relative noob I have so far just been trying simple Python programming with the mmalobj library, e.g. MMALCamera() and MMALVideoEncoder(). Mostly very successful, but I've stumbled trying to get MMALISPResizer() to work.

Initially, when instantiating the object, I get the error: picamera.exc.PiCameraRuntimeError: Expected 1 outputs but found 2 on component b'vc.ril.isp'.

I can sidestep this by changing line 2481 in mmalobj.py from "opaque_output_subformats = (None,)" to either opaque_output_subformats = (None,) * 2 or opaque_output_subformats = ('OPQV-single',) * 2.
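The file edit above can also be applied at runtime by patching the class attribute before instantiating, which avoids touching the installed mmalobj.py. A minimal sketch of the mechanism using a hypothetical stand-in class (FakeResizer is made up; only the attribute name opaque_output_subformats comes from picamera):

```python
# Sketch: override a class attribute at runtime instead of editing the
# installed file. `FakeResizer` is a stand-in for picamera's MMALISPResizer,
# not the real class.
class FakeResizer:
    opaque_output_subformats = (None,)  # what mmalobj.py ships with

    def __init__(self):
        # picamera sizes its output-port bookkeeping from this tuple
        self.expected_outputs = len(self.opaque_output_subformats)

# Patch before creating any instances (with the real library this would be
# done after `import picamera.mmalobj`):
FakeResizer.opaque_output_subformats = (None,) * 2

r = FakeResizer()
print(r.expected_outputs)  # → 2
```

The patch must happen before the first instantiation, since the port count is read during __init__.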

Either option then throws an identical error when you try to 'connect' the resizer to the camera. The error message is:

Traceback (most recent call last):
  File "/home/pi/ISPtest.py", line 32, in <module>
    resizer.inputs[0].connect(camera.outputs[1])
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 1296, in connect
    return other.connect(self, **options)
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 1304, in connect
    return MMALConnection(self, other, **options)
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 2141, in __init__
    super(MMALConnection, self).__init__(source, target, formats)
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 1986, in __init__
    self._negotiate_format(formats)
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 2017, in _negotiate_format
    set(self._target.supported_formats)
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 1056, in supported_formats
    mp = self.params[mmal.MMAL_PARAMETER_SUPPORTED_ENCODINGS]
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 1452, in __getitem__
    prefix="Failed to get parameter %d" % key)
  File "/usr/lib/python3/dist-packages/picamera/exc.py", line 184, in mmal_check
    raise PiCameraMMALError(status, prefix)
picamera.exc.PiCameraMMALError: Failed to get parameter 1: Out of resources

Any suggestions gratefully received :-)

@dickh768 dickh768 reopened this Aug 29, 2019
@dickh768
Author

I must have closed this by mistake :-\

Any additional suggestions would be welcome

@6by9
Collaborator

6by9 commented Aug 29, 2019

Yes, the ISP currently has two outputs as it supports producing two images simultaneously (the second has to be at a lower resolution than the first). It'll be gaining a third port soon to pass out image statistics.

At a guess, the Python library isn't passing a large enough structure to the mmal_port_parameter_get of MMAL_PARAMETER_SUPPORTED_ENCODINGS to hold all the encodings that the ISP can support.

https://github.com/waveform80/picamera/blob/master/picamera/mmal.py#L1818 would appear to define MMAL_PARAMETER_ENCODING_T as having 30 slots. I thought the ISP only had about 22 supported encodings, but I'd guess that it is now more, and MMAL is returning "Out of resources" as the supplied structure is too small.
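The failure mode 6by9 describes can be sketched in ctypes. The field layout below follows the MMAL headers (a parameter header of two uint32s, then a FourCC array); the slot counts are the ones discussed in this thread. A minimal sketch, not picamera's actual declaration:

```python
import ctypes as ct

class MMAL_PARAMETER_HEADER_T(ct.Structure):
    # Matches the MMAL header layout: parameter id plus total struct size.
    _fields_ = [('id', ct.c_uint32), ('size', ct.c_uint32)]

def encoding_param_type(slots):
    # Build an MMAL_PARAMETER_ENCODING_T with a given number of FourCC
    # slots, mirroring how picamera's mmal.py declares it with a fixed 30.
    class MMAL_PARAMETER_ENCODING_T(ct.Structure):
        _fields_ = [('hdr', MMAL_PARAMETER_HEADER_T),
                    ('encoding', ct.c_uint32 * slots)]
    return MMAL_PARAMETER_ENCODING_T

# The firmware fills the array with one FourCC per supported encoding; if
# the component supports more encodings than the array holds, it has
# nowhere to put them and answers "Out of resources".
small = encoding_param_type(30)
large = encoding_param_type(63)
print(ct.sizeof(small))  # 8-byte header + 30 * 4 = 128
print(ct.sizeof(large))  # 8-byte header + 63 * 4 = 260
```

The hdr.size field is what tells the firmware how big the caller's buffer is, which is why only the Python-side declaration needs to change.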

@pwr33

pwr33 commented Jun 8, 2021

Did anyone figure out why MMALISPResizer can't be used? I'm getting the same error as reported initially: the ISP resizer has too many output ports.

I'm using picamera in plain MMAL style.

It's a shame, because I think it does exactly what I want in a single block: two differently encoded (one large JPEG, one small RGB) and resized images from one camera port.

That is assuming it is in the Pi Zero and Zero W firmware for the latest Buster Raspberry Pi OS.

I tried increasing the slots at line 1818 but still no luck.

I can probably use a splitter, a resizer, and two encoders; I will try that. But the single-block approach seems ideal.

OK, so I changed:
line 1814 of mmal.py to * 50
line 2473 in mmalobj.py to (None,) * 3

The initial error goes away, but something a bit deeper is the problem:

print(ispresizer.inputs[0].supported_formats)

Traceback (most recent call last):
  File "pwrmmal.py", line 89, in <module>
    print(ispresizer.inputs[0].supported_formats)
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 1056, in supported_formats
    mp = self.params[mmal.MMAL_PARAMETER_SUPPORTED_ENCODINGS]
  File "/usr/lib/python3/dist-packages/picamera/mmalobj.py", line 1452, in __getitem__
    prefix="Failed to get parameter %d" % key)
  File "/usr/lib/python3/dist-packages/picamera/exc.py", line 184, in mmal_check
    raise PiCameraMMALError(status, prefix)
picamera.exc.PiCameraMMALError: Failed to get parameter 1: Out of resources

@6by9
Collaborator

6by9 commented Jun 8, 2021

https://github.com/waveform80/picamera/blob/master/picamera/mmalobj.py#L2552 wants to be

    opaque_output_subformats = (None,) * 3

> https://github.com/waveform80/picamera/blob/master/picamera/mmal.py#L1818 would appear to define MMAL_PARAMETER_ENCODING_T as having 30 slots. I thought the ISP only had about 22 supported encodings, but I'd guess that it is now more, and MMAL is returning "Out of resources" as the supplied structure is too small.

The ISP component supports even more formats now: 62 input formats, 18 formats on output[0], and 11 on output[1].

> shame because I think it does exactly what I want in a single block, 2 differently encoded (1 large jpeg one small rgb) and resized images from one camera port.

Not quite. The low res output can only be a YUV format, not RGB.
The ISP runs as YUV internally, and then converts to RGB via a CCM on the high res output path only.

@pwr33

pwr33 commented Jun 9, 2021

Yeah, spent some time messing around. YUV is cool: the Y channel is better than taking the green channel of RGB, and the picamera trick described in the docs of supplying a buffer only the size of the Y channel is also cool. Feeding the Y channel to PIL as an 'I'-mode image makes a great grayscale image with way less memory use.
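The Y-channel trick relies on YUV420 layout: the luma plane comes first, and with picamera's documented padding (width rounded up to a multiple of 32, height to a multiple of 16) its size is easy to compute, so a buffer of just that size captures only luma. A sketch of the arithmetic in pure Python (the helper names here are made up):

```python
def y_plane_size(width, height):
    # picamera pads frames to a multiple of 32 horizontally and 16
    # vertically before laying out YUV420 planes; Y comes first.
    fwidth = (width + 31) // 32 * 32
    fheight = (height + 15) // 16 * 16
    return fwidth * fheight

def split_y(buf, width, height):
    # Return just the luma plane from a full YUV420 capture buffer.
    return buf[:y_plane_size(width, height)]

# A full YUV420 frame is Y + U/4 + V/4 = 1.5 * Y bytes.
w, h = 1296, 972   # half the V1 sensor resolution, as in this thread
ysize = y_plane_size(w, h)
print(ysize)                  # padded to 1312 * 976 = 1280512
full = bytes(ysize * 3 // 2)  # dummy zero-filled frame
print(len(split_y(full, w, h)) == ysize)  # True
```

Supplying a buffer of only y_plane_size bytes is what lets picamera skip the chroma planes entirely.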

picamera has some of the best documentation I have seen, with excellent examples (a most excellent library generally: a video-port capture to a JPEG file at the actual speed of the camera, set to 10 fps at half the V1 sensor resolution). I got the MMAL encoder working OK from a YUV numpy array, so I can do the ISP-resizer thing with a longer capture chain anyway: just grab a large YUV capture and, if I want to save it, run the MMAL encoder on the numpy buffer.

Maybe I'll try changing it to 70 then and see what happens... but having tested YUV, I still have to test the basic resizer; where to invest time, eh! I think trying to use MMAL callbacks in Python is perhaps not as rewarding as diving into the raspistillyuv C code!

@pwr33

pwr33 commented Jun 15, 2021

Setting MMAL_PARAMETER_ENCODING_T to 63 slots works OK.

Tested using dummy buffers with one output captured, against the basic resizer; the basic resizer is actually faster.

Seeing as I can use dummy buffers with the JPEG encoder, and the pipeline I want is a big image (maybe saved as JPEG) plus a small image (definitely, to test), just using the resizer on a picamera YUV capture, which works anyway, is probably the best route.

(Though a big JPEG image, a medium JPEG image, and a small test YUV image would be best; I could probably do that with a splitter.)

I'm weighing this against the hassle of not being able to just pip install picamera into a new venv when I want the extra features, given there's not a lot of release activity here.

Is there anywhere that explains what the 3rd output of the ISP resizer is? What are the stats? It is a 10k binary buffer of something or other... perhaps worth trying to reverse engineer, if it's useful for motion detection? Who knows what it is, or whether it would work for that purpose... LOL

Funny: the problem of Wi-Fi bandwidth from a Pi Zero running real-time motion detection on a busy road is similar to the problem of transmission from Mars, eh! Did that rock just move? What resolution image is available? Only transmit when necessary, eh! LOL!

Maybe I have not yet noticed the benefit of dropping picamera completely and setting up an MMAL pipeline calling a single "blit" function from capture to several outputs. I may have time to mess around some more; maybe a fork of just the MMAL stuff would be simplest. picamera is cool for what it is already.

anyway... cool! Thanks.

@pwr33

pwr33 commented Jun 19, 2021

Well, I forked it, and with just a splitter, ISP resizer, and two encoders it does work: 10 fps solid, producing two JPEG files (on disk or in a memory buffer) plus a smaller test buffer; about 15 fps max, but I was only aiming for 10 fps (on a Pi Zero W).

I uploaded a simple test program to the fork.

@pwr33

pwr33 commented Jun 26, 2021

Isn't it time ls and rsync did not fail on a simple wildcard like *small*? LOL

No transfer-speed problems, but ls and rsync fail: the data quantity is fine, there are just too many filenames for the shell's argument list.

(OK, there are workarounds, but...) Actually, I will first try shortening the stupidly long filenames I am using. LOL
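For the record, the wildcard failure is the shell rather than ls or rsync: the glob expands to more filenames than fit in one argument list, so exec() fails with "Argument list too long". Letting find match the pattern itself sidesteps the expansion; a sketch, with made-up directory and file names:

```shell
# The shell expands *small* into arguments before ls/rsync ever run; with
# tens of thousands of matches, exec() hits ARG_MAX. Quoting the pattern
# and handing it to find avoids building the argument list at all.
mkdir -p /tmp/capdemo && cd /tmp/capdemo
for i in 1 2 3 4 5; do touch "frame_${i}_small.yuv" "frame_${i}_big.jpg"; done

# Count matches without any glob expansion:
find . -maxdepth 1 -name '*small*' | wc -l    # prints 5

# For rsync, feed the same list via --files-from instead of a glob:
# find . -name '*small*' > smalls.txt
# rsync -a --files-from=smalls.txt . user@host:/captures/
```

The single quotes around '*small*' are what keep the shell from expanding the pattern before find sees it.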

I mean the raw MMAL pipeline speed is better than OK; it is great in lots of ways, in daylight. But trawling through the reams of picamera code behind the great Python user interface, I noticed that constants are missing for the huge number of format conversions the ISP resizer actually provides (you can list them OK with an existing function), so that is something that also needs doing.

But you can just pass an integer rather than the named constant, based on listing what the port supports.

@pwr33

pwr33 commented Jun 28, 2021

Funny, regarding how good the manual is: I was messing with trying to set "sports" etc. mode, just using the MMAL brightness-parameter example, and at 3 fps at night I am getting images as good as (actually better than) I would have expected from picamera at a 1-second exposure (there always seemed to be something competing when setting parameters through picamera).

Also funny: 3 fps is too fast for this road with just a 32 GB SD card, LOL. I mean, I was estimating 11,000 cars per day pass this window... and yes, there does seem to be more traffic after "lockdown" than before; quite possibly fewer lorries.

What I have noticed are some long-delay dropouts. I was wondering if some Wi-Fi flood attack could hang up the processor? Something is, at indeterminate intervals! Well, soon to be determinate, as I have added time-logged warnings about loop-time delays.

Anyway, jaw-dropping awesomeness! Thanks again.

@pwr33

pwr33 commented Jun 29, 2021

In my motion-detect program, based on the blitbuffer example in my fork:

Determinate-wise, it is rsync causing some of the dropouts; running the rsync daemon may help, I guess.

Indeterminate-wise (as yet), up to 2-second dropouts in the loop, though not very often.

Running at nice -18 maybe broadly helped, as did switching the CPU governor from the default powersave to ondemand.

But still, on occasion, massive dropouts (mandb or some other background job, perhaps?).

Actually, maybe it is just the nature of Python deciding to do a bit of its own thing? nice, like a tiger or lion! Constrictor, eh!

But generally it is chuffing awesome. I heard this car-tornado sound earlier (not the boy racers who reckon the loudest exhaust is best) and was able to identify an unmarked police car with hidden lights flashing going past, followed by a marked police car flying past the window.

I mean, this was in low light, about 9.30 pm.

Precise model of car... maybe with a 100 W IR illuminator...

@pwr33

pwr33 commented Jul 1, 2021

Last addition to the rant, perhaps...

Motion detection is very scene-specific; there are always trade-offs.

The blitbuffer seems stable; I reckon the trouble comes when writing the buffers to file, especially writing one image back. In this scene that usually means a long chain of cars going past, roughly 6 frames per car when they are doing 30 mph-ish, with a double write to SD...

...and then also doing stuff on that capture node over SSH, like rsync or looking at the log file, or both at the same time.

I reckon I'll have a crack at getting Gentoo up. I see Raspbian Lite has voluntary preemption; I essentially need to put Wi-Fi at a lower priority than the Python app. Using threads in Python to write the files may also help. It's a lot more work pushing a single-core Pi Zero to its limits.

I also wondered whether something I have encountered in my own meanderings has happened with picamera: some aspects are workarounds for an old version of the firmware, which has improved since.

For example, I do not seem to need to run the preview port, real or nullsink; the balances settle reasonably on this scene anyway.
