Hi Ian, while we were working on the convolution changes, you commented (here) that horizontal resolution is padded to 32 bytes and vertical is padded to 16 bytes. At the time, I found comments in the Pi forums to that effect, too. However, I'm running into a problem with raw frames using a v1 camera in mode 4, which has an output resolution of 1296 x 972.
Today I started playing with that image effects delegate idea, and when I switched my test to raw input, it failed to invoke the delegate -- which was the problem I had during the convolution PR. So I turned on debug logs, and I saw the same error message that led me to the padding issue originally:
Something went wrong while processing stream: Resolution 1296x976 has no recommended cell counts.
The vertical resolution is padded (972 to 976), but 1296 isn't evenly divisible by 32; it's only divisible by 16. The cell-count lookup table only lists 32-byte offsets, so the nearest entry in my table is 1312 x 976. To be clear, this does work properly for encoded images (JPEG input rather than RGB24), but there is no match for 1296 x 976 in the table, so it throws the exception. (This is also the exception handler in StreamCaptureHandler that I pointed out makes a non-logging app fail silently, and I still question whether it should be swallowing exceptions when there is no logging.)
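To spell out the mismatch, here's a minimal sketch of the arithmetic as I understand it; RoundUpTo and the console output are just illustrative, not anything from the library:

```csharp
using System;

class PaddingSketch
{
    // Illustrative helper, not anything from the library.
    static int RoundUpTo(int value, int multiple) =>
        ((value + multiple - 1) / multiple) * multiple;

    static void Main()
    {
        int width = 1296, height = 972;           // v1 camera, mode 4 output resolution

        int paddedWidth  = RoundUpTo(width, 32);  // 1312 -- what my cell-count table is keyed on
        int paddedHeight = RoundUpTo(height, 16); // 976  -- what the raw buffer actually reports

        // The raw frame arrives reported as 1296 x 976: height padded, width not.
        // A table that only lists 32-aligned widths has 1312 x 976, so the
        // lookup for 1296 x 976 finds nothing and the exception is thrown.
        Console.WriteLine($"reported {width} x {paddedHeight}, table entry {paddedWidth} x {paddedHeight}");
    }
}
```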
I think we discussed that it might be a bug, but then it got lost in the shuffle.
It seems the horizontal value reflects the output resolution while the vertical value reflects the padded resolution. For processing purposes we need the padded resolutions, but most library clients will probably want the output resolution, so it seems to me that both should be stored in ImageContext. (See below, it really is padded to 16 ... and I suppose ImageContext ought to reflect the buffer; a library client can always get the requested resolution from the camera config.)
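Roughly what I have in mind, purely as a sketch -- these property names are hypothetical, not the existing ImageContext members:

```csharp
// Purely a sketch of the idea -- these property names are hypothetical,
// not the existing ImageContext members.
public class ImageContextSketch
{
    // Dimensions of the buffer as delivered (padded); processing code needs
    // these to walk the pixel data correctly.
    public int PaddedWidth { get; set; }
    public int PaddedHeight { get; set; }

    // The resolution the caller asked for; most library clients only care
    // about this, and it can also be read back from the camera configuration.
    public int OutputWidth { get; set; }
    public int OutputHeight { get; set; }
}
```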
Because of the way the HW works, the width needs to be expanded to the nearest 32 bytes. So 1296 gets rounded up to 1312. The amount of valid data is still the same.