There's a bad interaction when calling opj_set_decoded_resolution_factor() followed by either opj_set_decode_area() and opj_decode(), or just opj_decode().
If opj_decode() is called directly, it considers the output image to be at full resolution rather than at the resolution set by opj_set_decoded_resolution_factor(), unless the image->comps[].factor values are modified manually.
If opj_set_decoded_resolution_factor() is called, followed by opj_set_decode_area() and opj_decode(), without manually setting the image->comps[].factor values, this currently (i.e. up to and including OpenJPEG 2.2.0) somehow works on single-tiled images, provided that the coordinates passed to opj_set_decode_area() are expressed in the resolution coordinate space (i.e. divided by 2^res_level) instead of relative to the reference grid (i.e. at full resolution). With this misuse of the API on a tiled image, not all tiles that intersect the area of interest are selected (in practice, only the ones at the top left).
With this misuse of the API on a single-tiled image with current master, the resulting image is corrupted because only a subset of the needed code-blocks is decoded.
The proper use of the API should be that opj_set_decode_area() is always called with coordinates expressed on the reference grid, and that the user is not required to manually set the image->comps[].factor values.
…_decoded_resolution_factor() (uclouvain#1006, affect API use)
* Better document usage of opj_set_decode_area(), i.e. that it expects
coordinates in the full-resolution/reference grid even when requesting a
lower resolution factor
* Make sure that image->comps[].factor is set by opj_set_decode_area() and
opj_decode() from the value specified in opj_set_decoded_resolution_factor()
* opj_decompress: add 2 environment variables to test alternate ways of
using the API, namely USE_OPJ_SET_DECODED_RESOLUTION_FACTOR=YES to use
opj_set_decoded_resolution_factor() instead of parameters.cp_reduce, and
SKIP_OPJ_SET_DECODE_AREA=YES to not call opj_set_decode_area() if -d is
not specified.
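The two test paths above can be exercised like this (a sketch: the environment variable names come from the commit, but the input/output filenames are placeholders, and the invocation is guarded so it is a no-op where opj_decompress is not installed):

```shell
# Use opj_set_decoded_resolution_factor() instead of parameters.cp_reduce:
export USE_OPJ_SET_DECODED_RESOLUTION_FACTOR=YES
# Skip the opj_set_decode_area() call when -d is not given:
export SKIP_OPJ_SET_DECODE_AREA=YES

# input.jp2 / output.ppm are placeholder filenames; -r 2 requests a
# lower resolution factor.
command -v opj_decompress >/dev/null &&
    opj_decompress -r 2 -i input.jp2 -o output.ppm || true
```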
uclouvain/openjpeg#1006 describes at length a current
implementation issue of opj_set_decode_area() combined with
opj_set_decoded_resolution_factor().
Currently, with all openjpeg 2.X releases, decoding a subwindow of a tiled JPEG2000 image
at a lower resolution produces a corrupted output: only the top-left part of
the output image is filled, because some tiles are not decoded.
With current openjpeg master, since uclouvain/openjpeg@5a4a101,
which implements more efficient sub-tile decoding,
decoding a subwindow of a single-tiled JPEG2000 image at a lower resolution
results in a corrupted image due to some code-blocks not being decoded.
This commit enables IIPRV to work with currently released openjpeg 2.X versions,
as well as the latest openjpeg master (rouault/openjpeg@8f92fc9
or later)
Note: a potential way of writing code compatible with both current and future versions is to do something like https://trac.osgeo.org/gdal/changeset/39925