Rebased carl-2 branch on master to get a 20% reduction in encode time #1
Open
boxerab wants to merge 442 commits into cth103:carl-2 from boxerab:dcpomatic
Conversation
Fix several potential vulnerabilities
openjp2/j2k: Report error if all wanted components are not decoded.
The function is used to read both SPcod and SPcoc, so all comments should refer to both marker segments' parameter names.
Previously the multiple component transformation SGcod(C) and wavelet transformation SPcod(H)/SPcoc(E) parameter values were never checked, allowing out-of-range values. This lack of validation let the bit stream provided in issue #1158 through. After this commit an error message points to the marker segment parameter that is out of range.

- input/nonregression/edf_c2_20.jp2 contains an SPcod(H) value of 17, but according to Table A-20 of the specification only values 0 and 1 are valid.
- input/nonregression/issue826.jp2 contains an SGcod(B) value of 2, but according to Table A-17 of the specification only values 0 and 1 are valid.
- input/nonregression/oss-fuzz2785.jp2 contains an SGcod(B) value of 32, but it is likewise limited to 0 or 1.

These test cases have been updated to consistently fail to parse the headers, since they contain out-of-bounds values. This fixes issue #1210.
…ecode_real(): properly deal with a number of samples larger than 4 billion (refs #1151)
…mory in opj_t2_decode_packets(). Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=18979
There is currently a false positive ABI check failure between v2.3.1 and current. It disappears when removing the generated reports of v2.3.1 and recreating them. It is likely that some tooling has evolved since the initial v2.3.1 report generation.
abi-check.sh: fix false positive ABI error, and display output error log
…ur characters (#1196) Fixes #1068
pi.c: avoid integer overflow, resulting in later invalid access to memory in opj_t2_decode_packets()
opj_j2k_update_image_dimensions(): reject images whose coordinates are beyond INT_MAX (fixes #1228)
This was changed some time ago (https://google.github.io/oss-fuzz/getting-started/new-project-guide/), but the build didn't fail because there is a fallback mechanism.

The main advantage of the new approach is that for libFuzzer it produces more performant binaries: `$LIB_FUZZING_ENGINE` expands into `-fsanitize=fuzzer`, which links libFuzzer from compiler-rt and allows better optimization tricks. I'm also experimenting with dataflow (google/oss-fuzz#1632) on your project, and the dataflow config doesn't have a fallback (as it's a new configuration), therefore I'm proposing a change to migrate from `-lFuzzingEngine` to `$LIB_FUZZING_ENGINE`.
That could lead to later assertion failures. Fixes #1231 / CVE-2020-8112
opj_tcd_init_tile(): avoid integer overflow
Fixes #1233. libtiff 4.1 slightly modifies the way it generates files, so add the new expected md5sum. Not a super elegant solution, admittedly.
tests: add alternate checksums for libtiff 4.1
Add -IMF switch to opj_compress as well
Implement writing of IMF profiles
…ng error Prevent crashes like: opj_decompress -i 0722_5-1_2019.jp2 -o out.ppm -r 4 -t 0 where 0722_5-1_2019.jp2 is https://drive.google.com/file/d/1ZxOUZg2-FKjYwa257VFLMpTXRWxEoP0a/view?usp=sharing
opj_decompress: add sanity checks to avoid segfault in case of decoding error
…get_encoder_input_buffer_size()
These issues were found by cppcheck and Coverity.
Fix warnings about signed/unsigned casts in pi.c
Thanks, I'll check it out!
opj_t2_skip_packet_data(): avoid out-of-bounds reads on truncated images in non-strict mode (fixes #1459)
This makes it possible to build j2k.c without warnings using the macOS 13 SDK. Calls to sprintf are replaced with snprintf, passing appropriate buffer sizes. It doesn’t appear that any of the changed uses of sprintf were actually unsafe, so no behavior change is expected aside from SDK compatibility.

The macOS 13 SDK deprecates sprintf as it’s difficult to use safely. The deprecation warning message is visible when building C++, but it is not normally visible when building plain C code due to a quirk in how sprintf is declared in the SDK. However, the deprecation message is visible when building plain C under Address Sanitizer (-fsanitize=address). This discrepancy was discovered at https://crbug.com/1381706 and reported to Apple with a copy at https://openradar.appspot.com/FB11761475.

The macOS 13 SDK is packaged in Xcode 14.1, released on 2022-11-01. This also affects the iOS 16 SDK and other 2022-era Apple OS SDKs packaged in Xcode 14.0, released on 2022-09-12.

j2k.c is visible to the Chromium build via PDFium, and this change is needed to allow Chromium to move forward to the macOS 13 SDK. This change is limited to src/lib/openjp2; other uses of sprintf were found throughout openjpeg.
openjp2/j2k: replace sprintf calls with snprintf
And fix strict-prototypes/missing-prototypes warnings.
CMake: error out on warnings for strict/missing prototypes.
…p2_decode()/get_tile() + add unit test (fixes #570)
opj_jp2_read_header(): move setting color_space here instead of in opj_jp2_decode()/get_tile() (fixes #570)
@boxerab did you find a problem with this?
No, I just usually close PRs that haven't had any activity in a while. Reopening :)
Great! I've been very busy lately ... but I'm looking forward to trying this!
Real-world test: encoding a Sintel clip into a DCP. On my system, the encode frame rate went from 16 to 17 fps. So, not as good as the initial test, but nothing to sneeze at either.
boxerab force-pushed the dcpomatic branch 3 times, most recently from cb2f6ca to f09ce35 on June 10, 2023 at 15:39
Hi Carl
Hope all is well. As an experiment, I tried adding all of your dcpomatic changes on top of current opj master, and I see a significant drop in encode time on my system. There have been some performance improvements since 2017.
Here is how I benchmarked (in a folder of TIFF images)
Cheers,
Aaron