fix(rust): harden ccxr_process_cc_data against excessive cc_count #2046
Harsh-Sahu43 wants to merge 2 commits into CCExtractor:master from
Conversation
cfsmp3
left a comment
const MAX_CC_BLOCKS_PER_CALL: c_int = 10_000;
Where did you get this number from?
Do you have any sample in which this is an actual problem?
CCExtractor CI platform finished running the test files on linux. Below is a summary of the test results, when compared to test for commit d494286...:
Your PR breaks these cases:
Congratulations: Merging this PR would fix the following tests:
It seems that not all tests were passed completely. This is an indication that the output of some files is not as expected (but might be according to you). Check the result page for more info.
CCExtractor CI platform finished running the test files on windows. Below is a summary of the test results, when compared to test for commit d494286...:
Your PR breaks these cases:
Congratulations: Merging this PR would fix the following tests:
It seems that not all tests were passed completely. This is an indication that the output of some files is not as expected (but might be according to you). Check the result page for more info.
Good point, and you're right to question the constant. Per CEA-708/ATSC, CC bandwidth is very small (~9.6 kbps total), so under normal conditions only a few bytes of CC data appear per frame (see CTA-708 on Wikipedia). That said, this PR wasn't based on a failing sample, and the 10_000 value is not spec-derived. The motivation was defensive hardening: ccxr_process_cc_data sits at the C→Rust boundary and allocates proportionally to cc_count, which in some demuxer paths is derived as len / 3 without validation. I agree a fixed numeric limit is arbitrary. I can switch this to overflow / buffer-size guarding instead of a fixed limit.
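A buffer-size guard along those lines could look roughly like the sketch below. This is not the actual CCExtractor code; `validated_cc_count` and `CC_BLOCK_SIZE` are names invented here for illustration, and the real FFI signature is assumed, not quoted.

```rust
use std::os::raw::c_int;

// Each CEA-708 cc data block is 3 bytes (cc_valid/cc_type byte + 2 data bytes).
const CC_BLOCK_SIZE: usize = 3;

/// Hypothetical sketch of buffer-size guarding at the C→Rust boundary,
/// replacing a fixed MAX_CC_BLOCKS_PER_CALL limit: the count is only
/// accepted if the caller's buffer is actually large enough to hold it.
fn validated_cc_count(data_len: usize, cc_count: c_int) -> Option<usize> {
    // Reject negative counts outright.
    let cc_count = usize::try_from(cc_count).ok()?;
    // The caller must supply at least cc_count * 3 bytes; checked_mul
    // ensures the bound computation itself cannot overflow.
    let needed = cc_count.checked_mul(CC_BLOCK_SIZE)?;
    if needed > data_len {
        return None;
    }
    Some(cc_count)
}
```

This ties the limit to what the caller actually handed over, so no magic constant is needed and a bogus `len / 3` derivation is caught at the boundary.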
God, I'm definitely not going to be dealing with this AI slop.
Understood — thanks for the feedback. I agree this change isn’t justified without a concrete failure case, and I’ll adjust my approach accordingly. I’ll focus future contributions on well-scoped, test-backed issues aligned with existing CCExtractor patterns. |
In raising this pull request, I confirm the following (please check boxes):
My familiarity with the project is as follows (check one):
Summary
Adds a defensive upper bound on `cc_count` in `ccxr_process_cc_data` to prevent excessive allocations or misuse at the FFI boundary.
Details
Rejects the call when `cc_count` exceeds a sane limit
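As a rough illustration of the guard this PR describes: the real `ccxr_process_cc_data` signature and error convention are not shown in this thread, so `guard_cc_count` and its `Result` return type are assumptions made for the sketch.

```rust
use std::os::raw::c_int;

// Fixed limit from the PR; the reviewer notes this value is not spec-derived.
const MAX_CC_BLOCKS_PER_CALL: c_int = 10_000;

/// Hypothetical early-return guard applied before any allocation
/// proportional to cc_count takes place.
fn guard_cc_count(cc_count: c_int) -> Result<usize, &'static str> {
    if cc_count < 0 {
        return Err("negative cc_count");
    }
    if cc_count > MAX_CC_BLOCKS_PER_CALL {
        return Err("cc_count exceeds MAX_CC_BLOCKS_PER_CALL");
    }
    Ok(cc_count as usize)
}
```

A valid count passes through unchanged; negative or oversized counts fail before any buffer is sized from them.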