[AutoPR- Security] Patch rabbitmq-server for CVE-2026-7790, CVE-2026-43968 [MEDIUM]#17251
Conversation
🔒 CVE Patch Review: CVE-2026-43968, CVE-2026-7790
PR #17251 — [AutoPR- Security] Patch rabbitmq-server for CVE-2026-7790, CVE-2026-43968 [MEDIUM]

Spec File Validation
Build Verification
🤖 AI Build Log Analysis
🧪 Test Log Analysis
No test log found (package may not have a %check section).

Patch Analysis
Detailed analysis

CVE-2026-43968 (cow_sse.erl): Comparison shows the PR applies the same three functional changes as upstream: (1) event_id/1 now checks for and rejects any of ["\r\n", "\r", "\n"] via binary:match(iolist_to_binary(ID), [<<"\r\n">>, <<"\r">>, <<"\n">>]); (2) event_name/1 applies the same newline validation to Name; (3) prefix_lines/2 now splits on all newline variants using binary:split(..., [<<"\r\n">>, <<"\r">>, <<"\n">>], [global]). The PR also adds the same tests: extra cases in event_test/0 for CRLF/CR/LF in data; event_error_test/0 to assert exceptions on invalid id/event values containing newline sequences; and identity_test_/0 with helper functions do_identity_build_parse/1 and do_identity_result/1 to verify build/parse identity across a variety of events. The only differences are contextual: the file path is deps/cowlib/src/cow_sse.erl instead of src/cow_sse.erl (vendored dependency), and the patch headers carry a different commit ID plus Signed-off-by and Upstream-reference lines. No functional hunks are missing versus upstream. This change reduces the risk of newline injection in SSE event fields and normalizes data-line handling per spec; potential regressions are limited to previously non-compliant inputs containing CR/CRLF in id/event now being rejected, which is intended. Overall risk is low.

CVE-2026-7790 (cow_http_te.erl): Core change: both patches modify stream_chunked to call chunked_len(Data, Streamed, Acc, 0, 0) and rewrite all chunked_len clauses to take a new parameter D that counts hex digits, guarded by D < 16, preventing overlong chunk-size fields. They also update the chunk-extensions clause to accept the extra parameter and adjust skip_chunk_ext so that upon encountering "\r" or end-of-input it resumes chunked_len with D reset to 0. Final-chunk and normal-chunk clauses are likewise updated to accept and pass through the extra parameter (using _ where appropriate).
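The two guards described above — rejecting any newline variant in SSE id/event fields, and capping the chunked-encoding size field at 16 hex digits — can be sketched conceptually in Python. This is an illustrative model only, not cowlib's Erlang code; all function names here are invented for the sketch.

```python
# Conceptual Python model of the two cowlib fixes under review.
# Names (validate_sse_field, split_sse_data, parse_chunk_size) are
# illustrative and are not part of cowlib's API.

NEWLINES = ("\r\n", "\r", "\n")

def validate_sse_field(value: str) -> str:
    """Reject id/event values containing any newline variant,
    mirroring the stricter binary:match check in the patch."""
    if "\r" in value or "\n" in value:  # covers "\r\n" as well
        raise ValueError("newline sequence in SSE field")
    return value

def split_sse_data(data: str) -> list:
    """Split data on \r\n, \r, or \n, as the patched prefix_lines/2
    does with binary:split and all three separators."""
    return data.replace("\r\n", "\n").replace("\r", "\n").split("\n")

MAX_HEX_DIGITS = 16  # matches the D < 16 guard in chunked_len/5

def parse_chunk_size(line: str) -> int:
    """Parse a chunked-encoding size field, erroring once more than
    16 hex digits are seen, as the patched chunked_len/5 does."""
    size, digits = 0, 0
    for ch in line:
        if ch in "0123456789abcdefABCDEF":
            digits += 1
            if digits > MAX_HEX_DIGITS:
                raise ValueError("chunk size field too long")
            size = size * 16 + int(ch, 16)
        else:
            break  # CRLF or a chunk extension follows the digits
    return size
```

Under this model, a maximal 16-digit size such as "FFFFFFFFFFFFFFFF" still parses, while a 17-digit field errors — the same boundary the added upstream tests exercise.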
Tests: both add the same test asserting that a maximal 16-hex-digit size ("FFFFFFFFFFFFFFFF\r\n") is accepted and that a 17-digit size ("10000000000000000\r\n") triggers an error.

Differences: the PR applies the change to deps/cowlib/src/cow_http_te.erl (vendored cowlib) and carries different file indices and a packaging-style patch header, but the code hunks are line-for-line identical to upstream. No functional hunks are missing.

Risk: low; the change enforces a strict 16-digit limit per RFC expectations and should only affect pathological or malicious inputs. Resetting the digit counter after skipping extensions preserves correct behavior. Given the identical logic to upstream and the inclusion of tests, the risk of regression is minimal.

Raw diff (upstream vs PR)

--- upstream
+++ pr
@@ -1,105 +1,117 @@
-From 6165fc40efa159ba1cceee7e7981e790acba5d9c Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Lo=C3=AFc=20Hoguin?= <essen@ninenines.eu>
-Date: Mon, 11 May 2026 12:15:58 +0200
-Subject: [PATCH] Make building SSE events more closely match the spec
-
-Also add many more tests.
----
- src/cow_sse.erl | 64 ++++++++++++++++++++++++++++++++++++++++++++++---
- 1 file changed, 61 insertions(+), 3 deletions(-)
-
-diff --git a/src/cow_sse.erl b/src/cow_sse.erl
-index 81ceac2..0790413 100644
---- a/src/cow_sse.erl
-+++ b/src/cow_sse.erl
-@@ -301,7 +301,8 @@ event_comment(_) ->
- [].
-
- event_id(#{id := ID}) ->
-- nomatch = binary:match(iolist_to_binary(ID), <<"\n">>),
-+ nomatch = binary:match(iolist_to_binary(ID),
-+ [<<"\r\n">>, <<"\r">>, <<"\n">>]),
- [<<"id: ">>, ID, $\n];
- event_id(_) ->
- [].
-@@ -311,7 +312,8 @@ event_name(#{event := Name0}) ->
- is_atom(Name0) -> atom_to_binary(Name0, utf8);
- true -> iolist_to_binary(Name0)
- end,
-- nomatch = binary:match(Name, <<"\n">>),
-+ nomatch = binary:match(Name,
-+ [<<"\r\n">>, <<"\r">>, <<"\n">>]),
- [<<"event: ">>, Name, $\n];
- event_name(_) ->
- [].
-@@ -327,7 +329,8 @@ event_retry(_) ->
- [].
-
- prefix_lines(IoData, Prefix) ->
-- Lines = binary:split(iolist_to_binary(IoData), <<"\n">>, [global]),
-+ Lines = binary:split(iolist_to_binary(IoData),
-+ [<<"\r\n">>, <<"\r">>, <<"\n">>], [global]),
- [[Prefix, <<": ">>, Line, $\n] || Line <- Lines].
-
- -ifdef(TEST).
-@@ -345,5 +348,60 @@ event_test() ->
- _ = event(#{retry => 5000}),
- _ = event(#{event => "test", data => "test"}),
- _ = event(#{id => "test", event => "test", data => "test"}),
-+ _ = event(#{data => "test\r\ntest"}),
-+ _ = event(#{data => "test\rtest\r"}),
-+ _ = event(#{data => "test\ntest"}),
- ok.
+diff --git a/SPECS/rabbitmq-server/CVE-2026-43968.patch b/SPECS/rabbitmq-server/CVE-2026-43968.patch
+new file mode 100644
+index 00000000000..810523f10cb
+--- /dev/null
++++ b/SPECS/rabbitmq-server/CVE-2026-43968.patch
+@@ -0,0 +1,111 @@
++From df89c6e2e6924b0820467e61bea252486e9baacd Mon Sep 17 00:00:00 2001
++From: =?UTF-8?q?Lo=C3=AFc=20Hoguin?= <essen@ninenines.eu>
++Date: Mon, 11 May 2026 12:15:58 +0200
++Subject: [PATCH] Make building SSE events more closely match the spec
+
-+event_error_test() ->
-+ {'EXIT', _} = (catch event(#{id => "test\n"})),
-+ {'EXIT', _} = (catch event(#{id => "test\r"})),
-+ {'EXIT', _} = (catch event(#{id => "test\r\n"})),
-+ {'EXIT', _} = (catch event(#{event => "test\n"})),
-+ {'EXIT', _} = (catch event(#{event => "test\r"})),
-+ {'EXIT', _} = (catch event(#{event => "test\r\n"})),
-+ ok.
++Also add many more tests.
+
-+identity_test_() ->
-+ Tests = [
-+ #{data => <<"hello">>},
-+ #{event => <<"update">>, data => <<"hello">>},
-+ #{id => <<"42">>, data => <<"hello">>},
-+ #{data => <<"a\nb">>},
-+ #{data => <<"multi\nline\ndata">>},
-+ #{event => <<"update">>, data => <<"hello">>},
-+ #{id => <<"abc">>, data => <<"x">>},
-+ #{comment => <<"c1">>, data => <<"d1">>, event => <<"e1">>, id => <<"i1">>},
-+ #{data => <<>>},
-+ #{data => <<"data with trailing newline\n">>},
-+ #{data => <<"\n">>},
-+ #{data => <<"\n\n">>},
-+ #{data => <<"">>, id => <<"1">>},
-+ #{data => <<"z">>},
-+ #{id => <<"17">>},
-+ #{data => << <<$a>> || _ <- lists:seq(1,200) >>},
-+ #{data => <<"こんにちは世界">>},
-+ #{retry => 30000, data => <<"reconnect">>}
-+ ],
-+ [{lists:flatten(io_lib:format("~0p", [V])),
-+ fun() -> true = do_identity_result(V) =:= do_identity_build_parse(V) end}
-+ || V <- Tests].
++Signed-off-by: Azure Linux Security Servicing Account <azurelinux-security@microsoft.com>
++Upstream-reference: https://github.com/ninenines/cowlib/commit/6165fc40efa159ba1cceee7e7981e790acba5d9c.patch
++---
++ deps/cowlib/src/cow_sse.erl | 64 +++++++++++++++++++++++++++++++++++--
++ 1 file changed, 61 insertions(+), 3 deletions(-)
+
-+do_identity_build_parse(Event) ->
-+ {event, Parsed, _} = parse(iolist_to_binary(event(Event)), init()),
-+ case Parsed of
-+ #{data := Data} -> Parsed#{data => iolist_to_binary(Data)};
-+ _ -> Parsed
-+ end.
++diff --git a/deps/cowlib/src/cow_sse.erl b/deps/cowlib/src/cow_sse.erl
++index 6e7081f..3503089 100644
++--- a/deps/cowlib/src/cow_sse.erl
+++++ b/deps/cowlib/src/cow_sse.erl
++@@ -301,7 +301,8 @@ event_comment(_) ->
++ [].
++
++ event_id(#{id := ID}) ->
++- nomatch = binary:match(iolist_to_binary(ID), <<"\n">>),
+++ nomatch = binary:match(iolist_to_binary(ID),
+++ [<<"\r\n">>, <<"\r">>, <<"\n">>]),
++ [<<"id: ">>, ID, $\n];
++ event_id(_) ->
++ [].
++@@ -311,7 +312,8 @@ event_name(#{event := Name0}) ->
++ is_atom(Name0) -> atom_to_binary(Name0, utf8);
++ true -> iolist_to_binary(Name0)
++ end,
++- nomatch = binary:match(Name, <<"\n">>),
+++ nomatch = binary:match(Name,
+++ [<<"\r\n">>, <<"\r">>, <<"\n">>]),
++ [<<"event: ">>, Name, $\n];
++ event_name(_) ->
++ [].
++@@ -327,7 +329,8 @@ event_retry(_) ->
++ [].
++
++ prefix_lines(IoData, Prefix) ->
++- Lines = binary:split(iolist_to_binary(IoData), <<"\n">>, [global]),
+++ Lines = binary:split(iolist_to_binary(IoData),
+++ [<<"\r\n">>, <<"\r">>, <<"\n">>], [global]),
++ [[Prefix, <<": ">>, Line, $\n] || Line <- Lines].
++
++ -ifdef(TEST).
++@@ -345,5 +348,60 @@ event_test() ->
++ _ = event(#{retry => 5000}),
++ _ = event(#{event => "test", data => "test"}),
++ _ = event(#{id => "test", event => "test", data => "test"}),
+++ _ = event(#{data => "test\r\ntest"}),
+++ _ = event(#{data => "test\rtest\r"}),
+++ _ = event(#{data => "test\ntest"}),
++ ok.
+++
+++event_error_test() ->
+++ {'EXIT', _} = (catch event(#{id => "test\n"})),
+++ {'EXIT', _} = (catch event(#{id => "test\r"})),
+++ {'EXIT', _} = (catch event(#{id => "test\r\n"})),
+++ {'EXIT', _} = (catch event(#{event => "test\n"})),
+++ {'EXIT', _} = (catch event(#{event => "test\r"})),
+++ {'EXIT', _} = (catch event(#{event => "test\r\n"})),
+++ ok.
+++
+++identity_test_() ->
+++ Tests = [
+++ #{data => <<"hello">>},
+++ #{event => <<"update">>, data => <<"hello">>},
+++ #{id => <<"42">>, data => <<"hello">>},
+++ #{data => <<"a\nb">>},
+++ #{data => <<"multi\nline\ndata">>},
+++ #{event => <<"update">>, data => <<"hello">>},
+++ #{id => <<"abc">>, data => <<"x">>},
+++ #{comment => <<"c1">>, data => <<"d1">>, event => <<"e1">>, id => <<"i1">>},
+++ #{data => <<>>},
+++ #{data => <<"data with trailing newline\n">>},
+++ #{data => <<"\n">>},
+++ #{data => <<"\n\n">>},
+++ #{data => <<"">>, id => <<"1">>},
+++ #{data => <<"z">>},
+++ #{id => <<"17">>},
+++ #{data => << <<$a>> || _ <- lists:seq(1,200) >>},
+++ #{data => <<"こんにちは世界">>},
+++ #{retry => 30000, data => <<"reconnect">>}
+++ ],
+++ [{lists:flatten(io_lib:format("~0p", [V])),
+++ fun() -> true = do_identity_result(V) =:= do_identity_build_parse(V) end}
+++ || V <- Tests].
+++
+++do_identity_build_parse(Event) ->
+++ {event, Parsed, _} = parse(iolist_to_binary(event(Event)), init()),
+++ case Parsed of
+++ #{data := Data} -> Parsed#{data => iolist_to_binary(Data)};
+++ _ -> Parsed
+++ end.
+++
+++do_identity_result(E=#{id := ID}) when map_size(E) =:= 1 ->
+++ #{
+++ last_event_id => ID
+++ };
+++do_identity_result(Event) ->
+++ #{
+++ event_type => maps:get(event, Event, <<"message">>),
+++ data => maps:get(data, Event, <<>>),
+++ last_event_id => maps:get(id, Event, <<>>)
+++ }.
++ -endif.
++--
++2.45.4
+
-+do_identity_result(E=#{id := ID}) when map_size(E) =:= 1 ->
-+ #{
-+ last_event_id => ID
-+ };
-+do_identity_result(Event) ->
-+ #{
-+ event_type => maps:get(event, Event, <<"message">>),
-+ data => maps:get(data, Event, <<>>),
-+ last_event_id => maps:get(id, Event, <<>>)
-+ }.
- -endif.
--- upstream
+++ pr
@@ -1,131 +1,142 @@
-From a4b8039ce8c93ab00867ef6b7e888822c09f4369 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Lo=C3=AFc=20Hoguin?= <essen@ninenines.eu>
-Date: Mon, 11 May 2026 10:57:28 +0200
-Subject: [PATCH] Limit length of transfer-encoding: chunked chunks
-
----
- src/cow_http_te.erl | 78 +++++++++++++++++++++++----------------------
- 1 file changed, 40 insertions(+), 38 deletions(-)
-
-diff --git a/src/cow_http_te.erl b/src/cow_http_te.erl
-index 9b20ab8..ce4f7ff 100644
---- a/src/cow_http_te.erl
-+++ b/src/cow_http_te.erl
-@@ -138,7 +138,7 @@ stream_chunked(Data, State) ->
-
- %% New chunk.
- stream_chunked(Data = << C, _/bits >>, {0, Streamed}, Acc) when C =/= $\r ->
-- case chunked_len(Data, Streamed, Acc, 0) of
-+ case chunked_len(Data, Streamed, Acc, 0, 0) of
- {next, Rest, State, Acc2} ->
- stream_chunked(Rest, State, Acc2);
- {more, State, Acc2} ->
-@@ -174,54 +174,54 @@ stream_chunked(Data, {Rem, Streamed}, Acc) when Rem > 2 ->
- {more, << Acc/binary, Data/binary >>, Rem2, {Rem2, Streamed + DataSize}}
- end.
-
--chunked_len(<< $0, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16);
--chunked_len(<< $1, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 1);
--chunked_len(<< $2, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 2);
--chunked_len(<< $3, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 3);
--chunked_len(<< $4, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 4);
--chunked_len(<< $5, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 5);
--chunked_len(<< $6, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 6);
--chunked_len(<< $7, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 7);
--chunked_len(<< $8, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 8);
--chunked_len(<< $9, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 9);
--chunked_len(<< $A, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 10);
--chunked_len(<< $B, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 11);
--chunked_len(<< $C, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 12);
--chunked_len(<< $D, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 13);
--chunked_len(<< $E, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 14);
--chunked_len(<< $F, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 15);
--chunked_len(<< $a, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 10);
--chunked_len(<< $b, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 11);
--chunked_len(<< $c, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 12);
--chunked_len(<< $d, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 13);
--chunked_len(<< $e, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 14);
--chunked_len(<< $f, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 15);
-+chunked_len(<< $0, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16, D + 1);
-+chunked_len(<< $1, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 1, D + 1);
-+chunked_len(<< $2, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 2, D + 1);
-+chunked_len(<< $3, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 3, D + 1);
-+chunked_len(<< $4, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 4, D + 1);
-+chunked_len(<< $5, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 5, D + 1);
-+chunked_len(<< $6, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 6, D + 1);
-+chunked_len(<< $7, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 7, D + 1);
-+chunked_len(<< $8, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 8, D + 1);
-+chunked_len(<< $9, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 9, D + 1);
-+chunked_len(<< $A, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 10, D + 1);
-+chunked_len(<< $B, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 11, D + 1);
-+chunked_len(<< $C, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 12, D + 1);
-+chunked_len(<< $D, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 13, D + 1);
-+chunked_len(<< $E, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 14, D + 1);
-+chunked_len(<< $F, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 15, D + 1);
-+chunked_len(<< $a, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 10, D + 1);
-+chunked_len(<< $b, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 11, D + 1);
-+chunked_len(<< $c, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 12, D + 1);
-+chunked_len(<< $d, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 13, D + 1);
-+chunked_len(<< $e, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 14, D + 1);
-+chunked_len(<< $f, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 15, D + 1);
- %% Chunk extensions.
- %%
- %% Note that we currently skip the first character we encounter here,
- %% and not in the skip_chunk_ext function. If we latter implement
- %% chunk extensions (unlikely) we will need to change this clause too.
--chunked_len(<< C, R/bits >>, S, A, Len) when ?IS_WS(C); C =:= $; -> skip_chunk_ext(R, S, A, Len, 0);
-+chunked_len(<< C, R/bits >>, S, A, Len, _) when ?IS_WS(C); C =:= $; -> skip_chunk_ext(R, S, A, Len, 0);
- %% Final chunk.
- %%
- %% When trailers are following we simply return them as the Rest.
- %% Then the user code can decide to call the stream_trailers function
- %% to parse them. The user can therefore ignore trailers as necessary
- %% if they do not wish to handle them.
--chunked_len(<< "\r\n\r\n", R/bits >>, _, <<>>, 0) -> {done, no_trailers, R};
--chunked_len(<< "\r\n\r\n", R/bits >>, _, A, 0) -> {done, A, no_trailers, R};
--chunked_len(<< "\r\n", R/bits >>, _, <<>>, 0) when byte_size(R) > 2 -> {done, trailers, R};
--chunked_len(<< "\r\n", R/bits >>, _, A, 0) when byte_size(R) > 2 -> {done, A, trailers, R};
--chunked_len(_, _, _, 0) -> more;
-+chunked_len(<< "\r\n\r\n", R/bits >>, _, <<>>, 0, _) -> {done, no_trailers, R};
-+chunked_len(<< "\r\n\r\n", R/bits >>, _, A, 0, _) -> {done, A, no_trailers, R};
-+chunked_len(<< "\r\n", R/bits >>, _, <<>>, 0, _) when byte_size(R) > 2 -> {done, trailers, R};
-+chunked_len(<< "\r\n", R/bits >>, _, A, 0, _) when byte_size(R) > 2 -> {done, A, trailers, R};
-+chunked_len(_, _, _, 0, _) -> more;
- %% Normal chunk. Add 2 to Len for the trailing \r\n.
--chunked_len(<< "\r\n", R/bits >>, S, A, Len) -> {next, R, {Len + 2, S}, A};
--chunked_len(<<"\r">>, _, <<>>, _) -> more;
--chunked_len(<<"\r">>, S, A, _) -> {more, {0, S}, A};
--chunked_len(<<>>, _, <<>>, _) -> more;
--chunked_len(<<>>, S, A, _) -> {more, {0, S}, A}.
--
--skip_chunk_ext(R = << "\r", _/bits >>, S, A, Len, _) -> chunked_len(R, S, A, Len);
--skip_chunk_ext(R = <<>>, S, A, Len, _) -> chunked_len(R, S, A, Len);
-+chunked_len(<< "\r\n", R/bits >>, S, A, Len, _) -> {next, R, {Len + 2, S}, A};
-+chunked_len(<<"\r">>, _, <<>>, _, _) -> more;
-+chunked_len(<<"\r">>, S, A, _, _) -> {more, {0, S}, A};
-+chunked_len(<<>>, _, <<>>, _, _) -> more;
-+chunked_len(<<>>, S, A, _, _) -> {more, {0, S}, A}.
+diff --git a/SPECS/rabbitmq-server/CVE-2026-7790.patch b/SPECS/rabbitmq-server/CVE-2026-7790.patch
+new file mode 100644
+index 00000000000..ffe3d9fd16b
+--- /dev/null
++++ b/SPECS/rabbitmq-server/CVE-2026-7790.patch
+@@ -0,0 +1,136 @@
++From d83b148d75a76db9a42b6c0dc50526a8d5b0ba28 Mon Sep 17 00:00:00 2001
++From: =?UTF-8?q?Lo=C3=AFc=20Hoguin?= <essen@ninenines.eu>
++Date: Mon, 11 May 2026 10:57:28 +0200
++Subject: [PATCH] Limit length of transfer-encoding: chunked chunks
+
-+skip_chunk_ext(R = << "\r", _/bits >>, S, A, Len, _) -> chunked_len(R, S, A, Len, 0);
-+skip_chunk_ext(R = <<>>, S, A, Len, _) -> chunked_len(R, S, A, Len, 0);
- %% We skip up to 128 characters of chunk extensions. The value
- %% is hardcoded: chunk extensions are very rarely seen in the
- %% wild and Cowboy doesn't do anything with them anyway.
-@@ -305,6 +305,7 @@ stream_chunked_n_passes_test() ->
- {more, <<"abc">>, 2, {2, 3}} = stream_chunked(<<"\n3\r\nabc">>, {1, 0}),
- {more, <<"abc">>, {1, 3}} = stream_chunked(<<"3\r\nabc\r">>, {0, 0}),
- {more, <<"abc">>, <<"123">>, {0, 3}} = stream_chunked(<<"3\r\nabc\r\n123">>, {0, 0}),
-+ {more, <<>>, 18446744073709551617, _} = stream_chunked(<<"FFFFFFFFFFFFFFFF\r\n">>, {0, 0}),
- ok.
-
- stream_chunked_dripfeed_test() ->
-@@ -339,7 +340,8 @@ stream_chunked_dripfeed2_test() ->
- stream_chunked_error_test_() ->
- Tests = [
- {<<>>, undefined},
-- {<<"\n\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa">>, {2, 0}}
-+ {<<"\n\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa">>, {2, 0}},
-+ {<<"10000000000000000\r\n">>, {0, 0}}
- ],
- [{lists:flatten(io_lib:format("value ~p state ~p", [V, S])),
- fun() -> {'EXIT', _} = (catch stream_chunked(V, S)) end}
++Signed-off-by: Azure Linux Security Servicing Account <azurelinux-security@microsoft.com>
++Upstream-reference: https://github.com/ninenines/cowlib/commit/a4b8039ce8c93ab00867ef6b7e888822c09f4369.patch
++---
++ deps/cowlib/src/cow_http_te.erl | 78 +++++++++++++++++----------------
++ 1 file changed, 40 insertions(+), 38 deletions(-)
++
++diff --git a/deps/cowlib/src/cow_http_te.erl b/deps/cowlib/src/cow_http_te.erl
++index e3473cf..c78b5db 100644
++--- a/deps/cowlib/src/cow_http_te.erl
+++++ b/deps/cowlib/src/cow_http_te.erl
++@@ -138,7 +138,7 @@ stream_chunked(Data, State) ->
++
++ %% New chunk.
++ stream_chunked(Data = << C, _/bits >>, {0, Streamed}, Acc) when C =/= $\r ->
++- case chunked_len(Data, Streamed, Acc, 0) of
+++ case chunked_len(Data, Streamed, Acc, 0, 0) of
++ {next, Rest, State, Acc2} ->
++ stream_chunked(Rest, State, Acc2);
++ {more, State, Acc2} ->
++@@ -174,54 +174,54 @@ stream_chunked(Data, {Rem, Streamed}, Acc) when Rem > 2 ->
++ {more, << Acc/binary, Data/binary >>, Rem2, {Rem2, Streamed + DataSize}}
++ end.
++
++-chunked_len(<< $0, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16);
++-chunked_len(<< $1, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 1);
++-chunked_len(<< $2, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 2);
++-chunked_len(<< $3, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 3);
++-chunked_len(<< $4, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 4);
++-chunked_len(<< $5, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 5);
++-chunked_len(<< $6, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 6);
++-chunked_len(<< $7, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 7);
++-chunked_len(<< $8, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 8);
++-chunked_len(<< $9, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 9);
++-chunked_len(<< $A, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 10);
++-chunked_len(<< $B, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 11);
++-chunked_len(<< $C, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 12);
++-chunked_len(<< $D, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 13);
++-chunked_len(<< $E, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 14);
++-chunked_len(<< $F, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 15);
++-chunked_len(<< $a, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 10);
++-chunked_len(<< $b, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 11);
++-chunked_len(<< $c, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 12);
++-chunked_len(<< $d, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 13);
++-chunked_len(<< $e, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 14);
++-chunked_len(<< $f, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 15);
+++chunked_len(<< $0, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16, D + 1);
+++chunked_len(<< $1, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 1, D + 1);
+++chunked_len(<< $2, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 2, D + 1);
+++chunked_len(<< $3, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 3, D + 1);
+++chunked_len(<< $4, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 4, D + 1);
+++chunked_len(<< $5, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 5, D + 1);
+++chunked_len(<< $6, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 6, D + 1);
+++chunked_len(<< $7, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 7, D + 1);
+++chunked_len(<< $8, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 8, D + 1);
+++chunked_len(<< $9, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 9, D + 1);
+++chunked_len(<< $A, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 10, D + 1);
+++chunked_len(<< $B, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 11, D + 1);
+++chunked_len(<< $C, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 12, D + 1);
+++chunked_len(<< $D, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 13, D + 1);
+++chunked_len(<< $E, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 14, D + 1);
+++chunked_len(<< $F, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 15, D + 1);
+++chunked_len(<< $a, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 10, D + 1);
+++chunked_len(<< $b, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 11, D + 1);
+++chunked_len(<< $c, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 12, D + 1);
+++chunked_len(<< $d, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 13, D + 1);
+++chunked_len(<< $e, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 14, D + 1);
+++chunked_len(<< $f, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 15, D + 1);
++ %% Chunk extensions.
++ %%
++ %% Note that we currently skip the first character we encounter here,
++ %% and not in the skip_chunk_ext function. If we latter implement
++ %% chunk extensions (unlikely) we will need to change this clause too.
++-chunked_len(<< C, R/bits >>, S, A, Len) when ?IS_WS(C); C =:= $; -> skip_chunk_ext(R, S, A, Len, 0);
+++chunked_len(<< C, R/bits >>, S, A, Len, _) when ?IS_WS(C); C =:= $; -> skip_chunk_ext(R, S, A, Len, 0);
++ %% Final chunk.
++ %%
++ %% When trailers are following we simply return them as the Rest.
++ %% Then the user code can decide to call the stream_trailers function
++ %% to parse them. The user can therefore ignore trailers as necessary
++ %% if they do not wish to handle them.
++-chunked_len(<< "\r\n\r\n", R/bits >>, _, <<>>, 0) -> {done, no_trailers, R};
++-chunked_len(<< "\r\n\r\n", R/bits >>, _, A, 0) -> {done, A, no_trailers, R};
++-chunked_len(<< "\r\n", R/bits >>, _, <<>>, 0) when byte_size(R) > 2 -> {done, trailers, R};
++-chunked_len(<< "\r\n", R/bits >>, _, A, 0) when byte_size(R) > 2 -> {done, A, trailers, R};
++-chunked_len(_, _, _, 0) -> more;
+++chunked_len(<< "\r\n\r\n", R/bits >>, _, <<>>, 0, _) -> {done, no_trailers, R};
+++chunked_len(<< "\r\n\r\n", R/bits >>, _, A, 0, _) -> {done, A, no_trailers, R};
+++chunked_len(<< "\r\n", R/bits >>, _, <<>>, 0, _) when byte_size(R) > 2 -> {done, trailers, R};
+++chunked_len(<< "\r\n", R/bits >>, _, A, 0, _) when byte_size(R) > 2 -> {done, A, trailers, R};
+++chunked_len(_, _, _, 0, _) -> more;
++ %% Normal chunk. Add 2 to Len for the trailing \r\n.
++-chunked_len(<< "\r\n", R/bits >>, S, A, Len) -> {next, R, {Len + 2, S}, A};
++-chunked_len(<<"\r">>, _, <<>>, _) -> more;
++-chunked_len(<<"\r">>, S, A, _) -> {more, {0, S}, A};
++-chunked_len(<<>>, _, <<>>, _) -> more;
++-chunked_len(<<>>, S, A, _) -> {more, {0, S}, A}.
++-
++-skip_chunk_ext(R = << "\r", _/bits >>, S, A, Len, _) -> chunked_len(R, S, A, Len);
++-skip_chunk_ext(R = <<>>, S, A, Len, _) -> chunked_len(R, S, A, Len);
+++chunked_len(<< "\r\n", R/bits >>, S, A, Len, _) -> {next, R, {Len + 2, S}, A};
+++chunked_len(<<"\r">>, _, <<>>, _, _) -> more;
+++chunked_len(<<"\r">>, S, A, _, _) -> {more, {0, S}, A};
+++chunked_len(<<>>, _, <<>>, _, _) -> more;
+++chunked_len(<<>>, S, A, _, _) -> {more, {0, S}, A}.
+++
+++skip_chunk_ext(R = << "\r", _/bits >>, S, A, Len, _) -> chunked_len(R, S, A, Len, 0);
+++skip_chunk_ext(R = <<>>, S, A, Len, _) -> chunked_len(R, S, A, Len, 0);
++ %% We skip up to 128 characters of chunk extensions. The value
++ %% is hardcoded: chunk extensions are very rarely seen in the
++ %% wild and Cowboy doesn't do anything with them anyway.
++@@ -305,6 +305,7 @@ stream_chunked_n_passes_test() ->
++ {more, <<"abc">>, 2, {2, 3}} = stream_chunked(<<"\n3\r\nabc">>, {1, 0}),
++ {more, <<"abc">>, {1, 3}} = stream_chunked(<<"3\r\nabc\r">>, {0, 0}),
++ {more, <<"abc">>, <<"123">>, {0, 3}} = stream_chunked(<<"3\r\nabc\r\n123">>, {0, 0}),
+++ {more, <<>>, 18446744073709551617, _} = stream_chunked(<<"FFFFFFFFFFFFFFFF\r\n">>, {0, 0}),
++ ok.
++
++ stream_chunked_dripfeed_test() ->
++@@ -339,7 +340,8 @@ stream_chunked_dripfeed2_test() ->
++ stream_chunked_error_test_() ->
++ Tests = [
++ {<<>>, undefined},
++- {<<"\n\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa">>, {2, 0}}
+++ {<<"\n\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa">>, {2, 0}},
+++ {<<"10000000000000000\r\n">>, {0, 0}}
++ ],
++ [{lists:flatten(io_lib:format("value ~p state ~p", [V, S])),
++ fun() -> {'EXIT', _} = (catch stream_chunked(V, S)) end}
++--
++2.45.4
++
Verdict
❌ CHANGES REQUESTED — Please address the issues flagged above.
Kanishk-Bansal
left a comment
Patch Analysis (Both patches match upstream; nothing actionable from the AI test analysis.)
- Buddy Build
- Patches applied during the build (check rpm.log)
- Patches include an upstream reference
- PR has security tag
Auto Patch rabbitmq-server for CVE-2026-7790, CVE-2026-43968.
Autosec pipeline run -> https://dev.azure.com/mariner-org/mariner/_build/results?buildId=1118726&view=results
Merge Checklist
All boxes should be checked before merging the PR (just tick any boxes which don't apply to this PR)
- All impacted packages (including *-static subpackages, etc.) have had their Release tag incremented.
- cgmanifest files updated (./cgmanifest.json, ./toolkit/scripts/toolchain/cgmanifest.json, .github/workflows/cgmanifest.json)
- License files updated (./LICENSES-AND-NOTICES/SPECS/data/licenses.json, ./LICENSES-AND-NOTICES/SPECS/LICENSES-MAP.md, ./LICENSES-AND-NOTICES/SPECS/LICENSE-EXCEPTIONS.PHOTON)
- *.signatures.json files updated
- sudo make go-tidy-all and sudo make go-test-coverage pass

Summary
What does the PR accomplish, why was it needed?
Change Log
Does this affect the toolchain?
YES/NO
Associated issues
Links to CVEs
Test Methodology