
h2: Currently not honoring request limits #3709

Closed
dridi opened this issue Oct 5, 2021 · 2 comments

dridi commented Oct 5, 2021

The following test case passes, but it should fail. Attempting the same request with HTTP/1 definitely fails. I'm not sure what the correct test case should look like; my assumption is that stream 1 should be reset and the connection preserved.

varnishtest "h2 req limits"

server s1 {
        rxreq
        expect req.url ~ "(/0123456789){30}"
        expect req.http.foo ~ "bar(, 0123456789){1300}"
        txresp
} -start

varnish v1 -cliok "param.set feature +http2"
varnish v1 -cliok "param.set http_req_size 256b"
varnish v1 -cliok "param.set http_req_hdr_len 40b"

varnish v1 -vcl+backend { } -start

client c1 {
        stream 1 {
                txreq -url "${string,repeat,30,/0123456789}" \
                    -hdr foo {bar${string,repeat,1300,", 0123456789"}}
                rxresp
                expect resp.status == 200
        } -run
} -run
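A corrected test might look roughly like this. This is only a sketch of the assumed desired behaviour (the offending stream is reset while the connection survives), using the vtc stream commands for receiving a reset; the exact error code to expect is an open question:

```
client c1 {
        stream 1 {
                txreq -url "${string,repeat,30,/0123456789}" \
                    -hdr foo {bar${string,repeat,1300,", 0123456789"}}
                # Expect a stream reset instead of a response.
                rxrst
                expect rst.err == REFUSED_STREAM
        } -run
        # The connection should remain usable for subsequent streams.
} -run
```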
walid-git commented
Regarding the http_req_size parameter: it is currently not taken into account for h2. Instead, the checked limit is h2_max_frame_size + 9 (the HTTP/2 frame header size), and h2_max_frame_size defaults to 16k, the protocol minimum.
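To put numbers on this, a quick sketch comparing the literal (uncompressed) field sizes generated by the test case above against the configured limits:

```python
# Request fields generated by the test case above.
url = "/0123456789" * 30
foo = "bar" + ", 0123456789" * 1300

http_req_hdr_len = 40      # configured in the test (40b)
http_req_size = 256        # configured in the test (256b)
h2_max_frame_size = 16384  # default, the protocol minimum

print(len(url))  # 330   -> far above http_req_hdr_len
print(len(foo))  # 15603 -> far above both configured limits
# Yet once HPACK/Huffman compression shrinks the HEADERS payload,
# it can still fit under the only limit actually enforced:
print(h2_max_frame_size + 9)  # 16393
```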

dridi commented Dec 8, 2022

We can't predict the length of headers until we decode them. They may be indexed in the session's HPACK dynamic table or compressed with Huffman coding. When I registered this issue I already checked left and right that our HPACK decoder doesn't panic when it runs out of workspace.

Ideally, we should successfully decode an HPACK block even when the number of headers, the header lengths, or the overall request size go over the configured limits. If we fail to decode an HPACK block we have to fail the entire h2 session, because HPACK state is shared across the connection; what we should strive for is failing only the offending stream and nothing else.

How should this stream fail? I did not research it last year, but I suspect it should simply be refused. RFC 7540 hopefully has an answer.
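For reference, RFC 7540 §5.4.2 does cover this: a stream error is signalled with a RST_STREAM frame carrying an error code such as REFUSED_STREAM (0x7), and the connection stays open. A minimal sketch of the wire format per §6.4 — not Varnish code, just the RFC layout:

```python
import struct

RST_STREAM = 0x3      # RFC 7540 §6.4 frame type
REFUSED_STREAM = 0x7  # RFC 7540 §7 error code

def rst_stream_frame(stream_id: int, error_code: int) -> bytes:
    """Build a raw RST_STREAM frame: 9-byte frame header + 4-byte error code."""
    payload = struct.pack(">I", error_code)
    # Frame header: 24-bit length, 8-bit type, 8-bit flags, 31-bit stream id.
    header = struct.pack(">I", len(payload))[1:] + bytes([RST_STREAM, 0])
    header += struct.pack(">I", stream_id & 0x7FFFFFFF)
    return header + payload

frame = rst_stream_frame(1, REFUSED_STREAM)
print(frame.hex())  # 00000403000000000100000007 (13 bytes)
```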

dridi added a commit that referenced this issue Apr 4, 2024
dridi closed this as completed in 7ccffe7 Apr 4, 2024
Refs #3709
Refs #3892
