SSL_read failed: OpenSSL error #6481
What commit hash are you using?
Can you reproduce with a modified version of the HTTP hello world example?
Can you also include some verbose output from curl? Can you reproduce with
@rkapsi also a big thank you for trying the SNAPSHOTS and providing feedback!
Also, did you try to enable
Still trying to come up with a reproducer, but I've narrowed it down to a commit. So it's either d06990f or 2.0.0.Beta2. I do not enable

Other random blurbs: I'm working on something like an HTTP cache that uses CompositeByteBufs to aggregate the HttpContent bytes. The first curl request succeeds (passing through the HttpContent bytes; aggregation happens on the side) and subsequent requests fail (this time returning the aggregated data). If I take out the caching layer, curl appears to be fine, but Google Chrome and FF still fail. It seems to have something to do with timing and/or the size of the ByteBufs being passed around.
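For what it's worth, the caching pattern described here can be sketched in plain Java, with byte arrays standing in for Netty's HttpContent/CompositeByteBuf. All names below are illustrative, not the actual application code:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Sketch of the described caching pattern: content chunks are passed through
// to the client while a copy is aggregated on the side, so subsequent
// requests can be answered from the aggregate.
public class ResponseCacheSketch {
    private final ByteArrayOutputStream aggregate = new ByteArrayOutputStream();
    private final List<byte[]> passedThrough = new ArrayList<>();
    private byte[] cached; // set once the full response has been seen

    // Called for each content chunk of the first response.
    void onContent(byte[] chunk, boolean last) {
        passedThrough.add(chunk);                // forward downstream unchanged
        aggregate.write(chunk, 0, chunk.length); // aggregate a copy on the side
        if (last) {
            cached = aggregate.toByteArray();
        }
    }

    // Subsequent requests are served from the aggregate.
    byte[] cachedResponse() {
        return cached;
    }
}
```

This mirrors the two code paths in the report: the first request streams the individual chunks, later requests return the aggregated buffer.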
@rkapsi pro tip: you can drag and drop images straight into the comment input.
Not new ... but just trying to understand the use case. We may not properly support this flag yet (until PR #6365). Also, the image link you provided in #6481 (comment) gives a 404.
@Scottmitch uploaded the images straight into the ticket as per @johnou's tip.
@Scottmitch is #6488 related?
Managed to reproduce it by constructing a CompositeByteBuf the way my custom HTTP response aggregation would do it.

```java
import javax.net.ssl.SSLEngine;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslHandler;
import io.netty.handler.ssl.SslProvider;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import io.netty.util.ReferenceCountUtil;

public class Issue6481 {

    private static final int PORT = 9443;

    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        Channel channel = newHttpsServer(group, PORT);
        System.out.println("\n>>> Try 'curl -v -k https://localhost:" + PORT + "' <<<\n");
    }

    private static Channel newHttpsServer(EventLoopGroup group, int port) throws Exception {
        SelfSignedCertificate cert = new SelfSignedCertificate();
        SslContext context = SslContextBuilder.forServer(cert.certificate(), cert.privateKey())
                .sslProvider(SslProvider.OPENSSL)
                .build();

        ServerBootstrap bootstrap = new ServerBootstrap()
                .channel(NioServerSocketChannel.class)
                .group(group)
                .option(ChannelOption.SO_REUSEADDR, true)
                .childHandler(new HttpsServer(context));

        return bootstrap.bind(port)
                .syncUninterruptibly()
                .channel();
    }

    private static class HttpsServer extends ChannelInitializer<Channel> {

        private final SslContext context;

        public HttpsServer(SslContext context) {
            this.context = context;
        }

        @Override
        protected void initChannel(Channel ch) throws Exception {
            ChannelPipeline pipeline = ch.pipeline();
            SSLEngine engine = context.newEngine(ch.alloc());
            pipeline.addLast(new SslHandler(engine));
            pipeline.addLast(new HttpServerCodec());
            pipeline.addLast(new HttpRequestHandler());
        }

        private static class HttpRequestHandler extends ChannelInboundHandlerAdapter {

            private static final FullHttpResponse RESPONSE = newResponse(ByteBufAllocator.DEFAULT);

            static {
                System.out.println(">>> RESPONSE: " + RESPONSE + "\n");
                CompositeByteBuf content = (CompositeByteBuf) RESPONSE.content();
                content.forEach(buf -> System.out.println(">>> COMPONENT: " + buf + "\n"));
            }

            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                ReferenceCountUtil.release(msg);
                if (msg instanceof HttpRequest) {
                    FullHttpResponse response = RESPONSE.retainedDuplicate();
                    System.out.println(">>> response: " + response + "\n");
                    ctx.writeAndFlush(response)
                            .addListener(ChannelFutureListener.CLOSE);
                }
            }

            private static FullHttpResponse newResponse(ByteBufAllocator alloc) {
                ByteBuf input = alloc.buffer();
                input.writeBytes(new byte[40279]);

                CompositeByteBuf content = alloc.compositeBuffer();
                content.addComponent(true, input.readRetainedSlice(469));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(1024));
                content.addComponent(true, input.readRetainedSlice(352));
                content.addComponent(true, input.readRetainedSlice(16384));
                content.addComponent(true, input.readRetainedSlice(8896));
                content.addComponent(true, input.readRetainedSlice(2914));

                // Make sure we've consumed all the bytes
                if (content.readableBytes() != input.readerIndex()) {
                    throw new IllegalStateException(content.readableBytes() + " vs. " + input.readerIndex());
                }

                // This shouldn't release the input
                if (input.release()) {
                    throw new IllegalStateException();
                }

                FullHttpResponse response = new DefaultFullHttpResponse(
                        HttpVersion.HTTP_1_1, HttpResponseStatus.OK, content);
                HttpHeaders headers = response.headers();
                headers.set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.APPLICATION_OCTET_STREAM);
                headers.set(HttpHeaderNames.CONTENT_LENGTH, content.readableBytes());
                headers.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
                return response;
            }
        }
    }
}
```
@rkapsi thanks!
@rkapsi you rock.
Running this code against 4.1.8 and 1.1.33.Fork26, or 007048d and 2.0.0.Beta1, works.
@rkapsi looking atm... this for sure has something to do with the
That repro represents the "HTTP response cache" scenario (doing my own
Thanks @rkapsi ... looking now.
@Scottmitch unfortunately negative. The reproducer is succeeding, but our server is still failing, in particular when I use a browser.
@Scottmitch it's working when I disable H2. I guess that's the difference between using curl and a browser. CompositeByteBuf inside CompositeByteBuf? I recall Netty's H2 code doing its own round of aggregation.
…r size

Motivation:

When we do a wrap operation we calculate the maximum size of the destination buffer ahead of time, and return a BUFFER_OVERFLOW exception if the destination buffer is not big enough. However, if there is a CompositeByteBuf, the wrap operation may consist of multiple ByteBuffers, and each incurs its own overhead during encryption. We currently don't account for the encryption overhead when there are multiple ByteBuffers and assume the overhead applies only once to the entire input size.

If there is not enough room to write an entire encrypted packet into the BIO, SSL_write will return -1 despite having actually written content to the BIO. We then attempt to retry the write with a bigger buffer, but because SSL_write is stateful, the remaining bytes from the previous operation are put into the BIO. This results in the second half of the encrypted data being sent to the peer, which is not of proper format; the peer will be confused and ultimately not get the expected data (which may result in a fatal error). Because SSL_write returns -1 in this case, we have no way to know how many bytes were actually consumed, so the best we can do is ensure we always allocate a destination buffer with enough space to guarantee the write operation completes synchronously.

Modifications:

- SslHandler#allocateNetBuf should take into account how many ByteBuffers will be wrapped and apply the encryption overhead for each
- Include the TLS header length in the overhead computation

Result:

Fixes netty#6481
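The sizing bug described in the commit message comes down to simple arithmetic. A minimal sketch of the old versus fixed estimate, using an illustrative overhead constant rather than Netty's actual values:

```java
public class NetBufSizing {
    // Illustrative per-record TLS overhead (assumption, not Netty's exact constant):
    // 5-byte record header + 20-byte MAC + 16 bytes padding/IV.
    static final int MAX_TLS_RECORD_OVERHEAD = 5 + 20 + 16;

    // Old (buggy) estimate: overhead applied once to the whole input,
    // regardless of how many ByteBuffers will be wrapped.
    static int oldEstimate(int totalBytes) {
        return totalBytes + MAX_TLS_RECORD_OVERHEAD;
    }

    // Fixed estimate: each component of a CompositeByteBuf may produce its
    // own TLS record, so the per-record overhead applies per component.
    static int fixedEstimate(int[] componentSizes) {
        int size = 0;
        for (int c : componentSizes) {
            size += c + MAX_TLS_RECORD_OVERHEAD;
        }
        return size;
    }

    public static void main(String[] args) {
        int[] components = {469, 1024, 1024, 16384, 2914};
        int total = 0;
        for (int c : components) {
            total += c;
        }
        // The old estimate under-allocates whenever there is more than one
        // component, which is what makes SSL_write fail mid-record.
        System.out.println("old:   " + oldEstimate(total));
        System.out.println("fixed: " + fixedEstimate(components));
    }
}
```

With the under-sized destination buffer, the retried SSL_write emitted the leftover half of the previous record, which is why the peer saw malformed data.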
@Scottmitch commit 677f5e2 appears to work. My tests on Friday were against 5b57fa0508babff04afccc82bdb5cef916b22270.
🎉
@rkapsi - Thanks for verifying! Sorry about the multiple commits ... had to clean some stuff up along the way.
Netty 4.1.9-SNAPSHOT with netty-tcnative 2.0.0.Beta6
I'll try to provide a repro but I'm seeing the following error in our application when I upgrade to Netty 4.1.9-SNAPSHOT and netty-tcnative 2.0.0.Beta6.
From what I can tell, the SSL handshake completes, I receive the HTTP request, and I respond with `ctx.writeAndFlush(FullHttpResponse)`. I see the response data in curl, but it cuts off randomly and both ends report the following errors:

Another observation is that curl seems to be a bit more stable: some requests do succeed. Browsers (I tested Chrome and FF), on the other hand, fail pretty reliably (every time). The problem goes away as soon as I switch back to Netty 4.1.8.
This resembles what we observed in ticket #6466 (minus the exception).