Best value for nginx's ssl_buffer_size option? #63
Comments
1400 bytes (actually, it should probably be even a bit lower) is the recommended setting for interactive traffic where you want to avoid any unnecessary delays due to packet loss/jitter of fragments of the TLS record. However, packing each TLS record into a dedicated packet does add some framing overhead, and you probably want larger record sizes if you're streaming larger (and less latency-sensitive) data. 4K is an in-between value that's "reasonable" but not great for either case. For more, see: http://chimera.labs.oreilly.com/books/1230000000545/ch04.html#TLS_RECORD_SIZE
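As a sketch, the latency-oriented setting described above might look like this in nginx (the exact value is illustrative, chosen so each TLS record fits in a single packet):

```nginx
server {
    listen 443 ssl;

    # Latency-oriented: cap each TLS record so that the record payload
    # plus TLS/TCP/IP framing fits inside one ~1500-byte Ethernet MTU.
    # A lost packet then delays only one record, not several.
    ssl_buffer_size 1400;
}
```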
Thank you @igrigorik, good answer. We'll look at our traffic and make a choice from there.
Seems to be set to 16k by default: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size |
Setting it to 64k or larger will let nginx take advantage of the multibuffer feature in OpenSSL when using AES in CBC mode which has a significant impact on performance. https://software.intel.com/en-us/articles/improving-openssl-performance |
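For the throughput-oriented case, a sketch (the 64k figure comes from the multibuffer discussion above):

```nginx
server {
    listen 443 ssl;

    # Throughput-oriented: large buffers amortize per-record framing
    # overhead and, with AES in CBC mode, give OpenSSL's multibuffer
    # code several records to encrypt in parallel.
    ssl_buffer_size 64k;
}
```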
But how about the note "To minimize Time To First Byte it may be beneficial to use smaller values" in the nginx docs linked above?
It depends on your data. If you are serving larger files the gains from multibuffer on the block encryption are going to be significant and will dominate this latency. If you are serving mostly small files, then not so much. If you are using a different encryption mode like GCM, then buffer size will have less of an impact. |
@skynet @mechalas there isn't one perfect value. This is why other servers implement dynamic record size - e.g. http://chimera.labs.oreilly.com/books/1230000000545/ch04.html#tls_optimizations_at_google. |
Well, yeah. But nginx doesn't, which is why this was a question. :) |
@igrigorik - have you seen this study: http://conferences.sigcomm.org/co-next/2013/program/p303.pdf |
@skynet not sure I follow, what does Android have to do with this? There are two things to consider here: latency and throughput. Larger records improve throughput; smaller records run a much lower risk of imposing latency penalties. Depending on the type of traffic you're serving, you'll want to adjust accordingly.
@igrigorik - I get the feeling that mobile devices on cellular networks are not by any means identical in their networking nature to devices on wired or WiFi networks, and that web servers should therefore adjust delivery to them dynamically. While everyone's looking at HTTP/2 as an out-of-the-box performance improvement, the conclusion of the aforementioned study sounds bitter: "In cellular networks, there are fundamental interactions across protocol layers that limit the performance of both SPDY as well as HTTP. As a result, there is no clear performance improvement with SPDY in cellular networks, in contrast to existing studies on wired and WiFi networks." I can open another issue specifically on cellular vs. wired and WiFi networks, hoping that you, as a top Web authority and master of web performance, can shed some light on this topic. I understand that engineers at Google have implemented dynamic record sizing; as Feynman put it, "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled."
@skynet I've suggested the dynamic approach to nginx on many occasions. ATS and HAProxy are using that approach, others are looking into it as well, and I'm hoping nginx will (eventually) follow. Re: mobile, as a broad statement, yes of course that's true, but at the end of the day you're back to optimizing the same things: latency and bandwidth. In that regard nothing fundamentally changes, at least as far as record size is concerned.
Thanks, appreciate the insights. We hope that @nginx will catch up somehow and provide an even better HTTP/2 implementation.
But for now, what did you guys end up using for ssl_buffer_size?
For interactive traffic (e.g. HTML, CSS, etc), 4KB or less. |
And for a seafile reverse proxy? Files transferred are generally pretty big there, average 2 MB per file. |
Good article. I think I'll go with 8k then for this particular use case. We barely, if at all, suffer from congestion in the networks I use.
@igrigorik, in your Velocity EU 2014 talk you recommend 4k as a reasonable value for nginx's ssl_buffer_size option (in the absence of dynamic record sizing). However, in the nginx.conf in this repo it's set to 1400, to fit in one MTU.
Has your recommendation on this changed or is there some other reason for it to be 1400 here?
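For reference, a rough sketch of the arithmetic behind the 1400-byte figure (byte counts assume IPv4 with no IP or TCP options):

```nginx
# A typical Ethernet MTU is 1500 bytes. Subtracting 20 bytes for the
# IPv4 header and 20 bytes for the TCP header leaves 1460 bytes of
# TCP payload. A 1400-byte plaintext buffer leaves roughly 60 bytes
# of headroom for the TLS record header, MAC, and padding, so the
# whole record can fit in a single TCP segment.
ssl_buffer_size 1400;
```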