Description
Logstash information:
Please include the following information:
- Logstash version (e.g. `bin/logstash --version`)
Using bundled JDK: /usr/share/logstash/jdk
logstash 8.5.3
- Logstash installation source (e.g. built from source, with a package manager: DEB/RPM, expanded from tar or zip archive, docker)
Installed RPM package from https://artifacts.elastic.co/downloads/logstash/logstash-8.5-x86_64.rpm
- How is Logstash being run (e.g. as a service/service manager: systemd, upstart, etc. Via command line, docker/kubernetes)
As a systemd service
- How was the Logstash Plugin installed
It either came bundled with the RPM or it was installed via `/usr/share/logstash/bin/logstash-plugin install`.
- OS version (`uname -a` if on a Unix-like system):
Linux zg-vsl-lring01 6.1.6-200.fc37.x86_64 #1 SMP PREEMPT_DYNAMIC Sat Jan 14 16:55:06 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
The short version of the problem is that the TCP input plugin handles TLS and the PROXY protocol in the wrong order when `proxy_protocol` and `ssl_enable` are both set.
If the connection originates from an HAProxy backend with both TLS and the PROXY protocol enabled, HAProxy first sends the proxy information and only then sets up the TLS connection.
Logstash, however, tries to answer the connection using the default OpenSSL::SSL::SSLSocket and only reads the proxy information after the TLS connection has been set up. The relevant code is here:
The correct behaviour when handling the proxy protocol is to wait for the proxy information immediately after the TCP connection is established and only then handle any higher-layer protocol (a sketch of that ordering follows the quoted specification below):
> The receiver MUST NOT start processing the connection before it receives a
> complete and valid PROXY protocol header. This is particularly important for
> protocols where the receiver is expected to speak first (eg: SMTP, FTP or SSH).
> The receiver may apply a short timeout and decide to abort the connection if
> the protocol header is not seen within a few seconds (at least 3 seconds to
> cover a TCP retransmit).
Source: [http://www.haproxy.org/download/2.7/doc/proxy-protocol.txt](http://www.haproxy.org/download/2.7/doc/proxy-protocol.txt)
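To make the required ordering concrete, here is a minimal sketch (not the plugin's actual implementation) of how a Netty-based pipeline, like the one visible in the stack trace below, would have to be wired: the PROXY header decoder must sit in front of the TLS handler so the header is consumed from the clear-text stream before the handshake starts. The class name is made up and the certificate paths are simply the ones from the reproduction config below.

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.haproxy.HAProxyMessageDecoder;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

import java.io.File;

// Sketch only: illustrates the handler order required by the PROXY protocol spec.
public class ProxyThenTlsInitializer extends ChannelInitializer<SocketChannel> {
    private final SslContext sslContext;

    public ProxyThenTlsInitializer() throws Exception {
        // Placeholder paths, matching the server cert/key used in the input config below.
        this.sslContext = SslContextBuilder
                .forServer(new File("/etc/logstash/certs/server.crt"),
                           new File("/etc/logstash/certs/server.key"))
                .build();
    }

    @Override
    protected void initChannel(SocketChannel ch) {
        // 1. Read and strip the PROXY protocol header from the raw TCP stream first.
        ch.pipeline().addLast("proxy", new HAProxyMessageDecoder());
        // 2. Only then hand the remaining bytes to the TLS handler for the handshake.
        ch.pipeline().addLast("ssl", sslContext.newHandler(ch.alloc()));
        // 3. Application handlers would follow; one of them can pick up the decoded
        //    HAProxyMessage to restore the original client address on the event.
    }
}
```

The decoded `HAProxyMessage` passes through the `SslHandler` untouched (it is not a `ByteBuf`), so a later handler can still read the original source address.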
The problem has already been discussed in the forum: https://discourse.haproxy.org/t/haproxy-between-tcp-output-and-logstash/4800
Steps to reproduce:
Please include a minimal but complete recreation of the problem, including (e.g.) pipeline definition(s), settings, locale, etc. The easier you make it for us to reproduce it, the more likely that somebody will take the time to look at it.
- Define a TCP input on Logstash using the following configuration (note that `ssl_enable` must be `true` together with `proxy_protocol` to trigger the problem):

  input {
    tcp {
      id => "tcp_input"
      port => 6514
      ecs_compatibility => v1
      proxy_protocol => true
      ssl_enable => true
      ssl_verify => false
      ssl_cert => "/etc/logstash/certs/server.crt"
      ssl_key => "/etc/logstash/certs/server.key"
    }
  }
- Configure the following backend on HAProxy to connect to the Logstash input:

  backend syslog_sbc
    mode tcp
    server tcp_logstash localhost:6514 check send-proxy ssl ca-file tls-ca-bundle.pem
- Add a frontend to HAProxy and try to connect (a minimal client sketch that mimics HAProxy's `send-proxy ssl` behaviour follows below).
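For completeness, here is a hypothetical minimal Java client that mimics what HAProxy does with `send-proxy ssl`: it writes the PROXY v1 line over the plain TCP connection first and only then starts the TLS handshake on the same socket. The class name, the addresses in the header and the trust-store handling are assumptions, not part of our setup.

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch only: mimics HAProxy's "send-proxy ssl" behaviour against the input above.
// A self-signed server certificate would additionally require a matching trust store
// (e.g. via -Djavax.net.ssl.trustStore), which is omitted here.
public class ProxyThenTlsClient {
    public static void main(String[] args) throws Exception {
        try (Socket tcp = new Socket("localhost", 6514)) {
            // 1. PROXY protocol v1 header, sent in clear text before any TLS bytes.
            String header = "PROXY TCP4 10.8.97.217 10.8.97.173 33764 6514\r\n";
            tcp.getOutputStream().write(header.getBytes(StandardCharsets.US_ASCII));
            tcp.getOutputStream().flush();

            // 2. Only now upgrade the same underlying socket to TLS.
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            try (SSLSocket tls = (SSLSocket) factory.createSocket(tcp, "localhost", 6514, true)) {
                tls.startHandshake();
                OutputStream out = tls.getOutputStream();
                out.write("<14>test message via proxy+tls\n".getBytes(StandardCharsets.UTF_8));
                out.flush();
            }
        }
    }
}
```

With the input configuration above, the handshake fails because the server immediately tries to parse the clear-text PROXY line as a TLS record, which is exactly the error shown in the logs below.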
Provide logs (if relevant):
We then get the following error because the Logstash TCP plugin does not wait for and read the proxy information first, but immediately tries to interpret the incoming bytes as a TLS record:
[2023-02-01T12:16:21,740][ERROR][logstash.inputs.tcp ][kafka_in][waf] /10.8.97.217:33764: closing due:
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 50524f585920544350342031302e382e39372e3231372031302e382e39372e31373320333337363420363531350d0a
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:477) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at java.lang.Thread.run(Thread.java:833) [?:?]
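As a side note, the hex payload in the `NotSslRecordException` above is precisely the clear-text PROXY v1 header sent by HAProxy, which can be checked with a quick JDK 17 snippet (e.g. in `jshell`):

```java
// Decode the bytes Netty rejected as "not an SSL/TLS record" (hex copied from the log above).
String hex = "50524f585920544350342031302e382e39372e3231372031302e382e39372e31373320333337363420363531350d0a";
String decoded = new String(java.util.HexFormat.of().parseHex(hex),
        java.nio.charset.StandardCharsets.US_ASCII);
System.out.println(decoded); // prints "PROXY TCP4 10.8.97.217 10.8.97.173 33764 6515" followed by CRLF
```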