
Support DNS resolution over TCP #185

Open
ngrigoriev opened this issue Jul 23, 2019 · 1 comment
@ngrigoriev ngrigoriev commented Jul 23, 2019

Output of haproxy -vv and uname -a

HA-Proxy version 2.0.2 2019/07/16 - https://haproxy.org/
Build options :
  TARGET  = linux-glibc
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-format-truncation -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wno-implicit-fallthrough -Wno-stringop-overflow -Wno-cast-function-type -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1 USE_LUA=1 USE_ZLIB=1

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.1.1c  28 May 2019
Running on OpenSSL version : OpenSSL 1.1.1c  28 May 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE2 version : 10.33 2019-04-16
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with the Prometheus exporter as a service

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE     mux=H2
              h2 : mode=HTTP       side=FE        mux=H2
       <default> : mode=HTX        side=FE|BE     mux=H1
       <default> : mode=TCP|HTTP   side=FE|BE     mux=PASS

Available services :
        prometheus-exporter

Available filters :
        [SPOE] spoe
        [COMP] compression
        [CACHE] cache
        [TRACE] trace
# uname -a
Linux haproxy-57759f894d-gtwrq 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 Linux

What should haproxy do differently? Which functionality do you think we should add?

HAProxy has its own DNS resolver that supports only UDP. This effectively limits the maximum number of upstreams that can be configured with the server-template option. Supporting TCP, and possibly allowing TCP to be forced for SRV discovery, would eliminate this restriction.

What are you trying to do?

I am using HAProxy as a service proxy inside a Kubernetes cluster, pointing at a headless service. The headless service is discovered via DNS, using its SRV records.

I made the painful discovery yesterday that without increasing the "accepted_payload_size" value I cannot even support 7 upstreams. And even raising this value to its maximum (8192) only gets me a few dozen upstreams at most. In a Kubernetes environment it is quite typical to have dozens, hundreds, or even thousands of pods.

ref: https://discourse.haproxy.org/t/haproxy-k8s-and-server-template-unstable-fluctuating-server-list/4076
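A rough back-of-envelope calculation illustrates why the UDP payload size caps the upstream count. The exact numbers depend on name compression and record types, so the sizes below are assumptions (compressed owner names via 2-byte pointers, a ~45-byte question section), not HAProxy behavior:

```python
# Estimate how many A records fit in a DNS response of a given size.
# All sizes are assumptions for illustration, not exact protocol accounting.
HEADER = 12                         # fixed DNS header
QUESTION = 45                       # assumed: ~43-byte QNAME + QTYPE + QCLASS
A_RECORD = 2 + 2 + 2 + 4 + 2 + 4    # name pointer + type + class + TTL + rdlength + IPv4

def max_a_records(payload_size):
    """Rough count of A records that fit in one UDP response."""
    return (payload_size - HEADER - QUESTION) // A_RECORD

print(max_a_records(512))    # classic UDP limit
print(max_a_records(8192))   # HAProxy's accepted_payload_size maximum
```

Under these assumptions, the 512-byte default holds on the order of 30 A records (close to the ~34 nodes reported below), and even 8192 bytes holds only a few hundred. SRV records carry a target name per record, so they are considerably larger and the effective limit for SRV discovery is lower still.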

@jbrehm jbrehm commented Jul 24, 2019

Very good diagnosis. We are also using Kubernetes, but with a NodePort service and an annotation that keeps a Route 53 DNS record updated with the current worker nodes' IP addresses. Once the DNS response grew beyond 512 bytes (around 34 nodes), the DNS lookups for all of our backend servers configured with server-template started failing and the servers were put into MAINT status. It took a while to figure out what was happening.

Setting "accepted_payload_size 8192" in the resolvers config section got it working again.
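For anyone hitting the same wall, a minimal sketch of that workaround might look like the following (the resolver name, nameserver address, backend, and service FQDN are hypothetical placeholders — substitute your own):

```
resolvers kube-dns
    nameserver dns1 10.96.0.10:53
    accepted_payload_size 8192

backend app
    server-template srv 100 _http._tcp.my-service.my-ns.svc.cluster.local resolvers kube-dns check
```

This only raises the UDP response ceiling to 8192 bytes; it does not remove it, which is why the TCP support requested in this issue would still be needed for larger clusters.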
