
AWS ELB tries to make SSL/HTTPS connection to the nginx ingress controller, nginx shows error message "broken header" #6633

Closed
marianobilli opened this issue Dec 16, 2020 · 9 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@marianobilli

NGINX Ingress controller version: v0.41.2

Kubernetes version (use kubectl version): v1.16.15

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: KOPS + YAML files for ingress
  • Others:

What happened: When the ELB tries to make an SSL/HTTPS connection to the nginx ingress controller, nginx shows the error message "broken header"

NGINX Full Log

2020/12/16 08:26:59 [debug] 2622#2622: *2016609 event timer add: 3: 60000:667440126
2020/12/16 08:26:59 [debug] 2622#2622: *2016609 reusable connection: 1
2020/12/16 08:26:59 [debug] 2622#2622: *2016609 epoll add event: fd:3 op:1 ev:80002001
2020/12/16 08:26:59 [debug] 2622#2622: accept() not ready (11: Resource temporarily unavailable)
2020/12/16 08:26:59 [debug] 2622#2622: *2016609 http check ssl handshake
2020/12/16 08:26:59 [debug] 2622#2622: *2016609 http recv(): 108
2020/12/16 08:26:59 [error] 2622#2622: *2016609 broken header: "���pj���K�h���)�F�
                                                                                  b���C��H�oy�D$:�=5��</A
��kj98����g@32ED�(" while reading PROXY protocol, client: 100.120.0.0, server: 0.0.0.0:443
2020/12/16 08:26:59 [debug] 2622#2622: *2016609 close http connection: 3
2020/12/16 08:26:59 [debug] 2622#2622: *2016609 event timer del: 3: 667440126
2020/12/16 08:26:59 [debug] 2622#2622: *2016609 reusable connection: 0
2020/12/16 08:26:59 [debug] 2622#2622: *2016609 free: 00005597202ED2C0, unused: 232

TCP Dump on node
(screenshot: TCP dump on node, 2020-12-16 10:29)

What you expected to happen:
ELB can terminate TLS and proxy to upstream nginx ingress controller port 443.

How to reproduce it:

Configure the ingress controller with the following parameters (see the ConfigMap sketch after this list)

  force-ssl-redirect: "true"
  use-proxy-protocol: "true"
  real-ip-header: "proxy_protocol"
  use-forwarded-headers: "true"
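
For reference, these keys live in the controller's ConfigMap. A minimal sketch, with metadata names assumed (match them to your actual install):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller   # assumed name
      namespace: ingress-nginx         # assumed namespace
    data:
      force-ssl-redirect: "true"
      use-proxy-protocol: "true"
      real-ip-header: "proxy_protocol"
      use-forwarded-headers: "true"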

Configure the ingress controller service with the following annotations

...
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<Your Cert>"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"  # also tried "https"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-type: elb
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "22"
...
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 172.16.0.0/12
  selector:
    app.kubernetes.io/name: ingress-internal
    app.kubernetes.io/part-of: ingress-internal
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
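
The named targetPorts (http, https) resolve to container ports with matching names in the controller pod. A fragment of the controller Deployment's container spec under that assumption (port and container names assumed, not taken from this report):

    containers:
      - name: controller   # name assumed for illustration
        image: k8s.gcr.io/ingress-nginx/controller:v0.41.2
        ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443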

Configure a simple echo service with a TLS certificate

apiVersion: networking.k8s.io/v1beta1   # header assumed; v1beta1 matches Kubernetes 1.16
kind: Ingress
metadata:
  name: echo   # name assumed
spec:
  rules:
  - host: echo.yourdomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echo
          servicePort: 80
  tls:
  - hosts:
    - echo.yourdomain.com
    secretName: tls-secret
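
The secretName above must reference a TLS Secret that exists in the same namespace as the Ingress. A sketch of its shape (placeholder values, standard kubernetes.io/tls layout):

    apiVersion: v1
    kind: Secret
    metadata:
      name: tls-secret
    type: kubernetes.io/tls
    data:
      tls.crt: <base64-encoded certificate PEM>
      tls.key: <base64-encoded private key PEM>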

Anything else we need to know:
Solution attempt #1: I've applied the solution from #2182, but it didn't work, even when using the old ciphers.

Solution attempt #2: I've tried using https for the backend protocol:

    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "https"

Solution attempt #4: I've tried adding this setting to the ingress ConfigMap:

  compute-full-forwarded-for: "true"

Solution attempt #5: I've also seen reports that proxy protocol might not be correctly enabled in the ELB for my configured ports; however, I checked and it is enabled, but that does not solve the problem.

ELB Describe

$ aws elb describe-load-balancers --load-balancer-name a2be11c4312994ca797325c833cb7d33
{
    "LoadBalancerDescriptions": [
        {
            "LoadBalancerName": "a2be11c4312994ca797325c833cb7d33",
            "DNSName": "internal-a2be11c4312994ca797325c833cb7d33-455321898.eu-west-1.elb.amazonaws.com",
            "CanonicalHostedZoneNameID": "Z32O12XQLNTSW2",
            "ListenerDescriptions": [
                {
                    "Listener": {
                        "Protocol": "HTTPS",
                        "LoadBalancerPort": 443,
                        "InstanceProtocol": "HTTPS",
                        "InstancePort": 30169,
                        "SSLCertificateId": "<my-cert>"
                    },
                    "PolicyNames": [
                        "AWSConsole-SSLNegotiationPolicy-a2be11c4312994ca797325c833cb7d33-1608051348402"
                    ]
                },
                {
                    "Listener": {
                        "Protocol": "TCP",
                        "LoadBalancerPort": 80,
                        "InstanceProtocol": "TCP",
                        "InstancePort": 31019
                    },
                    "PolicyNames": []
                }
            ],
            "Policies": {
                "AppCookieStickinessPolicies": [
                    {
                        "PolicyName": "AWSConsole-AppCookieStickinessPolicy-a2be11c4312994ca797325c833cb7d33-1608029525021",
                        "CookieName": "nginx"
                    }
                ],
                "LBCookieStickinessPolicies": [],
                "OtherPolicies": [
                    "AWSConsole-SSLNegotiationPolicy-a2be11c4312994ca797325c833cb7d33-1608051348402",
                    "AWSConsole-SSLNegotiationPolicy-a2be11c4312994ca797325c833cb7d33-1608029319200",
                    "k8s-proxyprotocol-enabled",
                    "ELBSecurityPolicy-2016-08"
                ]
            },
            "BackendServerDescriptions": [
                {
                    "InstancePort": 30169,
                    "PolicyNames": [
                        "k8s-proxyprotocol-enabled"
                    ]
                },
                {
                    "InstancePort": 31019,
                    "PolicyNames": [
                        "k8s-proxyprotocol-enabled"
                    ]
                }
            ],

/kind bug

@marianobilli marianobilli added the kind/bug Categorizes issue or PR as related to a bug. label Dec 16, 2020
@aledbf
Member

aledbf commented Dec 16, 2020

@marianobilli please check the script that configures the static yaml manifest for AWS TLS termination in the ELB
https://github.com/kubernetes/ingress-nginx/blob/master/hack/generate-deploy-scripts.sh#L84-L120

  • you can't use aws-load-balancer-backend-protocol: "ssl"
  • you can't use aws-load-balancer-ssl-ports: "https". Only HTTP is possible.
  • force-ssl-redirect: "true" is not required with the redirect server block
  • if you enable use-proxy-protocol: "true", then use-forwarded-headers: "true" makes no sense (you need to trust only the proxy-protocol information); see the ConfigMap sketch below
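
Taken together, the last two bullets imply trimming the ConfigMap down to the proxy-protocol settings. A sketch under the same assumed metadata as above (not an authoritative manifest):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller   # assumed name
      namespace: ingress-nginx         # assumed namespace
    data:
      use-proxy-protocol: "true"
      real-ip-header: "proxy_protocol"
      # force-ssl-redirect and use-forwarded-headers omitted per the bullets above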

@aledbf
Member

aledbf commented Dec 16, 2020

ELB can terminate TLS and proxy to upstream nginx ingress controller port 443.

That is not possible. For this to work, you need SSL certificates in ingress-nginx, i.e., Secrets with a certificate for the host(s).
NGINX uses SNI for HTTPS routing.

@marianobilli
Author

marianobilli commented Dec 17, 2020

  • you can't use aws-load-balancer-ssl-ports: "https". Only HTTP is possible.

    I'm not sure why you say this; the link you shared clearly shows aws-load-balancer-ssl-ports: "https". In fact, the combination of that annotation with aws-load-balancer-backend-protocol: "https" is what configures this in the load balancer:
    (screenshot: ELB listener configuration, 2020-12-17)

@marianobilli
Author

marianobilli commented Dec 17, 2020

ELB can terminate TLS and proxy to upstream nginx ingress controller port 443.

That is not possible. For this to work, you need SSL certificates in ingress-nginx, i.e., Secrets with a certificate for the host(s).
NGINX uses SNI for HTTPS routing.

That is why I showed in the config that I set up a server certificate on the Ingress with the following; I'm not sure why you say it is not possible.

  tls:
  - hosts:
    - echo.yourdomain.com
    secretName: tls-secret

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 17, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 16, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
