
Http -> https redirect on TCP ELB terminating ssl, results in a 308 redirect loop. #2724

Closed

JimtotheB opened this issue Jun 29, 2018 · 71 comments · Fixed by #5374

Comments

@JimtotheB

What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):

Issues #2000 and #1957 touch on this, with #1957 suggesting it was fixed. Searched 308, redirect, TCP, aws, elb, proxy, etc.

NGINX Ingress controller version:
v0.16.2

Kubernetes version (use kubectl version):
v1.9.6

Environment:
AWS

What happened:

With this Service, which creates an ELB handling TLS termination:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "#snip"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: http
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
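    # NB: both the 443 and 80 listeners above forward to the same container
    # port ("http"), so nginx cannot tell decrypted HTTPS traffic from plain HTTP.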
  selector:
    app: ingress-nginx
  type: LoadBalancer

And this nginx ConfigMap asking for force-ssl-redirect:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  client-body-buffer-size: 32M
  hsts: "true"
  proxy-body-size: 1G
  proxy-buffering: "off"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
  server-tokens: "false"
  force-ssl-redirect: "true"
  upstream-keepalive-connections: "50"
  use-proxy-protocol: "true"

Requesting http://example.com results in a 308 redirect loop (the ELB terminates TLS and both listeners forward to the same nginx port, so nginx sees HTTPS requests as plain HTTP and redirects again). With force-ssl-redirect: "false" it works fine, but there is no http -> https redirect.
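For illustration, the loop looks like this from a client (hypothetical domain; every response points back at the same URL the browser just requested):

$ curl -sI https://example.com/
HTTP/1.1 308 Permanent Redirect
Location: https://example.com/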

What you expected to happen:

I expect http://example.com to be redirected to https://example.com by the ingress controller.

How to reproduce it (as minimally and precisely as possible):

Spin up an example with the settings above, a default backend, an ACM cert, and a dummy Ingress for it to attach to. Then attempt to request the http:// endpoint.

@lancecotingkeh

Hi folks, still experiencing this issue in 0.17.1, it seems.

@Tenzer

Tenzer commented Aug 7, 2018

I am seeing this issue as well. Could the destination port provided via the PROXY protocol not be used to determine if the incoming connection was made over HTTP or HTTPS?

When using the L7/HTTP ELB the X-Forwarded-Proto header is used to determine this: https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/template/nginx.tmpl#L253-L257.
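A sketch of the idea: when the PROXY protocol is enabled, nginx (1.11.0+) exposes the original destination port as $proxy_protocol_server_port, which could be mapped to a scheme. The $elb_scheme variable name here is illustrative, not something the controller defines:

map $proxy_protocol_server_port $elb_scheme {
  default "http";
  443     "https";  # the connection arrived on the ELB's TLS listener
}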

@JohnFlyIII

JohnFlyIII commented Aug 17, 2018

Same situation: SSL terminating at the ELB using an ACM cert.

After thinking about this over the weekend, I got it to work this morning.
I had my ELB set up with the wrong protocols; I had it set to TCP and SSL... It needs to be HTTP and HTTPS.

So...

Make sure the ELB is set to load balance the HTTP and HTTPS protocols, not SSL or TCP.

Double-check that both HTTP and HTTPS balance to the same internal port.
Set your SSL cert on your HTTPS 443 load balancer port.

In your nginx ConfigMap:

use-proxy-protocol: "false"
force-ssl-redirect: "true"

Example below:

"apiVersion": "v1",
"metadata": {
"name": "nginx-configuration",
"namespace": "ingress-nginx",
"selfLink": "/api/v1/namespaces/ingress-nginx/configmaps/nginx-configuration",
"uid": "c8eddbd7-a17a-11e8-a3e5-12ca8f067004",
"resourceVersion": "1265268",
"creationTimestamp": "2018-08-16T17:35:36Z",
"labels": {
"app": "ingress-nginx"
}
},
"data": {
"client-body-buffer-size": "32M",
"force-ssl-redirect": "true",
"hsts": "true",
"proxy-body-size": "1G",
"proxy-buffering": "off",
"proxy-read-timeout": "600",
"proxy-send-timeout": "600",
"redirect-to-https": "true",
"server-tokens": "false",
"ssl-redirect": "true",
"upstream-keepalive-connections": "50",
"use-proxy-protocol": "false"
}
}

@lancecotingkeh

Thank you @boxofnotgoodery, this works for us!

@RealShanHuang

RealShanHuang commented Sep 7, 2018

Let me share my settings that finally work. "redirect-to-https": "true" does not seem to be needed. Thank you @boxofnotgoodery.

In ConfigMap:

data:
  client-body-buffer-size: 32M
  hsts: "true"
  proxy-body-size: 1G
  proxy-buffering: "off"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
  server-tokens: "false"
  ssl-redirect: "true"
  force-ssl-redirect: "true"
  upstream-keepalive-connections: "50"
  use-proxy-protocol: "false"

Also in Service:

  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http

@Tenzer

Tenzer commented Sep 8, 2018

I think the suggestion above misses the point a bit (at least for my use case), as it means you cannot use WebSockets or gRPC when the ELB runs in HTTP mode. It has to run in TCP/SSL mode (ideally with the PROXY protocol) for those features to be supported.

@dthomason

I agree with Tenzer; I am also trying to enable force-ssl-redirect while using WebSockets and get a 308 redirect loop when it is enabled. Currently I cannot enable the SSL redirect until there is a fix for this. If anyone has a suggestion, please let me know.

@okgolove

okgolove commented Oct 3, 2018

Same problem. I have to use WebSockets, so I'm not able to use HTTP and HTTPS for the ELB ports, only TCP.

@bhegazy

bhegazy commented Oct 4, 2018

This also happens to me on 0.19: when we have an ELB on TCP to use with WebSockets, it results in a redirect loop, similar to @Tenzer, @dthomason and @okgolove.

@okgolove

okgolove commented Oct 4, 2018

I've fixed it using this answer:
https://stackoverflow.com/a/51936678/2956620

Looks like a crutch, but it works :)

@amihura

amihura commented Nov 13, 2018

Same issue for me

@hmarcelodn

Same issue is happening

@Tenzer

Tenzer commented Dec 6, 2018

I gave the workaround in the Stack Overflow post a try and got it working as well. I'll point out the changes you have to make for the workaround, to make it a bit clearer, and explain why it works.

I'll start with the configuration of the service/load balancer/ELB:

---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    # Enable PROXY protocol
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    # Specify SSL certificate to use
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:[...]
    # Use SSL on the HTTPS port
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      # We are using a target port of 8080 here instead of 80, this is to work around
      # https://github.com/kubernetes/ingress-nginx/issues/2724
      # This goes together with the `http-snippet` in the ConfigMap.
      targetPort: 8080
    - name: https
      port: 443
      targetPort: http

Three things to point out here:

  1. We enable the PROXY protocol on the ELB by setting the service.beta.kubernetes.io/aws-load-balancer-proxy-protocol annotation.
  2. The ELB is configured to use the SSL certificate on port 443 (HTTPS).
  3. The non-SSL/HTTPS traffic is sent to port 8080 on Nginx instead of the default port 80. This allows us to differentiate in Nginx between the traffic which was sent encrypted and the traffic which wasn't, as the PROXY protocol doesn't allow the ELB to pass an X-Forwarded-Proto header with the requests.

In the nginx-configuration ConfigMap I ended up with this:

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-proxy-protocol: "true"

  # Work around for HTTP->HTTPS redirect not working when using the PROXY protocol:
  # https://github.com/kubernetes/ingress-nginx/issues/2724
  # It works by getting Nginx to listen on port 8080 on top of the standard 80 and 443,
  # and making any requests sent to port 8080 be responded to by this code, rather than
  # the normal port 80 handling.
  ssl-redirect: "false"
  http-snippet: |
    map true $pass_access_scheme {
      default "https";
    }
    map true $pass_port {
      default 443;
    }

    server {
      listen 8080 proxy_protocol;
      return 308 https://$host$request_uri;
    }

This does the following:

  1. Enables the PROXY protocol with the use-proxy-protocol line.
  2. Turns off the HTTP->HTTPS redirect globally. This is because Nginx otherwise thinks all traffic received on port 80 is made over HTTP, and will then try to redirect it to HTTPS. This is what causes the redirect loop.
  3. The http-snippet contains two bits of Nginx configuration. The map statement is used to overrule the value that $pass_access_scheme otherwise gets set here:

    # trust http_x_forwarded_proto headers correctly indicate ssl offloading
    map $http_x_forwarded_proto $pass_access_scheme {
      default $http_x_forwarded_proto;
      ''      $scheme;
    }

    This was necessary for me as some applications behind the ingress controller needed to know if they were served over HTTP or HTTPS - either so they could enforce being served over HTTPS, or in order to be able to generate correct URLs for links and assets.
    The map configured in the http-snippet is injected further down in the Nginx configuration, and tricks Nginx into thinking all connections were made over HTTPS.
    The server directive sets up Nginx to listen on port 8080 as well as port 80, and any request made to that port will receive a 308 (Permanent Redirect) response, forwarding them to the HTTPS version of the URL.

An extra thing I changed which wasn't mentioned on Stack Overflow was that I changed the ports section of the Deployment from this:

ports:
  - name: http
    containerPort: 80
  - name: https
    containerPort: 443

to this:

ports:
  - name: http
    containerPort: 80
  - name: http-workaround
    containerPort: 8080

This makes the ports the Kubernetes pod accepts connections on match what we need.


I hope this is useful for other people. I don't know if it would be worth adding to the documentation somewhere or if it could inspire a more slick workaround.

@kevguy

kevguy commented Dec 6, 2018

@Tenzer Thank you so much. I'd been stuck on this for a couple of days; your solution works like a charm.

@Tenzer

Tenzer commented Dec 7, 2018

I have just updated my previous comment and added the following to the http-snippet:

map true $pass_port {
  default 443;
}

I found this necessary so Jenkins would not complain about the port number it received in the X-Forwarded-Port header not matching what the client was seeing; just a minor thing.

@thealmightygrant

Thanks a ton @Tenzer. This is a great solution for anyone using a TCP/SSL load balancer that still wants HTTP redirects. Our data scientists will be very happy to have Jupyterhub, which requires web sockets, up and running in the k8s cluster.

@miles-

miles- commented Feb 7, 2019

@trueinviso Thank you! I have a similar setup (terminating TLS at the load balancer) and reverting from 0.22.0 to 0.21.0 fixed the infinite redirect loop for me.

@Tenzer

Tenzer commented Feb 8, 2019

@trueinviso If you aren't using the PROXY protocol (with use-proxy-protocol: "true" in the config map) then your issue isn't related to what this GitHub issue is about.

@trueinviso

trueinviso commented Feb 8, 2019

@Tenzer I'll move it to the other issue referencing the 308 redirect.

@trjate

trjate commented Jun 23, 2019

@Tenzer Thank you so much. I'd been stuck on this for a couple of days; your solution works like a charm.

Same here. Thanks @Tenzer!

@assafShapira

@Tenzer's workaround worked like magic for us, until we tried to upgrade the nginx image to version 0.22.0.
I believe some work regarding the "use-forwarded-headers" setting was merged in that version, and it might be the cause.
I'll appreciate any help with this, as it is blocking us from upgrading...

@Tenzer

Tenzer commented Jul 15, 2019

@assafShapira could you please provide some more information on what behaviour you are seeing with version 0.22.0? I'm using the workaround with an Nginx ingress controller on version 0.22.0 and I'm not aware of any problems with it.

@assafShapira

Sorry, wrong version number.
It works well on 0.22.0 and on 0.23.0.
It breaks on 0.24.0.
I can also confirm it's not working on 0.24.1 and 0.25.0.

@Tenzer

Tenzer commented Jul 15, 2019

Okay, but the question of how it breaks still stands.

@assafShapira

I'm getting into a 308 loop, and in the browser I'm getting ERR_TOO_MANY_REDIRECTS.

@Tenzer

Tenzer commented Jul 16, 2019

I've tried to upgrade an Nginx ingress controller to versions 0.24.0, 0.24.1 and 0.25.0, and from what I can see the problem is that X-Forwarded-Port and X-Forwarded-Proto are set to "80" and "http" respectively, meaning the backend server may think (if it checks these) that the request was served over HTTP when it actually reached the AWS ELB over HTTPS. This is what the following code block in the original workaround was fixing:

  http-snippet: |
    map true $pass_access_scheme {
      default "https";
    }
    map true $pass_port {
      default 443;
    }

This workaround no longer works, as the two maps aren't used further down in the generated config file. Each server {} block instead has a list of variables which are set based on variables provided by Nginx:

set $proxy_upstream_name "-";
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;

These are then used inside the location / {} block to set the headers sent to the backend:
{{ $proxySetHeader }} X-Forwarded-Port $pass_port;
{{ $proxySetHeader }} X-Forwarded-Proto $pass_access_scheme;

I've tried various ways to change the value of these headers so that the port number is instead 443 and the protocol is "https", but to no avail:

  • location-snippet in the ConfigMap: this is the most promising but I've only been able to append values onto the headers, not replace them.
  • server-snippet in the ConfigMap.
  • http-snippet in the ConfigMap.
  • nginx.ingress.kubernetes.io/server-snippet in the Ingress definition.

I have both tried to set the $pass_port and $pass_access_scheme variables to other values, used proxy_set_header to send other values to the backend, and even more_set_input_headers from OpenResty: https://github.com/openresty/headers-more-nginx-module. None of them seems to have any effect on the passed headers, which seems odd to me.
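For concreteness, the simplest of those attempts looked roughly like this (a sketch; it did not change the headers that were actually sent to the backend):

location-snippet: |
  set $pass_port 443;
  set $pass_access_scheme "https";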

In a test Nginx instance I tried to create a minimal configuration as a test case for this, but I haven't been able to reproduce it:

events {}

http {
    server {
        listen 8081;

        set $pass_access_scheme $scheme;

        # Position 1
        location / {
            # Position 2
            set $pass_access_scheme https;
            proxy_set_header X-Forwarded-Proto $pass_access_scheme;
            proxy_pass http://127.0.0.1:8080;
            # Position 3
        }
        # Position 4
    }
}

I can put set $pass_access_scheme https; in any of the four positions marked by comments, and the backend server will still get X-Forwarded-Proto: https sent as a request header.

As long as we haven't got a way to overrule the X-Forwarded-Port and X-Forwarded-Proto headers sent to the backend, I'm not sure the workaround will work in ingress-nginx versions newer than 0.23.0 :(

I'd be very interested to hear if anybody else can come up with a workaround for changing the values of those headers.

@walkafwalka

walkafwalka commented Mar 19, 2020

@KongZ Right, I am using an NLB. However, the NGINX controller is still an L7 reverse proxy that forwards its own X-Forwarded-* headers. Here is a snippet from my NGINX:

set $pass_server_port    $server_port;
set $best_http_host      $http_host;
set $pass_port           $pass_server_port;
...
proxy_set_header X-Forwarded-Port       $pass_port;

And because we are serving HTTPS over port 8000, it is forwarding the port as 8000 instead of 443.

@dardanos

@ssh2n Every deployment you do with Helm should have the annotation.

@ssh2n

ssh2n commented Mar 20, 2020

Thanks @dardanos, that was a bit confusing, so I switched back to the classic L7 setup :)

@mjooz

mjooz commented Jun 25, 2020

@walkafwalka, I ran into the same issue as you with apps which depend on X-Forwarded-Port. The solution below sets $proxy_port instead of $server_port, which is used by default. In my case, Jenkins with Keycloak redirection had port 8000. This solved it:

location-snippet: |
  set $pass_server_port    $proxy_port;
server-snippet: |
  listen 8000;
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
ssl-redirect: "false"

@abjrcode

I came across this although I am NOT using service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*".
I have an NLB listening on HTTPS & HTTP which forwards requests as HTTP to NGINX, which I have in turn configured to forward all traffic to port http (80).
My ingress is configured with "nginx.ingress.kubernetes.io/force-ssl-redirect": "true" for SSL redirection, and I am getting stuck in a redirect loop.

The issue was closed without recommending what workaround to apply in which context.
It also doesn't mention whether or how it will be addressed without a workaround.

For my specific case, I assume that because I am NOT using service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*", NGINX keeps thinking it should redirect. But even when I do configure service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*", it still gets stuck in a redirect loop.

The nginx.com website documents an annotation that I haven't seen mentioned elsewhere, namely nginx.org/redirect-to-https (note that nginx.org/* annotations belong to NGINX Inc.'s own controller, a different project from this one), and even with that, things didn't work for me.

Also, service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" on my NLB doesn't seem to enable PROXY protocol v2 on the listeners, but I haven't tested it with an ELB.

So in total I have two issues:

  • Redirect loop
  • Proxy protocol "enablement" on NLB

My configuration looks like this using Helm charts:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: whatever
  namespace: default
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      proxy-real-ip-cidr:
        - "10.2.0.0/20"
    controller:
      service:
        targetPorts:
          https: http
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<AWS_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

and the ingress object itself:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: whatever
  namespace: default
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: 80

I'd kindly appreciate your advice.

@KongZ

KongZ commented Jul 20, 2020

@abjrcode It is in my answer; it is the complete solution. Just configure the ingress-nginx values file and your app ingress according to my comment.

#2724 (comment)

@abjrcode

Thank you @KongZ for your suggestion. I will provide some more guidance and options for people coming across this, as I have had a chance to take a thorough look at the code.

There are two choices for load balancers, at least when it comes to AWS.
I am assuming you want to terminate TLS at the load balancer level and we're dealing strictly with HTTPS & HTTP. If you are interested in TCP or UDP, please check this insightful comment on this very issue.

ELB

ELB (although Classic, and due to be completely deprecated at some point), probably for historical reasons, actually forwards the X-Forwarded-* headers.

The NGINX controller actually supports this and can do redirection based on those headers. Here's how your configuration would look with Helm:

  • The controller part
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: <RELEASE_NAME>
  namespace: <NAMESPACE>
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      ssl-redirect: "false" # We don't need this as NGINX isn't using any TLS certificates itself
      use-forwarded-headers: "true" # NGINX will now decide whether it will do redirection based on these headers
    controller:
      service:
        targetPorts:
          https: http # NGINX will never get HTTPS traffic, TLS is handled by load balancer
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "elb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  • A sample ingress based on this controller definition:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: <INGRESS_NAME>
  namespace: <NAMESPACE>
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: <SOME_PORT>

NLB

There are two choices when it comes to NLBs. Unfortunately, at least from my point of view, the preferred option isn't available at the time of this writing because of this open issue.

My preferred option (Not possible until this is resolved)

  • The controller part
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: <RELEASE_NAME>
  namespace: <NAMESPACE>
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      ssl-redirect: "false" # We don't need this as NGINX isn't using any TLS certificates itself
      use-proxy-protocol: "true" # NGINX will now decide whether it will do redirection based on these headers
    controller:
      service:
        targetPorts:
          https: http # NGINX will never get HTTPS traffic, TLS is handled by load balancer
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  • An example ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: <INGRESS_NAME>
  namespace: <NAMESPACE>
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: <SOME_PORT>

Workaround by manipulating ports

Please check @KongZ's comment on this issue.

@ismailyenigul

Thanks @KongZ, it works fine with an NLB.
Here are the changes I made, for anyone who does not use Helm.
I deployed ingress-nginx from https://github.com/kubernetes/ingress-nginx/blob/controller-v0.34.1/deploy/static/provider/aws/deploy.yaml

  1. Edit the ConfigMap with
    kubectl edit configmaps -n ingress-nginx ingress-nginx-controller
    and add the following lines (note: the data section does not exist by default):
data:
  server-snippet: |
    listen 8000;
  ssl-redirect: "false"

Complete configmap as a reference:

apiVersion: v1
data:
  server-snippet: |
    listen 8000;
  ssl-redirect: "false"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":null,"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/version":"0.34.1","helm.sh/chart":"ingress-nginx-2.11.1"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"}}
  creationTimestamp: "2020-08-03T17:29:25Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.34.1
    helm.sh/chart: ingress-nginx-2.11.1
  name: ingress-nginx-controller
  namespace: ingress-nginx

  2. Edit the ingress-nginx deployment

kubectl edit deployments -n ingress-nginx ingress-nginx-controller

Add the following lines in the ports: section:

- containerPort: 8000
  name: special
  protocol: TCP

More context from the deployment:

livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8000
          name: special
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP

When you save and exit, the deployment will create a new ingress-nginx pod.

Finally, add the following annotation lines to your app's ingress:

nginx.ingress.kubernetes.io/server-snippet: |
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }

Complete app ingress YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apple-ingress
  namespace: apple
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }
spec:
  rules:
  - host: apple.mydomain.com
    http:
      paths:
        - path: /
          backend:
            serviceName: apple-service
            servicePort: 5678
  

Then apply it with kubectl apply -f ingress-apple.yml

And let's test it

$ curl -I http://apple.mydomain.com
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.19.1
Date: Tue, 04 Aug 2020 07:47:59 GMT
Content-Type: text/html
Content-Length: 171
Connection: keep-alive
Location: https://apple.mydomain.com/



$  curl -I https://apple.mydomain.com
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Tue, 04 Aug 2020 07:48:20 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 15
Connection: keep-alive
X-App-Name: http-echo
X-App-Version: 0.2.3

@dohoangkhiem

dohoangkhiem commented Feb 23, 2021

@mjooz

@walkafwalka, I ran into the same issue as you with apps which depend on X-Forwarded-Port. The solution below sets $proxy_port instead of $server_port, which is used by default. In my case, Jenkins with Keycloak redirection had port 8000. This solved it:

location-snippet: |
  set $pass_server_port    $proxy_port;
server-snippet: |
  listen 8000;
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
ssl-redirect: "false"

I ran into the same problem; it works for most services except Keycloak. I tried adding

location-snippet: |
  set $pass_server_port    $proxy_port;

to the ingress-nginx ConfigMap as you suggested, but it still does not work. Any advice?

juandiegopalomino added a commit to run-x/opta that referenced this issue Feb 25, 2021
Following this solution: kubernetes/ingress-nginx#2724 (comment)

Tested it with both no https and with https.
ALL SERVICES WILL REDIRECT PUBLIC TRAFFIC HTTP TO HTTPS IF SSL IS PRESENT, NO EXCEPTION
@ngocketit

@ssh2n Local or Cluster does not matter for SSL redirection.
If you want all services to have SSL redirection, you just put this in the server-snippet:

      listen 8000;
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }

But if you prefer to select which services require SSL redirection, then you need only

      listen 8000;

And leave the 308 redirection to the nginx.ingress.kubernetes.io/server-snippet annotation.

controller.config.server-snippet adds the config to every nginx server, while the nginx.ingress.kubernetes.io/server-snippet annotation adds it only to the annotated server.

controller.config.server-snippet is not working with the latest (0.45.0) Helm chart, as reported in #6829, so you need to include the snippet in every ingress.
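For reference, the two placements differ only in scope; a sketch using the snippet values from earlier in this thread:

# Global, via Helm values (rendered into the controller ConfigMap; applies to every server block):
controller:
  config:
    server-snippet: |
      listen 8000;

# Per-Ingress, via the annotation (applies only to the annotated server):
nginx.ingress.kubernetes.io/server-snippet: |
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }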

@samrakshak

@ngocketit got anything?

@ngocketit

@samrakshak As I mentioned, putting the server snippet in Helm didn't work for me, so I had to put it in every ingress (with the nginx.ingress.kubernetes.io/server-snippet annotation), and that worked.

@samrakshak

@ngocketit Now I am getting the following error:

error:1408F10B:SSL routines:ssl3_get_record:wrong version number

I have used an AWS NLB and the certificate is issued by cert-manager/Let's Encrypt. I want TLS termination, but I think TLS is not actually being terminated, which is why I am facing this issue.

@Ariseaz

Ariseaz commented Mar 5, 2022

Using the helm ingress-nginx chart on EKS.
Edit the configmap ingress-nginx-controller:
kubectl edit configmap ingress-nginx-controller -n ingress-nginx
Add:

data:
  server-snippet: |
    listen 8000;
    if ( $server_port = 80 ) {
      return 308 https://$host$request_uri;
    }
  ssl-redirect: "false"

Edit service/ingress-nginx-controller by adding:

meta.helm.sh/release-namespace: ingress-nginx
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <acm arn>
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-type: nlb

Set up the ports in the ingress controller service to look like what I have below.
NB: the special port is what you are going to add to the ingress controller's containerPorts.

ports:
- name: http
  port: 80
  protocol: TCP
  targetPort: 80
- name: https
  port: 443
  protocol: TCP
  targetPort: special

Now edit the ingress controller deployment's containerPorts:
kubectl edit deployment.apps/ingress-nginx-controller -n ingress-nginx
Add:

- containerPort: 8000
  name: special
  protocol: TCP

@thakurchander

thakurchander commented Mar 10, 2022

@Ariseaz - I applied the suggested workaround and it's working. Thank you @Ariseaz!

But today I installed the pgadmin service in my EKS cluster and it's redirecting to the special port 8000. Could you please suggest where I am making a mistake?

Below are the yamls

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
  namespace: tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      initContainers:
        - name: pgadmin-data-permission-fix
          image: busybox
          command: ["/bin/chown", "-R", "5050:5050", "/var/lib/pgadmin"]
          volumeMounts:
          - name: pgadminstorage
            mountPath: /var/lib/pgadmin
      containers:
      - name: pgadmin
        image: dpage/pgadmin4
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /var/lib/pgadmin
          name: pgadminstorage
        ports:
        - name: pgadmin
          containerPort: 5050
          protocol: TCP
        env:
          - name: PGADMIN_LISTEN_PORT
            value: "5050"
          - name: PGADMIN_DEFAULT_EMAIL
            value: admin
          - name: PGADMIN_DEFAULT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: pgadmin
                key: pgadmin-password
      volumes:
      - name: pgadminstorage
        persistentVolumeClaim:
          claimName: pgadminstorage

service.yaml

---
apiVersion: v1
kind: Service
metadata:
  namespace: tools
  name: pgadmin
spec:
  type: ClusterIP
  ports:
  - name: pgadmin
    port: 5050
  selector:
    app: pgadmin

resource ingress yaml

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pgadmin
  namespace: tools
  annotations:
    external-dns.alpha.kubernetes.io/ttl: "60"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.org/location-snippets: |
      proxy_set_header HOST $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_http_version 1.1;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_pass http://pgadmin.abc.abc.com:5050;
      proxy_read_timeout 200s;
spec:
  ingressClassName: external-nginx
  rules:
  - host: pgadmin.abc.abc.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: pgadmin
            port:
              number: 5050

@vmpowercli

vmpowercli commented Jun 3, 2022

Followed @Ariseaz's suggestion and I was able to redirect to HTTPS, but it did not work when I enabled PROXY protocol v2 on the NLB for forwarding the client's real IP.

I was able to get both the HTTPS redirect and the client IP fixed by following this page:
https://kubernetes.github.io/ingress-nginx/deploy/#network-load-balancer-nlb
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml

Change this in the service:

spec:
  externalTrafficPolicy: Local
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: tohttps
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: http

and this in the deployment:

ports:
- containerPort: 80
  name: http
  protocol: TCP
- containerPort: 80
  name: https
  protocol: TCP
- containerPort: 2443
  name: tohttps
  protocol: TCP
- containerPort: 8443
  name: webhook
  protocol: TCP
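The piece that makes the tohttps port (2443) work is a small redirect server in the controller ConfigMap. In the linked nlb-with-tls-termination manifest it looks roughly like this (a sketch; key names and values may differ between controller versions):

data:
  http-snippet: |
    server {
      listen 2443;
      return 308 https://$host$request_uri;
    }
  proxy-real-ip-cidr: <your VPC CIDR>
  use-forwarded-headers: "true"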

Hope this helps anyone looking for a similar solution.

@StepanKuksenko

@KongZ
Could you please explain why we can't just use this?

if ( $server_port = 80 ) {
   return 308 https://$host$request_uri;
}

Why do we even need port 8000?

Also, why do we need this:

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"

instead of

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

@KongZ

KongZ commented Nov 28, 2022

@StepanKuksenko I haven't used nginx controllers in years. I hope this description is still valid.

 Why do we even need port 8000?

Because we need port 80 to handle HTTP and respond with the 308, and port 8000 is needed to handle the decrypted HTTPS traffic. Port 443 cannot be used because nginx uses it to terminate TLS, and in this case we want to terminate TLS on the NLB.

             ┌─────┐           ┌─────────┐        ┌───────┐
          ┌──┴─┐   │       ┌───┴┐        │    ┌───┴─┐     │
───http──▶│:80 │───┼─http─▶│:80 │────────┼───▶│ :80 │     │
          └──┬─┘   │       └───┬┘        │    └───┬─┘     │
             │     │           │         │    ┌───┴─┐     │
             │ NLB │           │ Service │    │:443 │Pod  │
             │     │           │         │    └───┬─┘     │
          ┌──┴─┐   │       ┌───┴┐        │    ┌───┴─┐     │
───https─▶│:443│───┼─http─▶│:443│────────┼───▶│:8000│     │
          └──┬─┘   │       └───┬┘        │    └───┬─┘     │
             └─────┘           └─────────┘        └───────┘
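Expressed as the Service port mapping from the diagram (a sketch):

ports:
- name: http
  port: 80          # NLB http listener -> nginx :80, which answers the 308
  targetPort: 80
- name: https
  port: 443         # NLB terminates TLS and forwards plain HTTP
  targetPort: 8000  # the extra nginx listener added via server-snippet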

also why we need this "tcp"

Because it is a spec. It accepts only "ssl" or "tcp"

cahillsf added a commit to cahillsf/personal_site that referenced this issue Mar 31, 2023