ssl-passthrough annotation not affecting routes #6722

Closed
eg7eg7 opened this issue Jan 5, 2021 · 17 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@eg7eg7

eg7eg7 commented Jan 5, 2021

Hi, I'm currently moving my application into Kubernetes using Helm, with ingress-nginx chart version 3.18.0 as my controller.
This Helm chart's values for the controller image are (I didn't change them):

controller:
  name: controller
  image:
    repository: k8s.gcr.io/ingress-nginx/controller
    tag: "v0.42.0"
    digest: sha256:f7187418c647af4a0039938b0ab36c2322ac3662d16be69f9cc178bfd25f7eee
    pullPolicy: IfNotPresent
    # www-data -> uid 101
    runAsUser: 101
    allowPrivilegeEscalation: true

Accessing the service via NodePort works perfectly, but when using Ingress the TLS is not recognized. I need TLS to be terminated at the application, and my trusted cert to be passed through to the application as well.

I added the --enable-ssl-passthrough flag to the controller to enable it, but it still doesn't work.
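For reference, the flag can also be set through the chart's values instead of on the deployment directly (a sketch, matching the controller.extraArgs approach that appears later in this thread):

controller:
  extraArgs:
    # rendered as the --enable-ssl-passthrough controller flag
    enable-ssl-passthrough: "true"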

These are my configurations (after Helm rendering):

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dorix-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: admin.d.co.il
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: admin
              port:
                number: 443

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: admin
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    app: admin
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 3000
  type: NodePort
  # type: ClusterIP change to ClusterIP in prod

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-deployment
  labels:
    app: admin
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app: admin
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app: admin
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      containers:
      - name: admin-container
        image: "d/admin:0.0.1"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          name: app-port

generated nginx.conf
nginx.conf (PasteBin)

This file shows that enabling SSL passthrough worked: is_ssl_passthrough_enabled = true

However, I am also using the krew plugin for ingress-nginx for debugging.
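The plugin can be installed via krew if needed (assuming krew itself is already set up):

kubectl krew install ingress-nginx

Running the following command yields: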
$ kubectl ingress-nginx backends

[
  {
    "name": "default-admin-443",
    "service": {
      "metadata": {
        "creationTimestamp": null
      },
      "spec": {
        "ports": [
          {
            "name": "http",
            "protocol": "TCP",
            "port": 443,
            "targetPort": 3000,
            "nodePort": 32344
          }
        ],
        "selector": {
          "app": "admin",
          "app.kubernetes.io/name": "ingress-nginx",
          "app.kubernetes.io/part-of": "ingress-nginx"
        },
        "clusterIP": "10.97.87.115",
        "type": "NodePort",
        "sessionAffinity": "None",
        "externalTrafficPolicy": "Cluster"
      },
      "status": {
        "loadBalancer": {}
      }
    },
    "port": 443,
    "sslPassthrough": false,
    "endpoints": [
      {
        "address": "172.17.0.17",
        "port": "3000"
      }
    ],
    "sessionAffinityConfig": {
      "name": "",
      "mode": "",
      "cookieSessionAffinity": {
        "name": ""
      }
    },
    "upstreamHashByConfig": {
      "upstream-hash-by-subset-size": 3
    },
    "noServer": false,
    "trafficShapingPolicy": {
      "weight": 0,
      "header": "",
      "headerValue": "",
      "headerPattern": "",
      "cookie": ""
    }
  },
  {
    "name": "upstream-default-backend",
    "port": 0,
    "sslPassthrough": false,
    "endpoints": [
      {
        "address": "127.0.0.1",
        "port": "8181"
      }
    ],
    "sessionAffinityConfig": {
      "name": "",
      "mode": "",
      "cookieSessionAffinity": {
        "name": ""
      }
    },
    "upstreamHashByConfig": {},
    "noServer": false,
    "trafficShapingPolicy": {
      "weight": 0,
      "header": "",
      "headerValue": "",
      "headerPattern": "",
      "cookie": ""
    }
  }
]

It says that "sslPassthrough": false even though I used the passthrough annotation in the ingress
Is there something that I am missing or is this a bug?

Thanks a lot, Eden

@eg7eg7 eg7eg7 added the kind/support Categorizes issue or PR as a support question. label Jan 5, 2021
@aledbf
Member

aledbf commented Jan 5, 2021

@eg7eg7 I cannot reproduce this issue

Create a Kubernetes cluster using kind and install ingress-nginx: https://kind.sigs.k8s.io/docs/user/ingress/#ingress-nginx

Patch the ingress-nginx deployment to add the ssl-passthrough flag:

kubectl patch deployment \
  ingress-nginx-controller \
  --namespace ingress-nginx \
  --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'
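To confirm the argument landed (a sketch, same deployment as above):

kubectl -n ingress-nginx get deployment ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'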

Create ingress, service, and deployment:

echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dorix-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
spec:
  rules:
    - host: admin.d.co.il
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: admin
              port:
                number: 443
---
apiVersion: v1
kind: Service
metadata:
  name: admin
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    app: admin
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 3000
  type: NodePort
  # type: ClusterIP change to ClusterIP in prod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-deployment
  labels:
    app: admin
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app: admin
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app: admin
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      containers:
      - name: admin-container
        image: "d/admin:0.0.1"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          name: app-port
" | kubectl apply -f -

Check that ssl-passthrough is enabled:

kubectl exec -n ingress-nginx ingress-nginx-controller-bf59cd-dfjqk -- cat nginx.conf | grep is_ssl
	is_ssl_passthrough_enabled = true,

Using Helm:

helm install nginx ingress-nginx/ingress-nginx \
  --set "controller.extraArgs.enable-ssl-passthrough=true"
....

kubectl exec nginx-ingress-nginx-controller-696fbfb447-vrnlw -- cat nginx.conf|grep is_ssl
	is_ssl_passthrough_enabled = true,
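Rather than hardcoding the pod name, the controller pod can also be looked up by the chart's standard labels (a sketch):

kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o name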

@eg7eg7
Author

eg7eg7 commented Jan 5, 2021

I just ran my cluster on kind as you suggested and installed the controller with Helm, but with a different release name, which seems to be necessary for the krew ingress-nginx plugin to work:

helm install ingress-nginx ingress-nginx/ingress-nginx --set "controller.extraArgs.enable-ssl-passthrough=true"

(I changed the release name from nginx to ingress-nginx.)
I am also getting:

kubectl exec nginx-ingress-nginx-controller-696fbfb447-vrnlw -- cat nginx.conf|grep is_ssl
	is_ssl_passthrough_enabled = true,

However, when running:

kubectl ingress-nginx backends | grep sslPassthrough
   "sslPassthrough": false 

sslPassthrough is still false.

I just found a user who had a very similar problem on Stack Overflow:
https://stackoverflow.com/questions/59878060/ssl-passthrough-not-being-configured-for-ingress-nginx-backend
In that case the Ingress was misconfigured, so I'm beginning to wonder if that is the case for me too, but I don't see anything out of the ordinary.

@aledbf
Member

aledbf commented Jan 5, 2021

@eg7eg7 this could be an issue with the krew plugin. Please run:
kubectl exec nginx-ingress-nginx-controller-696fbfb447-vrnlw -- cat nginx.conf | grep 442

If that returns something like

		listen_ports = { ssl_proxy = "442", https = "443" },
		listen 442 proxy_protocol default_server reuseport backlog=4096 ssl http2 ;
		listen [::]:442 proxy_protocol default_server reuseport backlog=4096 ssl http2 ;
		listen 442 proxy_protocol  ssl http2 ;
		listen [::]:442 proxy_protocol  ssl http2 ;

ssl-passthrough is enabled and configured.
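As an additional end-to-end check (a sketch; <ingress-ip> is a placeholder): with passthrough working, the certificate presented should be the backend's own, not the controller's default fake certificate:

openssl s_client -connect <ingress-ip>:443 -servername admin.d.co.il </dev/null 2>/dev/null | openssl x509 -noout -subject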

@eg7eg7
Author

eg7eg7 commented Jan 5, 2021

It seems to be configured exactly as you wrote, but I get the same output even without the nginx.ingress.kubernetes.io/ssl-passthrough: "true" annotation.
Is there something else that I need to configure for the ingress to behave as if I am accessing the service directly via NodePort?

@aledbf
Member

aledbf commented Jan 5, 2021

Is there something else that I need to configure for the ingress to behave as if I am accessing the service directly with a NodePort?

@eg7eg7 I'm not sure I understand what you mean by that. Using curl https://admin.d.co.il should work.
Did you update the DNS records? How are you testing this? What is the output of the ingress-nginx logs?

@eg7eg7
Author

eg7eg7 commented Jan 5, 2021

Sorry for the delay in my response. I had done my tests on my previous cluster (where I used minikube), because I didn't manage to get the ingress IP with kind to put in my hosts file.

I loaded the controller and the app step by step on a fresh minikube cluster and enabled the ingress addon (minikube addons enable ingress), but
kubectl exec nginx-ingress-nginx-controller-696fbfb447-vrnlw -- cat nginx.conf | grep 442
does not return the expected output.

My current /etc/hosts contains 192.168.49.2 admin.d.co.il; the IP belongs to the minikube cluster.
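As an aside, the hosts-file entry can be avoided for testing by pinning the hostname with curl (a sketch):

curl -vk --resolve admin.d.co.il:443:192.168.49.2 https://admin.d.co.il/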

These are the logs:

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v0.43.0
  Build:         f3f6da12ac7c59b85ae7132f321bc3bcf144af04
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.6

-------------------------------------------------------------------------------

I0105 15:14:07.311843       6 flags.go:206] "Watching for Ingress" class="nginx"
W0105 15:14:07.311904       6 flags.go:211] Ingresses with an empty class will also be processed by this Ingress controller
W0105 15:14:07.312209       6 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0105 15:14:07.312374       6 main.go:241] "Creating API client" host="https://10.96.0.1:443"
I0105 15:14:07.316809       6 main.go:285] "Running in Kubernetes cluster" major="1" minor="19" git="v1.19.4" state="clean" commit="d360454c9bcd1634cf4cc52d1867af5491dc9c5f" platform="linux/amd64"
I0105 15:14:07.533710       6 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0105 15:14:07.534327       6 main.go:115] "Enabling new Ingress features available since Kubernetes v1.18"
W0105 15:14:07.535433       6 main.go:127] No IngressClass resource with name nginx found. Only annotation will be used.
I0105 15:14:07.543996       6 ssl.go:532] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0105 15:14:07.558527       6 nginx.go:254] "Starting NGINX Ingress controller"
I0105 15:14:07.561240       6 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"default", Name:"ingress-nginx-controller", UID:"2034fa33-9547-48f2-bd5b-9f29674e983f", APIVersion:"v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap default/ingress-nginx-controller
I0105 15:14:08.758955       6 nginx.go:740] "Starting TLS proxy for SSL Passthrough"
I0105 15:14:08.759032       6 leaderelection.go:243] attempting to acquire leader lease default/ingress-controller-leader-nginx...
I0105 15:14:08.759032       6 nginx.go:296] "Starting NGINX process"
I0105 15:14:08.759303       6 nginx.go:316] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0105 15:14:08.759407       6 controller.go:144] "Configuration changes detected, backend reload required"
I0105 15:14:08.763802       6 leaderelection.go:253] successfully acquired lease default/ingress-controller-leader-nginx
I0105 15:14:08.763889       6 status.go:84] "New leader elected" identity="ingress-nginx-controller-784c7cb596-5zjh2"
I0105 15:14:08.810560       6 controller.go:161] "Backend successfully reloaded"
I0105 15:14:08.810625       6 controller.go:172] "Initial sync, sleeping for 1 second"
I0105 15:14:08.810701       6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"ingress-nginx-controller-784c7cb596-5zjh2", UID:"b9ddbb99-34a6-41a8-bdb5-b9efb3797ffd", APIVersion:"v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0105 15:16:10.793496       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:16:10.830304       6 main.go:112] "successfully validated configuration, accepting" ingress="d-ingress/default"
I0105 15:16:10.833994       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1092", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:16:13.928715       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:16:13.928770       6 controller.go:144] "Configuration changes detected, backend reload required"
I0105 15:16:13.984849       6 controller.go:161] "Backend successfully reloaded"
I0105 15:16:13.985062       6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"ingress-nginx-controller-784c7cb596-5zjh2", UID:"b9ddbb99-34a6-41a8-bdb5-b9efb3797ffd", APIVersion:"v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0105 15:16:19.155158       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1107", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:16:19.155332       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:17:08.766592       6 status.go:281] "updating Ingress status" namespace="default" ingress="d-ingress" currentValue=[{IP:192.168.49.2 Hostname: Ports:[]}] newValue=[]
I0105 15:17:08.769606       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1158", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:17:08.769709       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:17:19.153954       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1171", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:17:19.154065       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:18:08.766691       6 status.go:281] "updating Ingress status" namespace="default" ingress="d-ingress" currentValue=[{IP:192.168.49.2 Hostname: Ports:[]}] newValue=[]
I0105 15:18:08.769306       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1222", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:18:08.769432       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
W0105 15:18:12.171572       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
W0105 15:18:15.504993       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
W0105 15:18:18.838386       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:18:19.152746       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1287", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:18:22.171634       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:19:08.767154       6 status.go:281] "updating Ingress status" namespace="default" ingress="d-ingress" currentValue=[{IP:192.168.49.2 Hostname: Ports:[]}] newValue=[]
I0105 15:19:08.769845       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1343", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:19:08.770031       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:19:19.154379       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1356", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:19:19.154850       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:20:08.766648       6 status.go:281] "updating Ingress status" namespace="default" ingress="d-ingress" currentValue=[{IP:192.168.49.2 Hostname: Ports:[]}] newValue=[]
I0105 15:20:08.769122       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1409", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:20:08.769253       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:20:19.156266       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1422", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:20:19.156316       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:21:08.767010       6 status.go:281] "updating Ingress status" namespace="default" ingress="d-ingress" currentValue=[{IP:192.168.49.2 Hostname: Ports:[]}] newValue=[]
I0105 15:21:08.769737       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1473", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:21:08.769835       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:21:19.155734       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1486", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W0105 15:21:19.155865       6 controller.go:966] Service "default/admin" does not have any active Endpoint.
I0105 15:22:08.769944       6 status.go:281] "updating Ingress status" namespace="default" ingress="d-ingress" currentValue=[{IP:192.168.49.2 Hostname: Ports:[]}] newValue=[]
I0105 15:22:08.772692       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1546", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0105 15:22:19.152883       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"d-ingress", UID:"2134dc90-163c-4467-b4f2-c2f7b12e0d60", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
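Note: the repeated warning Service "default/admin" does not have any active Endpoint. above suggests no ready Pod was backing the Service at those moments (a selector mismatch or the Pod not yet Ready); a quick way to check (a sketch):

kubectl get endpoints admin
kubectl get pods -l app=admin -o wide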

What I mean by accessing the app directly via the NodePort: when I put the NodePort IP:port in the browser, it prompts for a trusted certificate which is installed in my browser, and communication is encrypted with SSL by the server app. But when accessing through the ingress, the certificate is not requested like it would be via the NodePort.

Maybe this will show you what I mean:
NodePort:
A prompt is shown for the certificate (requested by the app).
[screenshot: browser certificate prompt]

Via ingress it leads me to the app without the prompt, and the connection is unsecured. I thought SSL passthrough would be the solution.

@aledbf
Member

aledbf commented Jan 5, 2021

via ingress it leads me to the app without the prompt, and it is unsecured, I thought SSL passthrough would be the solution

It seems the application only has one port:

port: 443
targetPort: 3000

Please post the output of kubectl get service admin and kubectl get ep admin.

@eg7eg7
Author

eg7eg7 commented Jan 5, 2021

$ kubectl get service admin

NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
admin   NodePort   10.106.165.120   <none>        443:31693/TCP   20m

$ kubectl get ep admin

NAME    ENDPOINTS         AGE
admin   172.17.0.7:3000   21m

Yes, the app is listening only on port 3000 within the container.
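To rule out the Service and Ingress layers, TLS on the app itself can also be checked directly (a sketch):

kubectl port-forward deployment/admin-deployment 3000:3000
curl -vk https://localhost:3000/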

@aledbf
Member

aledbf commented Jan 5, 2021

admin NodePort 10.106.165.120 443:31693/TCP 20m

The screenshot you posted uses port 31829. That is not the same service.

@eg7eg7
Author

eg7eg7 commented Jan 5, 2021

admin NodePort 10.106.165.120 443:31693/TCP 20m

The screenshot you posted uses the port 31829. Is not the same service.

Yes, my PC crashed just before my post, which may explain it.

@eg7eg7
Author

eg7eg7 commented Jan 7, 2021

So I tried moving my cluster to AWS (Kubernetes version 1.18) and the SSL passthrough worked!
I suspect the problem has to do with minikube, because on AWS the output of
kubectl exec nginx-ingress-nginx-controller-696fbfb447-vrnlw -- cat nginx.conf | grep 442
was right, while on minikube it wasn't (though I didn't manage to actually test it on kind).

The minikube version I was using where it didn't work was v1.15.1 with Kubernetes v1.19.4; another user on Stack Overflow tried to replicate my issue and confirmed it didn't work for him either.

Thanks a lot for your help @aledbf !

@kennyfortune

I got the same issue! Is there any solution on minikube?
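For the minikube addon, one possible workaround is to patch the addon's controller deployment directly, mirroring the patch shown earlier in this thread (a sketch; the addon's namespace and deployment name vary across minikube versions, so verify them first with kubectl get deploy -A | grep ingress):

kubectl -n kube-system patch deployment ingress-nginx-controller \
  --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'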

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 29, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 28, 2021
@k8s-triage-robot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@qingvincentyin

@eg7eg7 this could be an issue with the krew plugin. Please run kubectl exec nginx-ingress-nginx-controller-696fbfb447-vrnlw -- cat nginx.conf | grep 442

If that returns something like

		listen_ports = { ssl_proxy = "442", https = "443" },
		listen 442 proxy_protocol default_server reuseport backlog=4096 ssl http2 ;
		listen [::]:442 proxy_protocol default_server reuseport backlog=4096 ssl http2 ;
		listen 442 proxy_protocol  ssl http2 ;
		listen [::]:442 proxy_protocol  ssl http2 ;

ssl-passthrough is enabled and configured.

My observation is consistent with your speculation that it's probably a bug in the krew ingress-nginx plugin.
So the following output is unreliable (probably a bug in the plugin):

$ kubectl ingress-nginx backends --deployment ingress-nginx-private-controller -n ingress-nginx-private | grep sslPassthrough
    "sslPassthrough": false,
    "sslPassthrough": false,
    "sslPassthrough": false,

But the real nginx.conf is good and the whole system is working for me:

$ kubectl exec deployment/ingress-nginx-private-controller -n ingress-nginx-private -- cat nginx.conf | grep 442
			listen_ports = { ssl_proxy = "442", https = "443" },
		listen 442 proxy_protocol default_server reuseport backlog=4096 ssl http2 ;
		listen 442 proxy_protocol  ssl http2 ;
		listen 442 proxy_protocol  ssl http2 ;
		listen 442 proxy_protocol  ssl http2 ;
