
HTTP->HTTPS redirect does not work with use-proxy-protocol: "true" #808

Closed · jpnauta opened this issue Jun 1, 2017 · 38 comments
jpnauta commented Jun 1, 2017

I am currently using gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.7. I was having the same issue as #277, but that issue is marked as resolved. My ingress works properly over https://, but returns an empty response over http://. This is what happened when I cURLed my domain:

$ curl https://mydomain.com
[html response]
$ curl http://mydomain.com
curl: (52) Empty reply from server

When I changed the use-proxy-protocol setting from true to false, curl worked correctly:

$ curl https://mydomain.com
[html response]
$ curl http://mydomain.com
[301 response]

Here is my original ConfigMap to reproduce the situation:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  force-ssl-redirect: "true"
  ssl-redirect: "true"
  use-proxy-protocol: "true"
aledbf (Member) commented Jun 30, 2017

@jpnauta please check if the latest beta solves the issue (0.9-beta.10)

aledbf added this to TODO in nginx 0.9-beta.11 on Jun 30, 2017
acoshift (Contributor) commented Jul 2, 2017

@aledbf I have this problem in beta.10 too.

acoshift (Contributor) commented Jul 3, 2017

I don't think the problem is in the controller itself.
I use a GCP TCP LB, but it doesn't send the PROXY protocol header for HTTP, which is why nginx returns an empty response: https://trac.nginx.org/nginx/ticket/1048
My workaround is a custom template with proxy_protocol disabled on port 80.
Is it possible to add a config option to disable proxy_protocol on port 80?
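For readers looking for the shape of that workaround: a minimal sketch of the split-listener idea (not the actual controller template, just the relevant listen directives):

server {
    listen 80;                       # plain HTTP from the LB: no proxy_protocol here
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl proxy_protocol;   # the LB wraps HTTPS traffic in PROXY protocol
    # certificates and locations as usual
}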

aledbf (Member) commented Jul 3, 2017

@acoshift in the ConfigMap, set use-proxy-protocol: "false"

acoshift (Contributor) commented Jul 3, 2017

@aledbf OK, I have now removed the custom template and set use-proxy-protocol: "false"
with service.beta.kubernetes.io/external-traffic: OnlyLocal,
but I get 10.0.x.x IPs in the nginx logs.
The only way I can get the real IP is to set real_ip_header proxy_protocol;
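(For context: real_ip_header proxy_protocol comes from nginx's ngx_http_realip_module and only applies to connections from addresses trusted via set_real_ip_from. A minimal sketch, with the trusted range as a placeholder assumption:)

# http{} level: trust PROXY-protocol source info only from the LB's range
set_real_ip_from 130.211.0.0/22;   # placeholder CIDR, substitute your LB's range
real_ip_header   proxy_protocol;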

aledbf (Member) commented Jul 3, 2017

@acoshift that's strange, because GCP does not support proxy protocol for HTTP, only HTTPS.

acoshift (Contributor) commented Jul 3, 2017

Here are all the configs.

$ kubectl get -o yaml svc nginx-ingress-2
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
    service.beta.kubernetes.io/healthcheck-nodeport: "31976"
  creationTimestamp: 2017-05-28T08:40:25Z
  name: nginx-ingress-2
  namespace: default
  resourceVersion: "9970477"
  selfLink: /api/v1/namespaces/default/services/nginx-ingress-2
  uid: 4b7a0442-4381-11e7-833e-42010a94000a
spec:
  clusterIP: 10.3.255.234
  loadBalancerIP: x.x.x.x
  ports:
  - name: http
    nodePort: 30340
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 31552
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    k8s-app: nginx-ingress-lb
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: x.x.x.x
$ kubectl get -o yaml ing nginx-ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  creationTimestamp: 2017-04-17T16:12:47Z
  generation: 29
  name: nginx-ingress
  namespace: default
  resourceVersion: "10652294"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress
  uid: b24bd062-2388-11e7-b9a0-42010a94000b
spec:
  rules:
  - host: x.x
    http:
      paths:
      - backend:
          serviceName: xxx
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - x.x
    secretName: x.x-tls
status:
  loadBalancer:
    ingress:
    - ip: x.x.x.x
$ kubectl get -o yaml ds nginx-ingress-controller-ds
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: 2017-07-02T09:42:56Z
  generation: 2
  labels:
    k8s-app: nginx-ingress-lb
  name: nginx-ingress-controller-ds
  namespace: default
  resourceVersion: "10652196"
  selfLink: /apis/extensions/v1beta1/namespaces/default/daemonsets/nginx-ingress-controller-ds
  uid: d3bbeb6b-5f0a-11e7-ad52-42010a94000a
spec:
  selector:
    matchLabels:
      k8s-app: nginx-ingress-lb
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: nginx-ingress-lb
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-config
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: gcr.io/google-containers/nginx-ingress-controller:0.9.0-beta.10
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          protocol: TCP
        - containerPort: 443
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
  templateGeneration: 2
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 2
  updatedNumberScheduled: 3

Some logs from the nginx pod:

// this is https
2017-07-03T02:48:24.670527712Z 127.0.0.1 - [127.0.0.1] - - [03/Jul/2017:02:48:24 +0000] "GET / HTTP/2.0" 200 4764 "-" "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36" 249 0.010 [default-xxx-8080] 10.0.3.7:8080 19918 0.010 200

// this is http got real ip
2017-07-03T02:50:58.695187914Z 14.207.109.160 - [14.207.x.x] - - [03/Jul/2017:02:50:58 +0000] "GET / HTTP/1.1" 301 178 "-" "curl/7.51.0" 74 0.000 [default-xxx-8080] - - - -

aledbf (Member) commented Jul 3, 2017

I use a GCP TCP LB, but it doesn't send the PROXY protocol header for HTTP, which is why nginx returns an empty response

Please change the GCP LB to HTTP. In that mode the load balancer sends the X-Forwarded-For header.

acoshift (Contributor) commented Jul 3, 2017

Thanks very much for helping, but for my use case I cannot use the GCP HTTP LB, because I want the ingress controller to handle TLS (via kube-lego). Right now I have to use a custom template as a workaround.

My knowledge is very limited, but I suspect these lines:

{{/* Listen on 442 because port 443 is used in the TLS sni server */}}
{{/* This listener must always have proxy_protocol enabled, because the SNI listener forwards on source IP info in it. */}}

"TLS sni server send source IP on port 442", maybe set real_ip_header proxy_protocol; for only port 442 should solve the problem but idk how.

jpnauta (Author) commented Jul 4, 2017

@aledbf Unfortunately, upgrading to 0.9-beta.10 did not work. However, instead of an empty reply from the server, I now get a 502 error as follows:

<HEAD><TITLE>Server Hangup</TITLE></HEAD>
<BODY BGCOLOR="white" FGCOLOR="black">
<FONT FACE="Helvetica,Arial"><B>

aledbf (Member) commented Jul 6, 2017

@jpnauta I cannot reproduce this error. I'm not sure where you are running, but here is the full script to provision a cluster in AWS.

Create a cluster using kops in us-west

export MASTER_ZONES=us-west-2a
export WORKER_ZONES=us-west-2a,us-west-2b
export KOPS_STATE_STORE=s3://k8s-xxxxxx-01
export AWS_DEFAULT_REGION=us-west-2

kops create cluster \
 --name uswest2-01.xxxxxxx.io \
 --cloud aws \
 --master-zones $MASTER_ZONES \
 --node-count 2 \
 --zones $WORKER_ZONES \
 --master-size m3.medium \
 --node-size m4.large \
 --ssh-public-key ~/.ssh/id_rsa.pub \
 --image coreos.com/CoreOS-stable-1409.5.0-hvm \
 --yes

Create the echoheaders deployment

echo "
apiVersion: v1
kind: Service
metadata:
  name: echoheaders
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoheaders
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echoheaders
    spec:
      containers:
      - name: echoheaders
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoheaders-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - echoheaders.uswest2-01.rocket-science.io
    secretName: echoserver-tls
  rules:
  - host: echoheaders.uswest2-01.xxxxx-xxxx.io
    http:
      paths:
      - backend:
          serviceName: echoheaders
          servicePort: 80
" | kubectl create -f -

Create the nginx ingress controller

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/aws/nginx/nginx-ingress-controller.yaml

Install kube-lego

kubectl create -f https://raw.githubusercontent.com/jetstack/kube-lego/master/examples/nginx/lego/00-namespace.yaml

Configure kube-lego

wget https://raw.githubusercontent.com/jetstack/kube-lego/master/examples/nginx/lego/configmap.yaml
nano configmap.yaml 

Install

kubectl create -f configmap.yaml 
kubectl create -f https://raw.githubusercontent.com/jetstack/kube-lego/master/examples/nginx/lego/deployment.yaml

Run the tests

$ curl -v echoheaders.uswest2-01.rocket-science.io
* Rebuilt URL to: echoheaders.uswest2-01.rocket-science.io/
*   Trying 52.32.132.20...
* TCP_NODELAY set
* Connected to echoheaders.uswest2-01.rocket-science.io (52.32.132.20) port 80 (#0)
> GET / HTTP/1.1
> Host: echoheaders.uswest2-01.rocket-science.io
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.13.2
< Date: Thu, 06 Jul 2017 01:54:57 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: https://echoheaders.uswest2-01.rocket-science.io/
< Strict-Transport-Security: max-age=15724800; includeSubDomains;
< 
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.13.2</center>
</body>
</html>
* Curl_http_done: called premature == 0

Delete the cluster

kops delete cluster  --name uswest2-01.xxxxxxxx.io --yes

aledbf (Member) commented Jul 6, 2017

@jpnauta if you are running on GCE or GKE, you cannot enable proxy protocol, because there it only works with HTTPS.

jpnauta (Author) commented Jul 6, 2017

Ahhh okay good to know 👍 I'm on GKE, thanks for your help @aledbf

jpnauta closed this as completed on Jul 6, 2017
aledbf moved this from TODO to done in nginx 0.9-beta.11 on Jul 9, 2017
@icereval

FYI, if you want to configure a load balancer manually (not for the faint of heart), you can work around this limitation by sharing an external IP between the L7 GLBC Ingress, with a custom HTTP backend that redirects all traffic to HTTPS, and a manually created L4 LB with TCP proxy protocol for your HTTPS traffic (pointed at the nginx ingress controller).

@allanharris

The same issue exists on Azure (AKS): the redirect doesn't work.

anurag commented Jan 14, 2018

@aledbf since proxy protocol doesn't work over HTTP in GKE, is it possible to get the client IP with GCE's TCP load balancer and ssl-passthrough, with proxy protocol disabled?

aledbf (Member) commented Jan 15, 2018

@anurag not sure. If you want to test this, please make sure you use externalTrafficPolicy: Local in the service spec of the ingress controller.
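For newer clusters, the OnlyLocal annotation used earlier in this thread became the externalTrafficPolicy field. A minimal sketch of the controller Service with it set (names mirror the manifests above):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-2
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep traffic on the receiving node, preserving the client IP
  selector:
    k8s-app: nginx-ingress-lb
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443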

dm3ch commented Apr 22, 2018

I've got the same problem: when I enable proxy protocol, I get PROXY protocol header errors for requests on port 80.

The problem is that I can't disable proxy protocol either, because doing so would break client IP detection for HTTPS (I need to use ssl-passthrough for some backends).

The only way I see right now is to use haproxy to proxy port-80 traffic using proxy protocol.

Maybe we could add two separate config options to enable proxy protocol listening on ports 80 and 443 independently?

dm3ch commented Apr 22, 2018

@aledbf ^

ghost commented Aug 1, 2018

When running nginx ingress on GKE (with a TCP load balancer), the only way to get the real client IP is to turn on proxy protocol. However, that breaks the http->https redirect: http requests end up with an empty response on the client side and a broken header error on the nginx side. I confirmed the issue still exists with the latest release, 0.17.1.

My solution is:

  1. Set use-proxy-protocol to true
  2. Add another nginx sidecar with just an https redirect rule, listening on a different port, e.g. 8080
  3. Update the nginx ingress controller service to map 80 to 8080

Voila.

artushin commented Aug 1, 2018

@coolersport Is that with the regional or global TCP LB?

ghost commented Aug 1, 2018

It is regional in my case. However, this solution addresses it at the nginx layer; it has nothing to do with the GCP LB. In fact, the LB is auto-provisioned by GKE.

artushin commented Aug 2, 2018

That's weird, I get real IPs from my regional GCP TCP LB in both x-forwarded-for and x-real-ip, without use-proxy-protocol. You just have to guard it against spoofing with proxy-real-ip-cidr. The global TCP LB doesn't pass through IPs in x-forwarded-for though.
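A sketch of the ConfigMap keys that approach relies on (both are documented ingress-nginx options; the CIDR is a placeholder for your LB's source range):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  use-forwarded-headers: "true"        # trust X-Forwarded-For set by the LB
  proxy-real-ip-cidr: "130.211.0.0/22" # placeholder: only trust headers from this range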

thomascooper commented Sep 10, 2018

I am having the same problem with 0.19.

@coolersport I have tried your approach, but I believe it relies on the GCP TCP Proxy, which is only available for global (not regional) static IPs and forwarding rules.

Here is an overview of our setup: we have 900+ static IPs for our network, and each of these has a manually created regional forwarding rule (80-443) targeting all required instance groups.

We have 10 nginx-ingress controllers, each with 100+ static IPs configured via externalIPs on the service. (This was Google-designed and suggested, due to a hard-coded limit of 50 live health checks per cluster.)

We use cert-manager (the updated version of kube-lego) to automatically provision certs via ingress annotations.

Everything in this scenario works, aside from getting the client's actual IP into our app.

If we enable use-proxy-protocol in our ConfigMap, we immediately start getting "broken header:" error messages. I've tried every possible combination of proxy-real-ip-cidr with no results. We cannot re-provision all 900+ static IPs as global, due to multiple issues including quota and the fallout of propagation across all of the domains. Looking for any help we can get.

Spittal commented Sep 20, 2018

@artushin what version of ingress-nginx are you using where you don't need to use proxy protocol?

@artushin

@Spittal I'm on 0.13.0, but again, that works only on regional TCP LBs. I don't know if it works with proxy protocol on a global LB, but it definitely doesn't without it.

esseti commented Oct 26, 2018

Hi all. Even with use-proxy-protocol turned on:

Name:         nginx-ingress-controller
Namespace:    default
Labels:       app=nginx-ingress
              chart=nginx-ingress-0.29.2
              component=controller
              heritage=Tiller
              release=nginx-ingress
Annotations:  <none>

Data
====
enable-vts-status:
----
false
force-ssl-redirect:
----
false
ssl-redirect:
----
false
use-proxy-protocol:
----
true
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  3m    nginx-ingress-controller  ConfigMap default/nginx-ingress-controller

the controller still receives errors, and the server does not respond (HTTPS call):

2018/10/26 09:39:10 [error] 91#91: *353 broken header: g9��Z��%�_���9��8��y�;v�D�C��<�n�/�+�0�,��'g�(k�$��
����jih9876�2�.�*�&���=" while reading PROXY protocol, client: 79.0.54.49, server: 0.0.0.0:443

The HTTP call produces the same error, but there you can see the content of the request plus headers.

Any idea why I can't make it work even with the flag set?

PS: I'm using the helm version.

ghost commented Nov 9, 2018

That's weird, I get real IPs from my regional GCP TCP LB in both x-forwarded-for and x-real-ip, without use-proxy-protocol. You just have to guard it against spoofing with proxy-real-ip-cidr. The global TCP LB doesn't pass through IPs in x-forwarded-for though.

I just noticed that it works in a cluster which uses secure-backend. When setting up a new cluster with no SSL passthrough, the broken header issue reappears.

@roboticsound

@coolersport I know this is from a while ago, but I am having this exact issue. I am quite new to Kubernetes, and I wonder if you could clarify how you set up the sidecar?

This has been driving me crazy for two weeks now!

ghost commented Apr 6, 2019

@roboticsound, here they are. Sorry, I can't post full YAML files. Hope this gives you the idea.

--- pod container (sidecar) ---
- name: https-redirector
  image: nginx:1.15-alpine
  imagePullPolicy: IfNotPresent
  ports:
  - containerPort: 8080
    name: redirector
  securityContext:
    allowPrivilegeEscalation: false
  volumeMounts:
  - name: nginx-redirector
    mountPath: /etc/nginx/nginx.conf
    subPath: nginx.conf
    readOnly: true
--- service ---
ports:
- name: http
  port: 80
  targetPort: redirector
--- configmap ---
nginx.conf: |
  events {
      worker_connections  128;
  }
  http {
    server {
      listen 8080;
      server_name _;
      return 301 https://$host$request_uri;
    }
  }

@roboticsound

@coolersport Thanks! That helped a lot.

dano0b commented Sep 9, 2019

In case somebody didn't see the better solution: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip

miquelbar commented May 6, 2020

@dano0b maybe I'm missing something, but I configured kubernetes-ingress that way and it didn't work: I'm using GKE, and when connecting over HTTP I get the real IP, but when connecting over HTTPS I always get 127.0.0.1 as the remote IP.

In my opinion, the best solution right now is the one that @coolersport provided.

UPDATE: after disabling the --enable-ssl-passthrough flag, I got the real request IP, as @dano0b pointed out.
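In practice that change lives in the controller's container args; a sketch of the relevant fragment (arg list mirrors the DaemonSet earlier in this thread):

containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  - --configmap=$(POD_NAMESPACE)/nginx-config
  # - --enable-ssl-passthrough   # dropping this flag restored the real client IP in this report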

@allenvino1

Do we have a standard way of doing this?

@SoulSkare

This is a real headache. I've followed https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip

I've modified our ConfigMap:

apiVersion: v1
data:
  real-ip-header: X-Forwared-For
  real-ip-recursive: "true"
  use-forwarded-headers: "true"
  use-proxy-protocol: "true"
kind: ConfigMap

Still nothing; it only shows the local IP.

ejose19 commented Mar 9, 2021

For those using helm, here's how I managed to use externalTrafficPolicy: Local (to preserve the client IP in backends) while also making it work with multiple nodes behind the LoadBalancer:

helm install ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set-string controller.service.externalTrafficPolicy=Local \
  --set-string controller.kind=DaemonSet

Without controller.kind=DaemonSet, the LoadBalancer was not delivering traffic to the other nodes, as they were reporting "unhealthy".
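The same settings expressed as a values file, for anyone managing the chart declaratively (a sketch; value paths as in the command above):

# values.yaml for the ingress-nginx chart
controller:
  kind: DaemonSet                  # one pod per node, so every node passes the LB health check
  service:
    externalTrafficPolicy: Local   # preserve the client source IP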

@wernight

Without controller.kind=DaemonSet, the LoadBalancer was not delivering traffic to the other nodes, as they were reporting "unhealthy".

Interestingly, it seems to work with controller.kind=Deployment for me. It also seems that use-proxy-protocol: "true" is not needed in this Helm scenario.

slayer commented May 1, 2023

UPDATE: after disabling the --enable-ssl-passthrough flag, I got the real request IP, as @dano0b pointed out.

It helped, thanks!
