
Disable proxy protocol for tcp service #3984

nmiculinic opened this issue Apr 9, 2019 · 24 comments

@nmiculinic

commented Apr 9, 2019

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Not really sure. Per the documentation it should be possible, but following it I cannot make it work. So it's a BUG REPORT if it's not my mistake, and a FEATURE REQUEST if the required feature is not implemented.

NGINX Ingress controller version:
0.22.0

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Ubuntu 18.04
  • Kernel (e.g. uname -a):
  • Install tools: kubespray
  • Others:

What happened:

I've installed the nginx ingress controller via the stable helm chart, version 1.3.1. I want nginx to use proxy_protocol for all http/https ingresses, but not for the SSH service listed in the tcp services. Here is my values.yaml:

podSecurityPolicy:
  enabled: true
controller:
  replicaCount: 2
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    type: LoadBalancer
  ingressClass: "nginx-internal"
  priorityClassName: "k8s-cluster-critical"
  publishService:
    enabled: true
  metrics:
    enabled: true
  stats:
    enabled: true
tcp:
  22: "devtools/gitlab-gitlab-shell:22:PROXY"

What you expected to happen:
That proxy_protocol works correctly for ports 443 and 80, while the proxy protocol isn't used for port 22. (I've tried without PROXY, with :PROXY:PROXY, and with :listen.)

The proxy protocol correctly works for http and https ingresses; however, whenever I try to SSH into the TCP service I'm met with:

Bad protocol version identification 'PROXY TCP4 10.89.0.2 10.88.5.97 60630 22' from 10.233.80.7 port 60058

That is, I want nginx to terminate the proxy protocol and present a plain TCP connection to the SSH backend, without that extra header line. However, from the documentation it isn't clear whether that's implemented, nor how to do it.
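For context, the string sshd rejected is a PROXY protocol v1 header: a single text line the load balancer prepends to the TCP stream before any application data. sshd fails because it expects an SSH identification banner as the first line instead. The fields break down as:

```
PROXY TCP4 10.89.0.2 10.88.5.97 60630 22\r\n
│     │    │         │          │     └─ destination port
│     │    │         │          └─ source port
│     │    │         └─ destination address
│     │    └─ source address (the original client, as seen by the proxy)
│     └─ address family and transport (TCP over IPv4)
└─ literal protocol signature
```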

Anything else we need to know:

Related issues: #659

@christhomas

commented Apr 18, 2019

This might not work, but it's worth a try.

Have you tried: devtools/gitlab-gitlab-shell:22::PROXY

You leave the first position empty and put PROXY in the second position. This helped when getting my postfix server to work, and that runs over a plain TCP port.
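In helm chart values form, that suggestion would look like this (a sketch based on the comment above; whether the trailing PROXY is what you actually want depends on the decode/encode semantics discussed further down the thread):

```yaml
# Sketch: tcp-services entry with the first optional field left empty
# and PROXY in the second position, as suggested above.
tcp:
  22: "devtools/gitlab-gitlab-shell:22::PROXY"
```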

@nmiculinic

Author

commented Apr 19, 2019

Yes, that's the one that solved it for this usecase!!!

This is so arcane btw

@nmiculinic

Author

commented Apr 19, 2019

or not...I got happy too soon

@christhomas

commented Apr 19, 2019

I'm wondering how to try this out quickly on my cluster. I think if I grab an sshd image from Docker Hub, I can build a test case to see whether I can log in to the container through ingress-nginx.

I'll be back

@nmiculinic

Author

commented Apr 19, 2019

So here is my new config:

podSecurityPolicy:
  enabled: true
controller:
  replicaCount: 2
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    externalTrafficPolicy: "Local"
    type: LoadBalancer
  ingressClass: "nginx-internal"
  priorityClassName: "k8s-cluster-critical"
  publishService:
    enabled: true
  extraArgs:
    default-ssl-certificate: ingress/wild-fashionnetwork-com
  metrics:
    enabled: true
  stats:
    enabled: true
tcp:
  22: "devtools/gitlab-gitlab-shell:22::PROXY"

@christhomas

commented Apr 19, 2019

I just set up an sshd container inside my remote kubernetes cluster. I didn't need to use the proxy protocol; I was just able to add my key to the authorized keys and then log in without any issue.

Ingress Nginx TCP Services

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "10222": "default/sshd-experiment:22::PROXY"

then for the app

apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-authorized-keys
  namespace: default
data:
  authorized_keys: |
    <put your id_rsa.pub here>
---

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ssh-experiment
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sshd
    spec:
      containers:
        - name: sshd-experiment
          image: panubo/sshd
          imagePullPolicy: Always
          ports:
            - name: sshd
              containerPort: 22
              protocol: TCP
          env:
            - name: "SSH_ENABLE_ROOT"
              value: "true"
          volumeMounts:
            - name: config
              mountPath: /root/.ssh/authorized_keys
              subPath: authorized_keys
      volumes:
        - name: config
          configMap:
            name: ssh-authorized-keys
---

apiVersion: v1
kind: Service
metadata:
  name: sshd-experiment
  namespace: default
spec:
  selector:
    app: sshd
  ports:
    - name: sshd
      port: 22
      targetPort: 22

Then I can log in like this:

computer $ ssh root@blah -p 10222
The authenticity of host '[blah]:10222 ([blah]:10222)' can't be established.
ECDSA key fingerprint is blah
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[blah]:10222,[blah]:10222' (ECDSA) to the list of known hosts.
Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org>.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.

ssh-experiment-5b9644c648-68ks7:~# exit

Do you need the proxy protocol for any particular reason? Do you absolutely require it, or were you trying to log in to an ssh container and just thought you needed it?

Because it appears that I don't need it.

@christhomas

commented Apr 19, 2019

I'm looking at your config and wondering what syntax that is. It's not the default nginx ingress controller syntax that I recognise, and although you're using AWS, I honestly can't tell what you're doing. Are you using a special syntax? Did you follow an example that you can show me, so I can understand what you are following?

@nmiculinic

Author

commented Apr 19, 2019

It's helm chart syntax: https://github.com/helm/charts/tree/master/stable/nginx-ingress

Here are the resulting configMaps:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  enable-vts-status: "true"
  use-proxy-protocol: "true"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"enable-vts-status":"true","use-proxy-protocol":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"nginx-ingress","chart":"nginx-ingress-1.3.1","component":"controller","heritage":"Tiller","release":"nginx-internal"},"name":"nginx-internal-nginx-ingress-controller","namespace":"ingress"}}
  creationTimestamp: "2019-03-18T11:57:56Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.3.1
    component: controller
    heritage: Tiller
    release: nginx-internal
  name: nginx-internal-nginx-ingress-controller
  namespace: ingress
  resourceVersion: "8706450"
  selfLink: /api/v1/namespaces/ingress/configmaps/nginx-internal-nginx-ingress-controller
  uid: 1155d796-4975-11e9-8117-061409733802
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  "22": devtools/gitlab-gitlab-shell:22::PROXY
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"22":"devtools/gitlab-gitlab-shell:22::PROXY"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"nginx-ingress","chart":"nginx-ingress-1.3.1","component":"controller","heritage":"Tiller","release":"nginx-internal"},"name":"nginx-internal-nginx-ingress-tcp","namespace":"ingress"}}
  creationTimestamp: "2019-04-08T13:51:08Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.3.1
    component: controller
    heritage: Tiller
    release: nginx-internal
  name: nginx-internal-nginx-ingress-tcp
  namespace: ingress
  resourceVersion: "10527905"
  selfLink: /api/v1/namespaces/ingress/configmaps/nginx-internal-nginx-ingress-tcp
  uid: 5c9278fe-5a05-11e9-ac88-061409733802

@christhomas

commented Apr 19, 2019

Ah OK. Well, you can see my configuration above, which is the plain Kubernetes syntax; perhaps you can spot a problem or a difference between them. I don't really use helm for anything other than creating templates from values; then I kubectl apply them directly.

@nmiculinic

Author

commented Apr 19, 2019

How did you configure the load balancer for the nginx-ingress service, and which nginx ingress controller are you using?

@christhomas

commented Apr 19, 2019

I'm on a bare-metal cluster; I just use a DaemonSet for ingress-nginx and DNS load balancing to spread the requests across the cluster. Works well for me.

@nmiculinic

Author

commented Apr 19, 2019

Do you use the TCP proxy protocol? On the ingress side of things, each TCP connection starts with a line such as:

PROXY TCP4 10.88.5.97 10.233.80.34 38514 22

@christhomas

commented Apr 19, 2019

No, as I explained, I didn't need to; on my cluster I was able to reach the ssh server without adding it.

@nmiculinic

Author

commented Apr 19, 2019

Then you don't have the same setup as me. I'm using the proxy protocol, so the AWS load balancer prepends a line like that to each connection. nginx-ingress terminates the proxy protocol and, in my case, passes it on to the sshd daemon, which doesn't know how to handle it.


@christhomas

commented Apr 19, 2019

I know what the proxy protocol is; I gave you the original line on how to enable it for TCP services. With postfix I need this protocol because it requires the source IP address for filtering out open-relay connections according to its rule sets.

But I don't know why nginx ingress is passing along the proxy protocol from your ELB/ALB if you didn't instruct it to. For me, I have to enable it manually for it to pass that protocol along.

@nmiculinic

Author

commented Apr 19, 2019

I'm not sure...

Here's the full nginx manifest I'm deploying to the cluster.

nginx-internal.zip

@christhomas

commented Apr 19, 2019

Hmmm, I don't see anything wrong with this configuration, apart from a small misunderstanding: sshd doesn't understand the proxy protocol, so you should remove the ::PROXY part from the tcp services entry; that part tells nginx to use it, and sshd won't like that.

I thought you meant that you wanted it, but I've realised over the course of this discussion that you don't. What you want is for the proxy protocol coming from the ELB/ALB to be terminated, passing along just the connection itself.

Perhaps you need to terminate the proxy protocol at the ELB/ALB layer for that tcp service, before it reaches your cluster?

I also notice you've got use-proxy-protocol enabled, and I never used that on my cluster; perhaps you can try removing it and see whether it helps?

@nmiculinic

Author

commented Apr 19, 2019

I've tried many permutations of enabling/disabling, and I don't really understand what I'm doing. What I want is the proxy protocol enabled for 443 and 80 so I can filter by IP. For port 22 I don't need to filter by IP, as per requirements.

@bradym

commented May 7, 2019

Came across this issue in my attempt to get ssh running through ingress-nginx. I'm running k8s in AWS with an ELB sending traffic to ingress-nginx. I also have the proxy protocol enabled, as I need it for other apps running in my cluster.

For my setup (and the one @nmiculinic posted above in the nginx-internal.zip file) I'm using the service.beta.kubernetes.io/aws-load-balancer-proxy-protocol annotation to enable proxy-protocol.

According to the docs at https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/

Used on the service to enable the proxy protocol on an ELB. Right now we only accept the value * which means enabling the proxy protocol on all ELB backends. In the future we could adjust this to allow setting the proxy protocol only on certain backends.

This means it's not currently possible to use the same load balancer for both proxy-protocol-enabled apps and apps without proxy-protocol. It's not a limitation of ingress-nginx, but rather of how k8s configures the ELBs.

Since I need the proxy protocol enabled for existing apps, I'm planning to solve the issue by using a separate service of type LoadBalancer that does not have proxy-protocol enabled.

Just a note for future searchers who end up on this issue looking for help.
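A sketch of that workaround, assuming the controller pods carry the usual chart labels; the Service name, namespace, and label values here are illustrative, not from the thread:

```yaml
# Hypothetical second Service exposing only port 22, without the
# aws-load-balancer-proxy-protocol annotation, so its ELB speaks plain TCP.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-ssh        # illustrative name
  namespace: ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    # note: no aws-load-balancer-proxy-protocol annotation here
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress           # must match your controller pods' labels
    component: controller
  ports:
    - name: ssh
      port: 22
      targetPort: 22
```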

@bekriebel

commented May 7, 2019

I've seen several recommendations to use ::PROXY for the TCP services. For me, removing one colon worked properly. For example:

22: "gitlab/gitlab-shell:22:PROXY"

exposed port 22, and the proxy information isn't passed on to the backend for that port.

@christhomas

commented May 12, 2019

Does anybody really understand the two PROXY options on the end? I've read the docs and I'm still not sure.

The four combinations are:

22: "gitlab/gitlab-shell:22"
22: "gitlab/gitlab-shell:22:PROXY"
22: "gitlab/gitlab-shell:22:PROXY:PROXY"
22: "gitlab/gitlab-shell:22::PROXY"

I understand the first one: it doesn't do anything except pass along the connection. But the following three I'm not 100% sure on.

Does anybody feel confident to explain them?

@nmiculinic

Author

commented May 14, 2019

I don't. And it's true that the documentation is lacking in this regard.

@gregd72002

commented Jun 19, 2019

For an explanation look here: https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/controller.go#L302

The syntax is:

<(str)namespace>/<(str)service>:<(intstr)port>[:<("PROXY")decode>:<("PROXY")encode>]

So the first PROXY parameter enables decoding (this translates into the nginx configuration "proxy_protocol" on the listen directive).

The second PROXY enables encoding towards the upstream.
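Based on that explanation, the four combinations from earlier in the thread would map roughly as follows (a commented sketch; the port keys here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # no decode, no encode: plain TCP in, plain TCP out
  "2201": "gitlab/gitlab-shell:22"
  # decode only: nginx expects a PROXY header from the client (e.g. an ELB)
  # and strips it before forwarding plain TCP to the upstream
  "2202": "gitlab/gitlab-shell:22:PROXY"
  # decode and encode: strip the incoming PROXY header, then send a fresh
  # one to the upstream
  "2203": "gitlab/gitlab-shell:22:PROXY:PROXY"
  # encode only: plain TCP in, but a PROXY header is sent to the upstream
  "2204": "gitlab/gitlab-shell:22::PROXY"
```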
