Disable proxy protocol for tcp service #3984

Closed
nmiculinic opened this issue Apr 9, 2019 · 28 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@nmiculinic

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Not really sure. Per the documentation it should be possible, but following it I cannot make it work. So it's a BUG REPORT if it's not my mistake, and a FEATURE REQUEST if the required feature is not implemented.

NGINX Ingress controller version:
0.22.0

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Ubuntu 18.04
  • Kernel (e.g. uname -a):
  • Install tools: kubespray
  • Others:

What happened:

I've installed the NGINX ingress controller via the stable Helm chart, version 1.3.1. I want nginx to use proxy_protocol for all http/https ingresses, but not for the SSH service listed in the TCP services. Here is my values.yaml:

podSecurityPolicy:
  enabled: true
controller:
  replicaCount: 2
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    type: LoadBalancer
  ingressClass: "nginx-internal"
  priorityClassName: "k8s-cluster-critical"
  publishService:
    enabled: true
  metrics:
    enabled: true
  stats:
    enabled: true
tcp:
  22: "devtools/gitlab-gitlab-shell:22:PROXY"

What you expected to happen:
That proxy_protocol works correctly for ports 443 and 80, while the proxy protocol isn't used for port 22. (I've tried without PROXY, with :PROXY:PROXY, and with :listen.)

The proxy protocol correctly works for http and https ingresses; however, whenever I try to SSH into the TCP service I'm met with:

Bad protocol version identification 'PROXY TCP4 10.89.0.2 10.88.5.97 60630 22' from 10.233.80.7 port 60058

That is, I want nginx to terminate the proxy protocol and present a plain TCP connection to the SSH backend, without that extra header line. However, from the documentation it isn't clear whether that's implemented, nor how to do it.

Anything else we need to know:

Related issues: #659

@christhomas

This might not work, but it's worth a try.

Have you tried: devtools/gitlab-gitlab-shell:22::PROXY

You leave the first position empty and put PROXY in the second position. This helped when getting my Postfix server working, which is also exposed as a TCP service.

@nmiculinic
Author

Yes, that's the one that solved it for this use case!!!

This is so arcane btw

@nmiculinic
Author

or not...I got happy too soon

@christhomas

I'm wondering how to try this out quickly on my cluster. I think if I get an sshd image from Docker I can build a test case to see whether I can log in to the container through ingress-nginx.

I'll be back

@nmiculinic
Author

So here is my new config:

podSecurityPolicy:
  enabled: true
controller:
  replicaCount: 2
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    externalTrafficPolicy: "Local"
    type: LoadBalancer
  ingressClass: "nginx-internal"
  priorityClassName: "k8s-cluster-critical"
  publishService:
    enabled: true
  extraArgs:
    default-ssl-certificate: ingress/wild-fashionnetwork-com
  metrics:
    enabled: true
  stats:
    enabled: true
tcp:
  22: "devtools/gitlab-gitlab-shell:22::PROXY"

@christhomas

I just set up an sshd container inside my remote Kubernetes cluster. I didn't need to use the proxy protocol; I just added my key to the authorized keys and could log in without any issue.

Ingress Nginx TCP Services

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  10222: "default/sshd-experiment:22::PROXY"

then for the app

apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-authorized-keys
  namespace: default
data:
  authorized_keys: |
       <put your id_rsa.pub here>
---

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ssh-experiment
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sshd
    spec:
      containers:
        - name: sshd-experiment
          image: panubo/sshd
          imagePullPolicy: Always
          ports:
            - name: sshd
              containerPort: 22
              protocol: TCP
          env:
            - name: "SSH_ENABLE_ROOT"
              value: "true"
          volumeMounts:
            - name: config
              mountPath: /root/.ssh/authorized_keys
              subPath: authorized_keys
      volumes:
        - name: config
          configMap:
            name: ssh-authorized-keys
---

apiVersion: v1
kind: Service
metadata:
  name: sshd-experiment
  namespace: default
spec:
  selector:
    app: sshd
  ports:
    - name: sshd
      port: 22
      targetPort: 22

Then I can log in like this:

computer $ ssh root@blah -p 10222
The authenticity of host '[blah]:10222 ([blah]:10222)' can't be established.
ECDSA key fingerprint is blah
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[blah]:10222,[blah]:10222' (ECDSA) to the list of known hosts.
Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org>.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.

ssh-experiment-5b9644c648-68ks7:~# exit

Do you need the proxy protocol for any particular reason? Do you absolutely require it? Or were you trying to log in to an SSH container and just thought you needed it?

Because it appears that I apparently don't need it.

@christhomas

I'm looking at your config and wondering what syntax that is. It's not the default nginx ingress controller syntax I recognise, and although you're using AWS, I honestly can't tell what you're doing. Are you using a special syntax? Did you follow an example you can show me, so I can understand what you're following?

@nmiculinic
Author

It's helm chart syntax: https://github.com/helm/charts/tree/master/stable/nginx-ingress

Here are the resulting ConfigMaps:

apiVersion: v1
data:
  enable-vts-status: "true"
  use-proxy-protocol: "true"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"enable-vts-status":"true","use-proxy-protocol":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"nginx-ingress","chart":"nginx-ingress-1.3.1","component":"controller","heritage":"Tiller","release":"nginx-internal"},"name":"nginx-internal-nginx-ingress-controller","namespace":"ingress"}}
  creationTimestamp: "2019-03-18T11:57:56Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.3.1
    component: controller
    heritage: Tiller
    release: nginx-internal
  name: nginx-internal-nginx-ingress-controller
  namespace: ingress
  resourceVersion: "8706450"
  selfLink: /api/v1/namespaces/ingress/configmaps/nginx-internal-nginx-ingress-controller
  uid: 1155d796-4975-11e9-8117-061409733802
---
apiVersion: v1
data:
  "22": devtools/gitlab-gitlab-shell:22::PROXY
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"22":"devtools/gitlab-gitlab-shell:22::PROXY"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"nginx-ingress","chart":"nginx-ingress-1.3.1","component":"controller","heritage":"Tiller","release":"nginx-internal"},"name":"nginx-internal-nginx-ingress-tcp","namespace":"ingress"}}
  creationTimestamp: "2019-04-08T13:51:08Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.3.1
    component: controller
    heritage: Tiller
    release: nginx-internal
  name: nginx-internal-nginx-ingress-tcp
  namespace: ingress
  resourceVersion: "10527905"
  selfLink: /api/v1/namespaces/ingress/configmaps/nginx-internal-nginx-ingress-tcp
  uid: 5c9278fe-5a05-11e9-ac88-061409733802

@christhomas

Ah, OK. Well, you can see my configuration above in plain Kubernetes syntax; perhaps you can spot a problem or a difference between them. I don't really use Helm for anything other than rendering templates from values, then I kubectl apply them directly.

@nmiculinic
Author

How did you configure the nginx-ingress LoadBalancer Service, and which nginx ingress controller are you using?

@christhomas

I'm on a bare-metal cluster and just use a DaemonSet for ingress-nginx, with DNS load balancing to spread requests across the cluster. Works well for me.

@nmiculinic
Author

Do you use the TCP proxy protocol? Each TCP connection starts with a line such as:

PROXY TCP4 10.88.5.97 10.233.80.34 38514 22

on the ingress side of things.

@christhomas

No, as I explained, I didn't need to; on my cluster I was able to reach the SSH server without enabling it.

@nmiculinic
Author

Then you don't have the same setup as me. I'm using the proxy protocol, so the AWS load balancer prepends a line like that to each connection. nginx-ingress terminates the proxy protocol and, in my case, passes it on to the sshd daemon, which doesn't know how to handle it.

@christhomas

I know what the proxy protocol is; I gave you the original line on how to enable it for TCP services. With Postfix I need this protocol because it requires the source IP address for filtering out open relay connections according to its rule sets.

But I don't know why nginx ingress is passing along the proxy protocol from your ELB/ALB if you didn't instruct it to. For me, I have to enable it manually for it to pass that protocol along.

@nmiculinic
Author

I'm not sure...

Here's the full nginx manifest I'm deploying to the cluster:

nginx-internal.zip

@christhomas

Hmm, I don't see anything wrong with this configuration, apart from a small misunderstanding: sshd doesn't understand the proxy protocol, so you should remove the ::PROXY part from the TCP services entry. With it there, nginx will use the proxy protocol towards the backend, and sshd won't like that.

I thought you meant you wanted it, but over the course of this discussion I've realised you don't. What you want is for the proxy protocol coming from the ELB/ALB to be terminated, with just the connection itself passed along.

Perhaps you need to terminate the PP at the ELB/ALB layer for that TCP service before it reaches your cluster?

I also notice you've got use-proxy-protocol enabled, which I never used on my cluster; perhaps you can try removing it and see whether that helps?

@nmiculinic
Author

I've tried many permutations of enabling/disabling it, and I don't really understand what I'm doing. What I want is the proxy protocol enabled for ports 443 and 80 so I can filter by IP. For port 22 I don't need to filter by IP, as per my requirements.

@bradym

bradym commented May 7, 2019

Came across this issue in my attempt to get SSH running through ingress-nginx. I'm running k8s in AWS with an ELB sending traffic to ingress-nginx. I also have the proxy protocol enabled, as I need it for other apps running in my cluster.

For my setup (and the one @nmiculinic posted above in the nginx-internal.zip file) I'm using the service.beta.kubernetes.io/aws-load-balancer-proxy-protocol annotation to enable proxy-protocol.

According to the docs at https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/

Used on the service to enable the proxy protocol on an ELB. Right now we only accept the value * which means enabling the proxy protocol on all ELB backends. In the future we could adjust this to allow setting the proxy protocol only on certain backends.

This means it's not currently possible to use the same load balancer for both proxy-protocol-enabled apps and apps without it. This isn't a limitation of ingress-nginx, but rather of how Kubernetes configures the ELBs.

Since I need the proxy protocol enabled for existing apps, I'm planning to solve the issue by using a separate Service of type LoadBalancer that does not have proxy-protocol enabled.

Just a note for future searchers who end up on this issue looking for help.
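
For anyone following that route, here is a minimal sketch of such a second Service (not from this thread; names are illustrative, and the selector assumes the chart's default controller labels shown in the ConfigMaps above, so verify them against your install). It deliberately carries no aws-load-balancer-proxy-protocol annotation, so its ELB speaks plain TCP:

apiVersion: v1
kind: Service
metadata:
  # hypothetical name; pick whatever fits your release
  name: nginx-internal-nginx-ingress-controller-ssh
  namespace: ingress
  annotations:
    # internal ELB as in the original values.yaml, but WITHOUT
    # service.beta.kubernetes.io/aws-load-balancer-proxy-protocol
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    # assumed controller pod labels (from the chart); check with
    # kubectl -n ingress get pods --show-labels
    app: nginx-ingress
    component: controller
    release: nginx-internal
  ports:
    - name: ssh
      port: 22
      targetPort: 22

With that, the port-22 entry in the tcp ConfigMap would not need any PROXY options at all (e.g. 22: "devtools/gitlab-gitlab-shell:22"), since neither the ELB nor the backend speaks the proxy protocol on that path.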

@bekriebel

I've seen several recommendations to use ::PROXY for the TCP services. For me, removing one colon worked properly. For example:

22: "gitlab/gitlab-shell:22:PROXY"

worked for exposing port 22 without the proxy protocol header being passed to the backend for that port.

@christhomas

does anybody really understand the two PROXY options on the end? I've read the docs and I'm still not sure.

The four combinations are:

22: "gitlab/gitlab-shell:22"
22: "gitlab/gitlab-shell:22:PROXY"
22: "gitlab/gitlab-shell:22:PROXY:PROXY"
22: "gitlab/gitlab-shell:22::PROXY"

I understand the first one: it doesn't do anything except pass the connection along. But I'm not 100% sure about the following three.

Does anybody feel confident to explain them?

@nmiculinic
Author

I don't. And it's true that the documentation is lacking in regard to this.

@gregd72002

gregd72002 commented Jun 19, 2019

For an explanation, look here: https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/controller.go#L302

The syntax is:

<(str)namespace>/<(str)service>:<(intstr)port>[:<("PROXY")decode>:<("PROXY")encode>]

So the first PROXY parameter is for decoding (this translates into the following nginx configuration: "proxy_protocol" on the listen directive).

The second PROXY is the encoding for the upstream, i.e. nginx sends the proxy protocol header on to the backend.
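
Applied to the original problem, that suggests a decode-only entry: accept the proxy protocol arriving from the ELB on port 22, but send plain TCP to the backend. A minimal sketch of the tcp ConfigMap, reusing the name, namespace, and service from the configs posted above (and assuming the ELB really is adding the proxy protocol on port 22, as the "*" annotation implies):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-internal-nginx-ingress-tcp
  namespace: ingress
data:
  # decode only: proxy_protocol on the listen directive, plain TCP to the upstream
  "22": "devtools/gitlab-gitlab-shell:22:PROXY"

This matches what @bekriebel reported working above: PROXY in the decode position and nothing in the encode position.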

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Nov 19, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Dec 19, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
