
Enabling enable-ssl-passthrough breaks client IP (all clients are 127.0.0.1) #8052

Closed
logan2211 opened this issue Dec 19, 2021 · 14 comments
Labels
kind/support Categorizes issue or PR as a support question. needs-priority needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@logan2211

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):


NGINX Ingress controller
Release: v1.1.0
Build: cacbee8
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9


Kubernetes version (use kubectl version):

Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2+k3s2", GitCommit:"3f5774b41eb475eb10c93bb0ce58459a6f777c5f", GitTreeState:"clean", BuildDate:"2021-10-05T20:29:33Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
    Limestone Networks Bare Metal Cloud

  • OS (e.g. from /etc/os-release):
    Ubuntu 20.04

  • Kernel (e.g. uname -a):
    5.4.0-90-generic #101-Ubuntu

  • Install tools:

    • Please mention how/where the cluster was created, like kubeadm/kops/minikube/kind etc.
      k3s
  • How was the ingress-nginx-controller installed:

    • If helm was used then please show output of helm ls -A | grep -i ingress
      ingress-nginx-ingress-nginx-private ingress-nginx 4 2021-12-19 05:48:37.258964116 +0000 UTC deployed ingress-nginx-4.0.13 1.1.0

    • If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>

USER-SUPPLIED VALUES:
controller:
  electionID: ingress-controller-private-leader
  extraArgs:
    enable-ssl-passthrough: ""
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx-private
    default: true
    name: nginx-private
  replicaCount: 2
  service:
    annotations:
      purelb.io/service-group: private-pool
    externalTrafficPolicy: Local

What happened:
Prior to enabling enable-ssl-passthrough, client IPs are reflected correctly in the ingress controller logs, and features like whitelist-source-range work as expected. After enabling enable-ssl-passthrough, client IPs are always shown as 127.0.0.1 in the nginx logs, and ingresses using whitelist-source-range stop working as expected. Note that this occurs on all ingresses, regardless of whether SSL passthrough is enabled on the ingress. To reproduce, simply enable the SSL passthrough feature on the controller.
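For context, this is the kind of allowlist that breaks once every client appears as 127.0.0.1 (a minimal sketch; the name and CIDR are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard  # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.3.0.0/24"  # hypothetical CIDR; 127.0.0.1 never matches it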

before:
ingress-nginx-ingress-nginx-private-controller-698f69fd7f-ts5rc controller 10.3.0.66 - - [19/Dec/2021:17:26:20 +0000] "GET / HTTP/2.0" 200 786 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 22 0.001 [kube-system-kube-system-kubernetes-dashboard-443] [] 192.168.73.41:8443 786 0.000 200 4d19abd96b2a62d04d74906ecd1ccdac

after:
ingress-nginx-ingress-nginx-private-controller-6f945d6d84-4j9fw controller 2021/12/19 17:23:10 [error] 576#576: *18182 access forbidden by rule, client: 127.0.0.1, server: dashboard.k8s.domain.net, request: "GET / HTTP/2.0", host: "dashboard.k8s.domain.net"

How to reproduce it:
Add --enable-ssl-passthrough to the controller command line; all client IPs will then show up as 127.0.0.1 in the nginx logs.
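With the Helm chart, that is the extraArgs entry already shown in the values above (the empty string renders the flag with no value):

controller:
  extraArgs:
    enable-ssl-passthrough: ""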

@logan2211 logan2211 added the kind/bug Categorizes issue or PR as related to a bug. label Dec 19, 2021
@k8s-ci-robot
Contributor

@logan2211: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Dec 19, 2021
@longwuyuan
Contributor

/remove-kind bug
/kind support

Try this documentation https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough

Then please provide all related information from the live state of the cluster: the curl request in full, the ingress describe, the service describe, the controller pod logs, and any other related information.

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Dec 22, 2021
@xom4ek

xom4ek commented Jan 2, 2022

If you route traffic directly to the controller without using the PROXY protocol, nginx will set the real IP to $remote_addr, and $remote_addr contains 127.0.0.1, as this is the address of the local TCP proxy.
In order to use the actual source IP connecting to the local TCP proxy, the following options need to be added to the configuration:
enable-real-ip: "true" and forwarded-for-header: "proxy_protocol"
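Applied to the Helm values from the report, a minimal sketch would be (controller.config feeds the controller's ConfigMap; everything else stays as in the issue description):

controller:
  config:
    enable-real-ip: "true"
    forwarded-for-header: "proxy_protocol"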

@maxisam

maxisam commented Jan 25, 2022

I have this problem as well.

IMHO, this part of the docs is not really clear about how to preserve the client IP address:

The client IP address will be set based on the use of PROXY protocol or from the X-Forwarded-For header value when use-forwarded-headers is enabled.

And thanks to @xom4ek. Mine is working correctly now.

My setup is MetalLB + Weave

@levimm

levimm commented Mar 31, 2022

@xom4ek Does this config work for an ingress object with the ssl-passthrough annotation? It seems the configuration only fixes the scenario for ingresses without the passthrough annotation.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 29, 2022
@aslafy-z
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 29, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 27, 2022
@aslafy-z
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 27, 2022
@rikatz
Contributor

rikatz commented Oct 11, 2023

/close
There's not much beyond what was said above: the current TLS passthrough implementation passes the traffic to an internal process, which then passes it on to NGINX running on localhost.
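In other words, a rough sketch of the data path (the internal listener details are implementation-specific and may vary by version):

client ──TCP :443──▶ TLS-passthrough proxy (a Go process inside the controller pod)
         ├── SNI matches a passthrough host ──▶ backend pod, TLS untouched
         └── otherwise ──▶ NGINX listening on 127.0.0.1, so $remote_addr becomes
             127.0.0.1 unless the proxy-supplied client address is trusted via
             enable-real-ip: "true" and forwarded-for-header: "proxy_protocol"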

@k8s-ci-robot
Contributor

@rikatz: Closing this issue.

In response to this:

/close
There's not much beyond what was said above: the current TLS passthrough implementation passes the traffic to an internal process, which then passes it on to NGINX running on localhost.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@letthefireflieslive

If you route traffic directly to the controller without using the PROXY protocol, nginx will set the real IP to $remote_addr, and $remote_addr contains 127.0.0.1, as this is the address of the local TCP proxy. In order to use the actual source IP connecting to the local TCP proxy, the following options need to be added to the configuration: enable-real-ip: "true" and forwarded-for-header: "proxy_protocol"

Thanks @xom4ek, this works for our AKS cluster!

I tried to dig deeper to understand more. Please verify my understanding, both for my own sake and to help other folks who find this page.

We don't use the PROXY protocol; if it were used, no additional nginx configuration should be needed. The local TCP proxy is part of the nginx ingress controller itself: it handles the incoming TCP connections for the controller pods.

forwarded-for-header: "proxy_protocol" tells the nginx controller to look for the client's real IP in the PROXY protocol header instead of the usual X-Forwarded-For header. But this alone is not enough to achieve the goal.

enable-real-ip: "true" tells nginx to trust and use the address supplied via the header selected by forwarded-for-header (here the PROXY protocol header, otherwise X-Forwarded-For), replacing the original $remote_addr and $remote_port variables with the real client address.
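Roughly, those two ConfigMap keys render into ngx_http_realip_module directives along these lines (a simplified sketch; the trusted source range is governed by proxy-real-ip-cidr, which defaults to 0.0.0.0/0):

real_ip_header      proxy_protocol;  # from forwarded-for-header: "proxy_protocol"
real_ip_recursive   on;
set_real_ip_from    0.0.0.0/0;       # which peer addresses are trusted to supply the real IP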

@bardiharborow

If extra steps are required to recover client IPs after enabling SSL passthrough, I feel that this should be covered more clearly by the documentation.

@longwuyuan
Contributor

If extra steps are required to recover client IPs after enabling SSL passthrough, I feel that this should be covered more clearly by the documentation.

PRs to improve the docs would be super welcome. But there should be data somewhere showing a real config on a real cluster, with real curl commands and their outputs, and the corresponding controller logs showing the results.
