Description
What happened:
We run an Ingress with client certificate verification, configured via the following annotations:
annotations:
  nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-int/mud-ca-cert
  nginx.ingress.kubernetes.io/auth-tls-verify-client: 'on'
  nginx.ingress.kubernetes.io/auth-tls-verify-depth: '1'
  nginx.ingress.kubernetes.io/auth-tls-match-cn: 'CN=api-haproxy-rcm-planner-int'
Clients with a certificate matching the CN can access the Ingress; clients with another CN or no certificate cannot, as expected.
If we change the value of nginx.ingress.kubernetes.io/auth-tls-match-cn, clients whose CN no longer matches can still access, while clients with the new, matching CN are denied. It looks as if the controller ignores changes to the nginx.ingress.kubernetes.io/auth-tls-match-cn value. After a controller restart, the Ingress works as expected.
The changed annotations look like:
annotations:
  nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-int/mud-ca-cert
  nginx.ingress.kubernetes.io/auth-tls-verify-client: 'on'
  nginx.ingress.kubernetes.io/auth-tls-verify-depth: '1'
  nginx.ingress.kubernetes.io/auth-tls-match-cn: 'CN=NOMATCHapi-haproxy-rcm-planner-int'
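For reference, a minimal sketch of how such a change and the restart workaround can be applied from the command line. We normally manage the Ingress declaratively, so the kubectl commands below are only the imperative equivalent; names are taken from the outputs further down in this report.

# Hypothetical imperative equivalent of editing the Ingress manifest:
kubectl -n rcm-planner-backend-int annotate ingress rcm-planner-backend-1 \
  nginx.ingress.kubernetes.io/auth-tls-match-cn='CN=NOMATCHapi-haproxy-rcm-planner-int' --overwrite

# Workaround: the new CN is only enforced after the controller is restarted, e.g.:
kubectl -n zkezone-nginx rollout restart deployment zkezone-controller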
What you expected to happen:
Changes to nginx.ingress.kubernetes.io/auth-tls-match-cn are picked up by the controller without a restart.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.9.5
Build: f503c4bb5fa7d857ad29e94970eb550c2bc00b7c
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
-------------------------------------------------------------------------------
Kubernetes version (use kubectl version):
Client Version: v1.28.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.6
Environment:
- Cloud provider or hardware configuration: kubeadm-based vanilla Kubernetes on x86_64 virtual machines, using a MetalLB load balancer
- OS (e.g. from /etc/os-release):
NAME="Oracle Linux Server"
VERSION="8.9"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.9"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.9"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:9:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://github.com/oracle/oracle-linux"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.9
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.9
- Kernel (e.g. uname -a):
Linux kint-m01 4.18.0-513.9.1.el8_9.x86_64 #1 SMP Thu Nov 30 15:31:16 PST 2023 x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- kubeadm
- Basic cluster related info:
kubectl version
Client Version: v1.28.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.6
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kint-e01 Ready <none> 630d v1.28.6 10.162.107.158 10.162.107.158 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-e02 Ready <none> 42d v1.28.6 10.162.107.58 10.162.107.58 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-i01 Ready <none> 687d v1.28.6 172.17.114.212 172.17.114.212 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-i02 Ready <none> 687d v1.28.6 172.17.114.213 172.17.114.213 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-m01 Ready control-plane 687d v1.28.6 172.17.114.209 172.17.114.209 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-m02 Ready control-plane 687d v1.28.6 172.17.114.210 172.17.114.210 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-m03 Ready control-plane 687d v1.28.6 172.17.114.211 172.17.114.211 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-s01 Ready <none> 687d v1.28.6 172.17.114.214 172.17.114.214 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-w01 Ready <none> 687d v1.28.6 172.17.114.216 172.17.114.216 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-w02 Ready <none> 687d v1.28.6 172.17.114.217 172.17.114.217 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-w03 Ready <none> 687d v1.28.6 172.17.114.218 172.17.114.218 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-w04 Ready <none> 687d v1.28.6 172.17.114.219 172.17.114.219 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
kint-w05 Ready <none> 86d v1.28.6 172.17.114.122 172.17.114.122 Oracle Linux Server 8.9 4.18.0-513.9.1.el8_9.x86_64 cri-o://1.28.3
- How was the ingress-nginx-controller installed:
- If helm was used then please show output of helm ls -A | grep -i ingress:
ingress-nginx zkezone-nginx 5 2024-01-19 13:04:59.692826584 +0100 CET deployed ingress-nginx-4.9.0 1.9.5
- If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>:
USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    patch:
      tolerations:
      - effect: NoSchedule
        key: win.sbb.ch/external-worker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - kint-e01
  allowSnippetAnnotations: true
  config:
    ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
    ssl-protocols: TLSv1.2 TLSv1.3
    worker-processes: 4
  ingressClass: zkezone
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-zkezone
    default: false
    name: zkezone
  metrics:
    enabled: true
  podLabels:
    prometheus_monitoring: metricport
  replicaCount: 1
  service:
    externalIPs:
    - 10.162.107.158
    type: ClusterIP
  tolerations:
  - effect: NoSchedule
    key: win.sbb.ch/external-worker
  updateStrategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  watchIngressWithoutClass: false
fullnameOverride: zkezone
- if you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances
All ingress-nginx instances are the same version and were installed with Helm.
- Default Ingress:
USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    patch:
      tolerations:
      - effect: NoSchedule
        key: win.sbb.ch/infrastructure-worker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: win.sbb.ch/infrastructure-worker
            operator: Exists
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/instance
            operator: In
            values:
            - ingress-nginx
          - key: app.kubernetes.io/component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  allowSnippetAnnotations: true
  config:
    ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
    ssl-protocols: TLSv1.2 TLSv1.3
    worker-processes: 3
  ingressClass: nginx
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-nginx
    default: true
    name: nginx
  metrics:
    enabled: true
  podLabels:
    prometheus_monitoring: metricport
  replicaCount: 2
  service:
    loadBalancerIP: 172.17.114.245
    type: LoadBalancer
  tolerations:
  - effect: NoSchedule
    key: win.sbb.ch/infrastructure-worker
  updateStrategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  watchIngressWithoutClass: true
- zke-zon2 Ingress:
USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    patch:
      tolerations:
      - effect: NoSchedule
        key: win.sbb.ch/external-worker
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - kint-e02
  allowSnippetAnnotations: true
  config:
    ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
    ssl-protocols: TLSv1.2 TLSv1.3
    worker-processes: 4
  ingressClass: zkezon2
  ingressClassByName: true
  ingressClassResource:
    controllerValue: k8s.io/ingress-zkezon2
    default: false
    name: zkezon2
  metrics:
    enabled: true
  podLabels:
    prometheus_monitoring: metricport
  replicaCount: 1
  service:
    externalIPs:
    - 10.162.107.58
    type: ClusterIP
  tolerations:
  - effect: NoSchedule
    key: win.sbb.ch/external-worker
  updateStrategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  watchIngressWithoutClass: false
fullnameOverride: zkezon2
- Current State of the controller:
kubectl describe ingressclasses
Name: nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.9.5
helm.sh/chart=ingress-nginx-4.9.0
Annotations: ingressclass.kubernetes.io/is-default-class: true
meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Controller: k8s.io/ingress-nginx
Events: <none>
Name: zkezon2
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.9.5
helm.sh/chart=ingress-nginx-4.9.0
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: zkezone2-nginx
Controller: k8s.io/ingress-zkezon2
Events: <none>
Name: zkezone
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.9.5
helm.sh/chart=ingress-nginx-4.9.0
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: zkezone-nginx
Controller: k8s.io/ingress-zkezone
Events: <none>
- `kubectl -n <ingresscontrollernamespace> get all -A -o wide`
- `kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>`
- `kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>`
- Current state of ingress object, if applicable:
kubectl -n <appnamespace> get all,ing -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/zkezone-controller-857d75cdb5-mhkt6 1/1 Running 0 39m 172.17.230.62 kint-e01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/zkezone-controller ClusterIP 172.17.46.57 10.162.107.158 80/TCP,443/TCP 311d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/zkezone-controller-admission ClusterIP 172.17.46.162 <none> 443/TCP 311d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/zkezone-controller-metrics ClusterIP 172.17.47.59 <none> 10254/TCP 311d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/zkezone-controller 1/1 1 1 311d controller registry.k8s.io/ingress-nginx/controller:v1.9.5@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/zkezone-controller-5d4b6b89d6 0 0 0 311d controller k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5d4b6b89d6
replicaset.apps/zkezone-controller-64446b4f46 0 0 0 311d controller k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=64446b4f46
replicaset.apps/zkezone-controller-7f9c8d4f5d 0 0 0 42d controller registry.k8s.io/ingress-nginx/controller:v1.5.1@sha256:4ba73c697770664c1e00e9f968de14e08f606ff961c76e5d7033a4a9c593c629 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7f9c8d4f5d
replicaset.apps/zkezone-controller-84449496db 0 0 0 6d20h controller registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84449496db
replicaset.apps/zkezone-controller-857d75cdb5 1 1 1 5d23h controller registry.k8s.io/ingress-nginx/controller:v1.9.5@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=857d75cdb5
kubectl -n <appnamespace> describe ing <ingressname>
Name: rcm-planner-backend-1
Labels: app=rcm-planner-backend
win.sbb.ch/argo-appname=rcm-planner-backend-test
Namespace: rcm-planner-backend-test
Address: 172.17.46.57
Ingress Class: zkezone
Default backend: <default>
TLS:
api-tls-1 terminates api-rcm-planner-test-1.mud.sbb.ch
Rules:
Host Path Backends
---- ---- --------
api-rcm-planner-test-1.mud.sbb.ch
/health rcm-planner-backend-health:8081 (172.17.235.193:8081)
/ rcm-planner-backend:8080 (172.17.235.193:8080)
Annotations: nginx.ingress.kubernetes.io/auth-tls-match-cn: CN=api-haproxy-rcm-planner-test
nginx.ingress.kubernetes.io/auth-tls-secret: rcm-planner-backend-test/mud-ca-cert
nginx.ingress.kubernetes.io/auth-tls-verify-client: on
nginx.ingress.kubernetes.io/auth-tls-verify-depth: 1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 47m (x8 over 5d) nginx-ingress-controller Scheduled for sync
Normal Sync 42m (x2 over 44m) nginx-ingress-controller Scheduled for sync
Normal Sync 41m nginx-ingress-controller Scheduled for sync
- If applicable, then, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag:
Clients get a 403 HTTP status code.
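A representative test call looks roughly like the following; the certificate file names are placeholders, and the CN of client.crt is what auth-tls-match-cn is matched against.

curl -v https://api-rcm-planner-test-1.mud.sbb.ch/health/ \
  --cacert ca.crt --cert client.crt --key client.key
# Expected: 200 when the client certificate CN matches auth-tls-match-cn, 403 otherwise.
# Observed after changing the annotation: the previously matching CN still gets 200 and
# the newly matching CN gets 403, until the controller is restarted.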
- Others:
- Any other related information:
When applying the change of the nginx.ingress.kubernetes.io/auth-tls-match-cn value, we observe the following controller log. The log covers a section where we changed the value from an invalid CN to the valid one. The clients still get 403 responses even after the reload. After restarting the controller, we see only 200 responses.
2024-01-25T11:46:33.545829457+01:00 10.162.107.158 - - [25/Jan/2024:10:46:33 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.003 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.003 200 5202192cf926110ba82ba54d7ec2140c
2024-01-25T11:46:35.272618809+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 bcd8dc4e532280f5a647ceaa52ff2413
2024-01-25T11:46:35.370921478+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 5e26a2ff61e6eb9be2cc16e9d420af80
2024-01-25T11:46:35.471442805+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 539b356ccfdcaeec3c70e3e1acbf54aa
2024-01-25T11:46:35.562090216+01:00 10.162.107.158 - - [25/Jan/2024:10:46:35 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.001 200 bd5698cd0b87794e932cd3ca547a2402
2024-01-25T11:46:37.061495284+01:00 I0125 10:46:37.061411 2 admission.go:149] processed ingress via admission controller {testedIngressLength:3 testedIngressTime:0.048s renderingIngressLength:3 renderingIngressTime:0.001s admissionTime:43.7kBs testedConfigurationSize:0.049}
2024-01-25T11:46:37.061495284+01:00 I0125 10:46:37.061440 2 main.go:107] "successfully validated configuration, accepting" ingress="rcm-planner-backend-int/rcm-planner-backend-1"
2024-01-25T11:46:37.067445757+01:00 I0125 10:46:37.067363 2 event.go:298] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"rcm-planner-backend-int", Name:"rcm-planner-backend-1", UID:"d40ac8ba-727e-4f6e-a8d3-1f810433a0e6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"333519916", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
2024-01-25T11:46:37.289921287+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.001 200 98b18a4c01e2b4f55a6b933abf41bdb7
2024-01-25T11:46:37.385177321+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 0f1aee2886f9979c7aa3486f21856311
2024-01-25T11:46:37.486010730+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 2c38b854643beeb78415af20c7ca6f72
2024-01-25T11:46:37.579083317+01:00 10.162.107.158 - - [25/Jan/2024:10:46:37 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.001 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 9165a32925096b35074add6584ace962
2024-01-25T11:46:39.306439195+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 71222835bb2ca7276a26da0fc9917f63
2024-01-25T11:46:39.402701009+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - 93fa490adbca2a14b346efa3050532f3
2024-01-25T11:46:39.502599651+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 403 31 "-" "-" 84 0.000 [-] [] - - - - ead272c7895f7f12b35294588ac4aded
2024-01-25T11:46:39.598526523+01:00 10.162.107.158 - - [25/Jan/2024:10:46:39 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 25c83424ef1178d99ab1019ad0bedcbd
2024-01-25T11:46:41.323488518+01:00 10.162.107.158 - - [25/Jan/2024:10:46:41 +0000] "GET /health/ HTTP/1.0" 200 49 "-" "-" 85 0.002 [rcm-planner-backend-test-rcm-planner-backend-health-8081] [] 172.17.235.193:8081 60 0.002 200 cf105869e968dabe93cd9a3531852bec
How to reproduce this issue:
- Install an ingress
- Activate the client certificate check using the annotations described above
- Use a client with a valid CN
- Change the value of nginx.ingress.kubernetes.io/auth-tls-match-cn to something different from the valid CN
- Check whether the client can still access the ingress (see the command sketch below)
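A condensed command-line sketch of these steps; host, namespace, Ingress name and certificate files are placeholders.

# 1. With a client certificate whose CN matches auth-tls-match-cn -> expect 200
curl -s -o /dev/null -w '%{http_code}\n' --cacert ca.crt --cert client.crt --key client.key https://<ingress-host>/

# 2. Change the expected CN to something that does not match the client certificate
kubectl -n <namespace> annotate ingress <ingress-name> \
  nginx.ingress.kubernetes.io/auth-tls-match-cn='CN=some-other-cn' --overwrite

# 3. Repeat step 1: it should now return 403, but it keeps returning 200
#    until the ingress-nginx controller pod is restarted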
Anything else we need to know:
No.