
checkOverlap is a little loose. #11366

Closed
vincentwu0101 opened this issue May 16, 2024 · 6 comments
Labels
  • kind/support: Categorizes issue or PR as a support question.
  • needs-priority
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@vincentwu0101

vincentwu0101 commented May 16, 2024

What happened:
Update an Ingress so that one of its host+path rules becomes identical to a rule in another existing Ingress; the admission webhook's checkOverlap does not detect the overlap.

Ingress screenshots:
![after the ingress is updated, kubectl get ingress](https://github.com/kubernetes/ingress-nginx/assets/34619666/edf79ae1-72bc-42dd-ac40-d6d6bd1902b0)
![the helloworld ingress is created, showing its info](https://github.com/kubernetes/ingress-nginx/assets/34619666/2484e73a-7cf1-4c24-b375-292be4252559)
![httpbin is created first, then helloworld is applied with the same host + (http) path](https://github.com/kubernetes/ingress-nginx/assets/34619666/d7053ec1-2450-4d5d-b211-bd7fb07b7744)

What you expected to happen:
When the same host+path appears in the rules of Ingresses without a canary annotation, the update of that Ingress should be rejected.

What do you think went wrong?:
The per-path check under rules gives up too early.

Defective code (source):

```go
// same ingress
for _, existing := range existingIngresses {
	if existing.ObjectMeta.Namespace == ing.ObjectMeta.Namespace && existing.ObjectMeta.Name == ing.ObjectMeta.Name {
		return nil
	}
}
```

You can read it in context here: https://github.com/kubernetes/ingress-nginx/blob/51847ac1b537c547cdb7bfb06d14e6d3d8476a73/internal/ingress/controller/controller.go#L1815
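To make the failure mode concrete, here is a small, self-contained Go sketch of the check. The types `pathRef` and `ingressLite` and the helper `existingFor` are simplified stand-ins introduced only for illustration; they are not the real ingress-nginx or Kubernetes types, and only the `return nil` branch mirrors the quoted code:

```go
// Sketch only: a simplified model of the overlap check, showing why the early
// "return nil" lets later conflicting rules through.
package main

import "fmt"

// pathRef identifies one host+path rule and the ingress that owns it.
type pathRef struct {
	Namespace, Name, Host, Path string
}

// ingressLite is a stand-in for networking.Ingress with just enough fields
// to demonstrate the check.
type ingressLite struct {
	Namespace, Name string
	Rules           []pathRef
}

// existingFor returns every already-configured rule that serves host+path.
func existingFor(host, path string, configured []pathRef) []pathRef {
	var out []pathRef
	for _, c := range configured {
		if c.Host == host && c.Path == path {
			out = append(out, c)
		}
	}
	return out
}

// checkOverlap mirrors the reported defect: as soon as one path is owned by
// the same ingress being updated, the whole check returns nil, so later rules
// that collide with a DIFFERENT ingress are never inspected.
func checkOverlap(ing ingressLite, configured []pathRef) error {
	for _, r := range ing.Rules {
		existing := existingFor(r.Host, r.Path, configured)
		if len(existing) == 0 {
			continue // nobody serves this host+path yet
		}
		for _, e := range existing {
			if e.Namespace == ing.Namespace && e.Name == ing.Name {
				return nil // <-- defect: stops checking the remaining rules
			}
		}
		return fmt.Errorf("host %q and path %q is already defined in ingress %s/%s",
			r.Host, r.Path, existing[0].Namespace, existing[0].Name)
	}
	return nil
}

func main() {
	// Rules already served by the controller (from the reproduction below).
	configured := []pathRef{
		{Namespace: "default", Name: "z-httpbin-1", Host: "httpbin.test.io", Path: "/"},
		{Namespace: "default", Name: "z-helloworld-1", Host: "helloworld.test.io", Path: "/fff"},
	}
	// z-httpbin-1 is updated: rule[0] is its own existing rule, rule[1]
	// collides with z-helloworld-1 -- yet the check reports no overlap.
	updated := ingressLite{
		Namespace: "default",
		Name:      "z-httpbin-1",
		Rules: []pathRef{
			{Host: "httpbin.test.io", Path: "/"},
			{Host: "helloworld.test.io", Path: "/fff"},
		},
	}
	fmt.Println(checkOverlap(updated, configured)) // prints <nil>: the overlap is missed
}
```

Running this prints `<nil>`: because `rules[0]` of the updated z-httpbin-1 matches a rule the same Ingress already owns, the check returns before it ever compares `helloworld.test.io` + `/fff` against z-helloworld-1.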

NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`):

```
NGINX Ingress controller
  Release:       v1.9.6
  Build:         6a73aa3
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.21.6
```

In fact, this issue exists in versions from controller v1.3.1 to controller v1.10.1.

**Kubernetes version** (use `kubectl version`):

```
kubectl version --short
Client Version: v1.21.14
Server Version: v1.21.14
```

Environment:

It is not related to hardware, software, or kernel.

  • How was the ingress-nginx-controller installed:

    ```
    helm ls -A | grep -i ingress
    ingress-nginx-1715823378              default                      3  2024-05-17 09:41:04.638418096 +0800 CST  deployed  ingress-nginx-4.2.5   1.3.1
    kubesphere-router-tableware-ingress   kubesphere-controls-system   1  2024-03-20 11:03:19.310420115 +0800 CST  deployed  ingress-nginx-4.0.13  1.1.0
    ```

```
[root@ ~]# helm get values ingress-nginx-1715823378
USER-SUPPLIED VALUES:
null
```

ingress-nginx-1715823378-controller args:

```
kubectl get deployments.apps ingress-nginx-1715823378-controller -o yaml | head -n 70 | tail -n 26
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-1715823378-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-1715823378-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --v=3
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
```

I manually changed `watchIngressWithoutClass: true` and `extraArgs: v: 3`.
ingress-nginx-1715823378-controller status and validatingwebhookconfiguration:

```
kubectl get po,validatingwebhookconfigurations.admissionregistration.k8s.io -l app.kubernetes.io/instance=ingress-nginx-1715823378
NAME                                                       READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-1715823378-controller-6c8cb66658-nk9vb   1/1     Running   0          62m

NAME                                                                                              WEBHOOKS   AGE
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-1715823378-admission   1          25h
```

```
helm -n kubesphere-controls-system get values kubesphere-router-tableware-ingress
USER-SUPPLIED VALUES:
controller: addHeaders: {} admissionWebhooks: annotations: {} certificate: /usr/local/certificates/cert createSecretJob: resources: {} enabled: false existingPsp: "" failurePolicy: Fail key: /usr/local/certificates/key labels: {} namespaceSelector: {} objectSelector: {} patch: enabled: true image: digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 image: ingress-nginx/kube-webhook-certgen pullPolicy: IfNotPresent registry: k8s.gcr.io tag: v1.1.1 labels: {} nodeSelector: kubernetes.io/os: linux podAnnotations: {} priorityClassName: "" runAsUser: 2000 tolerations: [] patchWebhookJob: resources: {} port: 8443 service: annotations: {} externalIPs: [] loadBalancerSourceRanges: [] servicePort: 443 type: ClusterIP affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - ingress-nginx - key: app.kubernetes.io/instance operator: In values: - kubesphere-router-tableware-ingress - key: app.kubernetes.io/component operator: In values: - controller topologyKey: kubernetes.io/hostname weight: 100 allowSnippetAnnotations: true annotations: servicemesh.kubesphere.io/enabled: "false" autoscaling: behavior: {} enabled: false maxReplicas: 11 minReplicas: 1 targetCPUUtilizationPercentage: 50 targetMemoryUtilizationPercentage: 50 autoscalingTemplate: [] config: faf: qewq configAnnotations: {} configMapNamespace: "" containerName: controller containerPort: http: 80 https: 443 customTemplate: configMapKey: "" configMapName: "" dnsConfig: {} dnsPolicy: ClusterFirst electionID: ingress-controller-leader-kubesphere-router-tableware enableMimalloc: true existingPsp: "" extraArgs: {} extraContainers: [] extraEnvs: [] extraInitContainers: [] extraVolumeMounts: [] extraVolumes: [] healthCheckHost: "" healthCheckPath: /healthz hostNetwork: false hostPort: enabled: false ports: http: 80 https: 443 hostname: {} image: allowPrivilegeEscalation: true digest: "" image: ingress-nginx/controller pullPolicy: IfNotPresent registry: k8s.gcr.io repository: registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller runAsUser: 101 tag: v1.1.0 ingressClass: nginx ingressClassByName: false ingressClassResource: controllerValue: k8s.io/ingress-nginx default: false enabled: false name: nginx parameters: {} keda: apiVersion: keda.sh/v1alpha1 behavior: {} cooldownPeriod: 300 enabled: false maxReplicas: 11 minReplicas: 1 pollingInterval: 30 restoreToOriginalReplicaCount: false scaledObject: annotations: {} triggers: [] kind: Deployment labels: {} lifecycle: preStop: exec: command: - /wait-shutdown livenessProbe: failureThreshold: 5 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 maxmindLicenseKey: "" metrics: enabled: true port: 10254 prometheusRule: additionalLabels: {} enabled: false rules: [] service: annotations: {} externalIPs: [] loadBalancerSourceRanges: [] servicePort: 10254 type: ClusterIP serviceMonitor: additionalLabels: {} enabled: true metricRelabelings: [] namespace: "" namespaceSelector: {} relabelings: [] scrapeInterval: 30s targetLabels: [] minAvailable: 1 minReadySeconds: 0 name: "" nodeSelector: kubernetes.io/os: linux podAnnotations: sidecar.istio.io/inject: false podLabels: {} podSecurityContext: {} priorityClassName: "" proxySetHeaders: {} publishService: enabled: true pathOverride: "" readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 replicaCount: 1 reportNodeInternalIp: false resources: requests: cpu: 100m memory: 90Mi scope: enabled: true namespace: tableware namespaceSelector: "" service: annotations: service.beta.kubernetes.io/qingcloud-load-balancer-eip-ids: "" service.beta.kubernetes.io/qingcloud-load-balancer-type: "0" appProtocol: true enableHttp: true enableHttps: true enabled: true external: enabled: true externalIPs: [] internal: annotations: {} enabled: false loadBalancerSourceRanges: [] ipFamilies: - IPv4 ipFamilyPolicy: SingleStack labels: {} loadBalancerSourceRanges: [] nodePorts: http: "" https: "" tcp: {} udp: {} ports: http: 80 https: 443 targetPorts: http: http https: https type: LoadBalancer sysctls: {} tcp: annotations: {} configMapNamespace: "" terminationGracePeriodSeconds: 300 tolerations: [] topologySpreadConstraints: [] udp: annotations: {} configMapNamespace: "" updateStrategy: {} watchIngressWithoutClass: true
defaultBackend: affinity: {} autoscaling: annotations: {} enabled: false maxReplicas: 2 minReplicas: 1 targetCPUUtilizationPercentage: 50 targetMemoryUtilizationPercentage: 50 containerSecurityContext: {} enabled: false existingPsp: "" extraArgs: {} extraEnvs: [] extraVolumeMounts: [] extraVolumes: [] image: allowPrivilegeEscalation: false image: defaultbackend-amd64 pullPolicy: IfNotPresent readOnlyRootFilesystem: true registry: k8s.gcr.io runAsNonRoot: true runAsUser: 65534 tag: "1.5" labels: {} livenessProbe: failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 minAvailable: 1 name: defaultbackend nodeSelector: kubernetes.io/os: linux podAnnotations: {} podLabels: {} podSecurityContext: {} port: 8080 priorityClassName: "" readinessProbe: failureThreshold: 6 initialDelaySeconds: 0 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 5 replicaCount: 1 resources: {} service: annotations: {} externalIPs: [] loadBalancerSourceRanges: [] servicePort: 80 type: ClusterIP serviceAccount: automountServiceAccountToken: true create: true name: "" tolerations: []
dhParam: null
fullnameOverride: kubesphere-router-tableware
imagePullSecrets: []
podSecurityPolicy: enabled: false
rbac: create: true scope: false
revisionHistoryLimit: 10
serviceAccount: automountServiceAccountToken: true create: true name: ""
tcp: {}
udp: {}
```

I am not sure whether the KubeSphere ingress-nginx controller registers its own validatingwebhookconfiguration, so I checked with `kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io`:
```
[root@ ~]# kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io
NAME                                          WEBHOOKS   AGE
ingress-nginx-1715823378-admission            1          25h
istio-validator-1-11-2-istio-system           2          244d
network.kubesphere.io                         1          244d
notification-manager-validating-webhook      2          244d
openelb-admission                             1          203d
resourcesquotas.quota.kubesphere.io           1          244d
storageclass-accessor.storage.kubesphere.io   1          244d
users.iam.kubesphere.io                       1          244d
```

  • Current State of the controller:
    ```
    [root@ ~]# kubectl get po,svc -n default | grep ingress
    pod/ingress-nginx-1715823378-controller-6c8cb66658-nk9vb   1/1   Running   0   73m
    service/ingress-nginx-1715823378-controller             LoadBalancer   10.233.2.194    80:30655/TCP,443:30108/TCP   25h
    service/ingress-nginx-1715823378-controller-admission   ClusterIP      10.233.46.163   443/TCP                      25h
    ```

```
[root@ ~]# kubectl get po,svc -n kubesphere-controls-system | grep route
pod/kubesphere-router-tableware-555889b574-9n4hq   1/1   Terminating         0   20m
pod/kubesphere-router-tableware-555889b574-m296m   0/1   ContainerCreating   0   3m17s
service/kubesphere-router-tableware           LoadBalancer   10.233.8.94     80:32080/TCP,443:31067/TCP   58d
service/kubesphere-router-tableware-metrics   ClusterIP      10.233.30.197   10254/TCP                    58d
```

Only ingress-nginx-1715823378 in the default namespace is running normally; the ingress controller in the kubesphere-controls-system namespace is not running properly.

  • Current state of ingress object, if applicable: see the screenshots above:
    • after the ingress is updated, `kubectl get ingress`
    • the helloworld ingress is created, showing its info
    • httpbin is created first, then helloworld is applied with the same host + (http) path

  • Others:

    • Any other related information:
      • copy/paste of the snippet (if applicable): no snippet is used
      • `kubectl describe ...` of any custom configmap(s) created and in use: none

How to reproduce this issue:

Install minikube/kind

I am using a cluster created with kubeadm. Sorry, I don't have time to set up kind or minikube, and I think any Kubernetes cluster is enough to reproduce this.

Install the ingress controller

I manually set `watchIngressWithoutClass: true` in charts/ingress-nginx/values.yaml.

helm install --generate-name ./ingress-nginx/ --debug

Install an application that will act as default backend (is just an echo app)

Not used.

Create an ingress (please add any additional annotation required)

echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: z-helloworld-1
namespace: default
spec:
rules:

  • host: helloworld.test.io
    http:
    paths:
    • backend:
      service:
      name: not-important-1
      port:
      number: 5000
      path: /fff
      pathType: ImplementationSpecific

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: z-httpbin-1
namespace: default
spec:
rules:

  • host: httpbin.test.io
    http:
    paths:
    • backend:
      service:
      name: not-important-1
      port:
      number: 8000
      path: /
      pathType: Prefix
      " | kubectl apply -f -

then
echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: z-httpbin-1
namespace: default
spec:
rules:

  • host: httpbin.test.io
    http:
    paths:
    • backend:
      service:
      name: not-important-1
      port:
      number: 8000
      path: /
      pathType: Prefix
  • host: helloworld.test.io
    http:
    paths:
    • backend:
      service:
      name: not-important-1
      port:
      number: 5000
      path: /fff
      pathType: ImplementationSpecific
      " | kubectl apply -f -

make a request

The request itself is not important; what matters is that the subsequent update of z-httpbin-1 is accepted even though it now overlaps with z-helloworld-1.
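For comparison, when the overlap is detected (for example, when the conflicting host+path happens to be evaluated before any rule the updated Ingress already owns), the apply is rejected by the validating webhook. The output below is only an approximation reconstructed from the error string used by checkOverlap, not captured from a live cluster, so the exact wording may differ:

```
# approximate, reconstructed output -- not captured from a real cluster
Error from server (BadRequest): error when applying patch: ...
admission webhook "validate.nginx.ingress.kubernetes.io" denied the request:
host "helloworld.test.io" and path "/fff" is already defined in ingress default/z-helloworld-1
```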

Anything else we need to know:

/kind bug

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label May 16, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. needs-priority labels May 16, 2024
@vincentwu0101 vincentwu0101 changed the title from "heckOverlap is a little loose." to "checkOverlap is a little loose." May 16, 2024
@vincentwu0101
Author

/kind bug

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. and removed needs-kind Indicates a PR lacks a `kind/foo` label and requires one. labels May 16, 2024
@vincentwu0101
Author

vincentwu0101 commented May 16, 2024

One workaround is to place the newly added rule at position rules[0] and move the original rules[0] down to rules[1], similar to head insertion.
When a single rule is added this way, admission does detect the conflict, but when multiple rules are added, the problem can still occur if the check reaches a rule the Ingress already owned before it has looked at all of the new rules.
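The ordering workaround helps because the check walks the rules in order and stops at the first host+path already owned by the Ingress being validated. A possible code-level fix, sketched below against the simplified stand-in types from the sketch in the issue description (an illustration only, not an actual upstream patch), is to skip the matching owner instead of returning early:

```go
// Possible fix, reusing the pathRef/ingressLite/existingFor stand-ins from the
// earlier sketch: ignore only the entry owned by the ingress being validated
// and keep checking every remaining owner and rule. Illustration only.
func checkOverlapFixed(ing ingressLite, configured []pathRef) error {
	for _, r := range ing.Rules {
		for _, e := range existingFor(r.Host, r.Path, configured) {
			if e.Namespace == ing.Namespace && e.Name == ing.Name {
				continue // an ingress may redefine its own host+path
			}
			return fmt.Errorf("host %q and path %q is already defined in ingress %s/%s",
				r.Host, r.Path, e.Namespace, e.Name)
		}
	}
	return nil
}
```

With that change, the second rule of the updated z-httpbin-1 is still compared against z-helloworld-1, and the request is rejected regardless of rule order.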

@longwuyuan
Copy link
Contributor

/remove-kind bug
/kind support

Hi,

Pasting a couple of YAML files and writing a couple of short sentences is not enough for readers to analyze and triage a problem or a bug. It forces readers to guess a lot and to assume multiple factors.

If you can look at the template of a new bug report and edit your issue description to answer those questions, it may provide readers with more data to analyze.

Actually please run the tests in a minikube or a kind cluster. And copy/paste the data that is related to the problem so that readers do not have to guess the config and environment. If your tests and the data you post from your tests shows the real state of the cluster and the resources in the cluster (using kubectl and curl commands), it will reduce the time for a developer to reproduce the problem and look for a solution.

/triage needs-information

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. and removed kind/bug Categorizes issue or PR as related to a bug. labels May 16, 2024
@longwuyuan
Copy link
Contributor

The problem description you provided does not contain the information that is asked in a new bug report template. Your issue description also does not provide any outputs of kubectl commands or curl commands, showing a real live cluster and the resources in the cluster or the response to HTTP requests or the logs of the controller pod for those requests.

So a reader must guess and assume a lot of aspects and that is not helpful as the time and resources are short. Also not many people are reporting the same issue.

So please create a cluster in kind or minikube and do the testing of what you want to report there. Make sure to copy/paste all the information from your tests that shows the bug or the problem, including the output of kubectl describe commands, the controller logs, and curl command outputs with -v. This will help a reader confirm the problem, help others reproduce it, and help verify the solution you test in your cluster.

Once you have posted that information here in the issue description, please feel free to re-open this issue. There are too many open issues with no real information to act on, and it becomes harder to track actionable bug reports. So I will close this issue now for you to re-open after you have posted the requested information.

/close

@k8s-ci-robot
Copy link
Contributor

@longwuyuan: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
