LimitRange added in 0.26.2 breaks default cert-manager Issuer configuration #5210

Closed
edmorley opened this issue Mar 4, 2020 · 7 comments · Fixed by #5314
edmorley commented Mar 4, 2020

Hi :-)

NGINX Ingress controller version: master (currently at 99419c7)

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:18:29Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • OS: OS X 10.14.6
  • Others: Docker Desktop for Mac v2.2.2.0 (bundles Kubernetes 1.16.5)

What happened:

In #4843 (released in v0.26.2), a new LimitRange was applied to the ingress-nginx namespace in order to fix #4735. This LimitRange causes cert-manager Issuers to fail in their default configuration with errors like:

$ kubectl logs cert-manager-64b6c865d9-nrgs6 -n cert-manager --tail=10 | grep forbidden
E0304 19:47:13.991764       1 controller.go:131] cert-manager/controller/challenges "msg"="re-queuing item  due to error processing" "error"="pods \"cm-acme-http-solver-lqh4d\" is forbidden: [minimum cpu usage per Container is 100m, but request is 10m, minimum memory usage per Container is 90Mi, but request is 64Mi]" "key"="ingress-nginx/example-com-171095458-1482957574-2748057784"
$ kubectl describe challenges -A
Name:         example-com-171095458-1482957574-2748057784
Namespace:    ingress-nginx
...
Events:
  Type     Reason        Age                     From          Message
  ----     ------        ----                    ----          -------
  Normal   Started       25m                     cert-manager  Challenge scheduled for processing
  Warning  PresentError  25m                     cert-manager  Error presenting challenge: pods "cm-acme-http-solver-gvhtg" is forbidden: [minimum cpu usage per Container is 100m, but request is 10m, minimum memory usage per Container is 90Mi, but request is 64Mi]

The Certificate (and the pod created by the Issuer) has to be created in the ingress-nginx namespace, so that the TLS secret is created in that namespace (since ingress-nginx will need to access the secret).

If the LimitRange is deleted (eg kubectl delete limitrange ingress-nginx -n ingress-nginx), these pod scheduling errors go away.
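
For context, the LimitRange in question enforces per-container minimums along these lines (a sketch reconstructed from the error messages above; the exact manifest is the one added in #4843):

  apiVersion: v1
  kind: LimitRange
  metadata:
    name: ingress-nginx
    namespace: ingress-nginx
  spec:
    limits:
    - type: Container
      min:
        cpu: 100m
        memory: 90Mi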

What you expected to happen:

For the default ingress-nginx configuration to work with the default cert-manager Issuer configuration's pod request limits.

Specifically, cert-manager already appears to do the right thing by setting resource requests on the solver pods - and the requested resources are actually lower than the LimitRange minimums, which seems like a good thing rather than something that should be prevented?
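
For reference, the solver pods' resource requests as reported in the error message are:

  resources:
    requests:
      cpu: 10m
      memory: 64Mi

Both fall below the LimitRange minimums of 100m CPU and 90Mi memory, so the pods are rejected at admission time.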

How to reproduce it:

  1. Install/start Kubernetes via Docker for Mac
  2. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
  3. kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.13.1/cert-manager.yaml
  4. Wait a few minutes until kubectl get deployment cert-manager-webhook -n cert-manager reports the webhook as available (this takes a while, see cert-manager/cert-manager#2537; the helper commands after this list can be used here).
  5. Create a ClusterIssuer (per cert-manager docs):
    (Making sure to substitute in a valid email address)
    echo "
      apiVersion: cert-manager.io/v1alpha2
      kind: ClusterIssuer
      metadata:
        name: cert-issuer
      spec:
        acme:
          email: *YOUR EMAIL HERE*
          server: https://acme-staging-v02.api.letsencrypt.org/directory
          privateKeySecretRef:
            name: example-issuer-account-key
          solvers:
          - http01:
              ingress:
                class: nginx
    " | kubectl apply -f -
  6. Create a Certificate (per cert-manager docs):
    echo "
      apiVersion: cert-manager.io/v1alpha2
      kind: Certificate
      metadata:
        name: example-com
        namespace: ingress-nginx
      spec:
        secretName: example-com-tls
        dnsNames:
        - example.com
        issuerRef:
          name: cert-issuer
          kind: ClusterIssuer
    " | kubectl apply -f -
  7. kubectl describe challenges -A
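
Two helper commands for the steps above (a sketch; the deployment and namespace names assume the default manifests from steps 2-3): the first waits for the webhook in step 4, the second confirms the LimitRange that causes the failure is present:

  kubectl wait --for=condition=Available deployment/cert-manager-webhook -n cert-manager --timeout=300s
  kubectl get limitrange -n ingress-nginx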

/kind bug

edmorley added the kind/bug label on Mar 4, 2020
aledbf commented Mar 4, 2020

@edmorley thank you for the report.

We will adjust the LimitRange to apply only to the ingress controller pod, not everything in the same namespace.

Edit: that said, I run several clusters using ingress-nginx and cert-manager without this problem.

aledbf commented Mar 4, 2020

@edmorley why are you creating a Certificate in the same namespace as the ingress controller?

edmorley commented Mar 5, 2020

@aledbf Hi! Thank you for the fast reply :-)

We are creating the Certificate in the same namespace as the ingress controller, since we use the --default-ssl-certificate option with the controller, and my understanding was that pods can only access secrets in the same namespace as the pod. As such, the secret needs to be created in the ingress-nginx namespace, which means the Certificate needs to be in that namespace too.
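
For context, the flag takes a namespace/secret-name reference, so the controller is started roughly like this (a sketch; the secret name matches the Certificate's secretName above):

  args:
    - /nginx-ingress-controller
    - --default-ssl-certificate=ingress-nginx/example-com-tls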

Or is that not correct?

One of the problems is that there is a gap between the topics covered by the ingress-nginx and cert-manager docs -- and "how one should make use of namespaces when using both" doesn't seem to be covered anywhere? :-)

aledbf commented Mar 5, 2020

Or is that not correct?

This applies to secrets referenced by Ingresses. The default SSL certificate is a special case: you can put that certificate in any namespace (see the example below). That said, this is still a bug.
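
In other words, since the flag accepts a namespace/secret reference, the Certificate and its secret could live in another namespace (a sketch, using cert-manager as an example namespace) and be referenced as:

  --default-ssl-certificate=cert-manager/example-com-tls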

alpeb commented Apr 1, 2020

Hi,

I just wanted to point out this is affecting Linkerd as well, which attempts to inject an init-container into the ingress-nginx controller pod. That container only requests 10m CPU and 10Mi memory, so the injection is rejected because of this LimitRange. I'm guessing other service meshes and other projects using sidecars are hitting the same problem.

aledbf commented Apr 1, 2020

The next version removes the LimitRange and adds the resource definitions directly to the yaml files.
WIP of the new deployment files is #5313
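
As a sketch of the direction this takes, the requests would be set on the controller container itself instead of via a namespace-wide LimitRange (the values and container name below are assumptions based on the old LimitRange minimums; the actual manifests are in #5313):

  containers:
  - name: nginx-ingress-controller
    resources:
      requests:
        cpu: 100m
        memory: 90Mi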

aledbf added this to issues in 0.31.0 on Apr 1, 2020
albertsebastian1 commented

kubectl get LimitRange -n <namespace_name>
kubectl edit LimitRange <LimitRange_name> -n <namespace_name>

Change the minimum limits to values below the failing thresholds, then delete and recreate the affected cert-manager resources (see the example below).
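
For example (a sketch; the minimums just need to sit at or below the solver pod's 10m CPU / 64Mi memory requests), the LimitRange spec could be lowered to:

  spec:
    limits:
    - type: Container
      min:
        cpu: 10m
        memory: 64Mi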

I spent a good amount of time on this one: cert-manager Error presenting challenge: pods "cm-acme-http-solver-jkvzm" is forbidden: [minimum cpu usage per Container is 100m, but request is 10m, minimum memory usage per Container is 90Mi, but request is 64Mi]
