
When a pod is recreated, a new CertificateRequest is created -> Let's Encrypt rate limits are exceeded very fast #75

Closed
kseniyashaydurova opened this issue Dec 16, 2021 · 4 comments

kseniyashaydurova commented Dec 16, 2021

Hi, we've run into the following behavior:

Every time the Pod is recreated (restarted), a new CertificateRequest and Order are created. This happens even if the csi.cert-manager.io/dns-names of the previously killed pod stays the same (i.e. we have already obtained a certificate for this domain).

This behavior is inconvenient, because a pod may need to be recreated many times during development, which exceeds the certificate authority's rate limits very quickly. For example, Let's Encrypt allows only 50 certificates per registered domain per week (https://letsencrypt.org/docs/rate-limits/).

So what can we do in such a case to avoid the certificate being recreated on every pod restart? If nothing can be done currently, I suggest adding a feature that fixes this behavior along the lines of the cert-manager approach: in cert-manager, certificates are stored in Kubernetes, and if a new CertificateRequest arrives for a domain that already has a certificate in K8s, no real request is made to the certificate authority and the stored certificate is reused.
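For reference, the cert-manager approach described above could look roughly like this (a sketch, reusing the `letsencrypt-prod` ClusterIssuer from the example below; the `backend-tls` Secret name is an assumption):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: backend
spec:
  # The keypair is persisted in this Secret and survives pod restarts;
  # cert-manager renews it before expiry without a new ACME order per pod.
  secretName: backend-tls
  dnsNames:
    - example.com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
```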

Example of configuring tls volume for pod:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app.kubernetes.io/component: backend
  annotations:
    argocd.argoproj.io/sync-wave: "10"
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app.kubernetes.io/component: backend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: backend
    spec:
      securityContext:
        fsGroup: 1000

      containers:
        - name: nginx
          image: nginx:1.19.10
          imagePullPolicy: IfNotPresent
          command: ["nginx"]
          livenessProbe:
            tcpSocket:
              port: 443
          readinessProbe:
            tcpSocket:
              port: 443
            initialDelaySeconds: 60
            timeoutSeconds: 5
          ports:
            - name: https
              containerPort: 443
              protocol: TCP
          resources:
            limits:
              cpu: 200m
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: tls
              mountPath: "/tls"

      volumes:
      - name: tls
        csi:
          readOnly: true
          driver: csi.cert-manager.io
          volumeAttributes:
            csi.cert-manager.io/issuer-kind: ClusterIssuer
            csi.cert-manager.io/issuer-name: letsencrypt-prod
            csi.cert-manager.io/dns-names: example.com
```
munnerz (Member) commented Dec 16, 2021

This is as expected. The CSI driver works by submitting a CertificateRequest to the cert-manager API, which, as you've noted, results in a new Order being made with the Let's Encrypt server.

Typically, users don't use the CSI driver for certificates from a public authority like this. Instead, they use some form of private authority (e.g. the 'CA', 'Vault', 'Venafi' or even 'selfsigned' issuers).

Given that the CSI driver also generates the private key data upon startup, there is no way to have a single reusable certificate across pod restarts unless the private key were also stored somewhere. At that point, as you've already mentioned, you're basically doing the same thing as using a 'Certificate' resource and storing that keypair in a Secret.

If you wanted to experiment in this space at a lower level, and possibly create your own private key & certificate distribution mechanisms, you could look at building your own CSI driver and Issuer using the csi-lib project. However, this isn't something we're aiming to support here, at least for now.
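For completeness, consuming the keypair from the Secret that a Certificate resource maintains (instead of using the CSI volume) would change only the `volumes` section of the Deployment above. A sketch, where `backend-tls` is an assumed Secret name:

```yaml
      volumes:
      - name: tls
        secret:
          # Populated and renewed by a cert-manager Certificate resource;
          # pod restarts reuse the existing keypair instead of placing a new order.
          secretName: backend-tls
```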

kseniyashaydurova (Author) commented Dec 16, 2021

@munnerz OK! Thank you for the clarification! And one more question related to cert-manager.

We are trying to create an internal application which has no external 'Ingress', only a 'Service' of the internal Load Balancer type (i.e. we have a Load Balancer for our service that sends traffic directly to the Pod). Can we create a Certificate manually in Kubernetes for such an entity (i.e. a Service) the way cert-manager does? If we can, would it also be auto-renewed by the default cert-manager mechanism?

munnerz (Member) commented Dec 16, 2021

Yes, though you'll need to use something like DNS01 to validate that you own the domain, since cert-manager relies on manipulating Ingress resources to solve HTTP01 challenges :) See https://cert-manager.io/docs/configuration/acme/dns01/
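A minimal sketch of a ClusterIssuer with a DNS01 solver, using Cloudflare as one example provider (the email, Secret names, and key are placeholders to fill in):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-dns-account-key  # ACME account key, created by cert-manager
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token  # Secret holding a Cloudflare API token
              key: api-token
```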

If you've got more questions, it may be best to ask over in the #cert-manager channel on kubernetes.slack.com, where you'll hopefully get a lot more opinions/experiences to help you along the way 😄

kseniyashaydurova (Author) commented Dec 16, 2021

Thank you so much! :)
