
Document HTTPS with the built-in Traefik (LetsEncrypt and existing certs) #117

Closed · flxs opened this issue Mar 2, 2019 · 43 comments
Labels: kind/documentation (Improvements or additions to documentation)

@flxs

flxs commented Mar 2, 2019

Is your feature request related to a problem? Please describe.
I can't seem to find a way to get existing certs into the container, or to allow LetsEncrypt certificates to survive pod termination, other than writing my own Traefik deployment to add a PersistentVolume or deploying Consul alongside. It would be neat to have documentation on the "proper" way of doing this (I assume there is one, and I'm just not knowledgeable enough about Kubernetes to find it).

Describe the solution you'd like
Documentation covering HTTPS with the built-in Traefik, preferably with existing certificates and with LetsEncrypt.

Describe alternatives you've considered
I could disable the built-in Traefik and roll my own, or run Consul alongside, but both seem like a lot of effort for something that feels like a base requirement in a great many use cases.

Additional context
None

@ibuildthecloud
Contributor

Have you tried https://github.com/jetstack/cert-manager?

@flxs
Author

flxs commented Mar 3, 2019

That looks very helpful, especially with multi-node clusters; having certs in secrets definitely makes distributing them easier, but I'm still unsure how to get those Kubernetes secrets into the built-in Traefik. Would I have to disable the built-in Traefik and deploy my own to mount the certificate secrets into the container? I can't think of another way to do it, am I missing something obvious?

@flxs
Author

flxs commented Mar 4, 2019

Ok, so cert-manager seems to be the way to go, and there seems to be a way of handing certificates into Traefik by means of Ingress attributes, as described in the Traefik docs.
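
If I understand the docs correctly, a minimal sketch of what that looks like (hypothetical names; the secret just needs to hold a tls.crt/tls.key pair, e.g. one produced by cert-manager or created by hand):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app                        # hypothetical name
  namespace: default
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls # TLS secret containing tls.crt and tls.key
  rules:
    - host: app.example.com
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: 80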

I can't get cert-manager to work on k3s, though. The instructions in the cert-manager docs for installing via helm chart leave me with a cert-manager-webhook pod saying Error: configmaps "extension-apiserver-authentication" not found. Applying an issuer manifest fails due to the webhook not being reachable.

@aaronkjones

I attempted the same and my failure led me here. I will try with "--no-deploy traefik".

@epicfilemcnulty
Contributor

@flxs For what it's worth, you can install cert-manager without webhook: helm install --name cert-manager --namespace cert-manager stable/cert-manager --set webhook.enabled=false.

@aaronkjones

aaronkjones commented Mar 8, 2019

FYI this does not seem to be armhf compatible.
An arm image exists: quay.io/jetstack/cert-manager-controller-arm

@mashedcode

mashedcode commented Mar 14, 2019

Since cert-manager is an essential component I wonder how I'm supposed to get my Let's Encrypt stuff working on k3s without it.
IIUC cert-manager with webhook does not work since authorization/v1beta1 got removed from k3s.

I've got no clue what the implications of running it without the webhook are, but the config below still deploys the webhook, so I must certainly have misconfigured it.

apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  chart: stable/cert-manager
  set:
    webhook.enabled: "false"

Anyone got a working workaround? Blog post appreciated.

@erikwilson added the kind/documentation (Improvements or additions to documentation) and help wanted labels on Mar 25, 2019
@lentzi90

I am successfully running cert-manager from the jetstack helm repository. The chart is located here.

To get it to work on arm64 I'm using these values:

image:
  repository: quay.io/jetstack/cert-manager-controller-arm64
webhook:
  image:
    repository: quay.io/jetstack/cert-manager-webhook-arm64
cainjector:
  image:
    repository: quay.io/jetstack/cert-manager-cainjector-arm64
extraArgs:
  - --acme-http01-solver-image=quay.io/jetstack/cert-manager-acmesolver-arm64:v0.7.0

The extraArgs part is needed because of this issue and should not be needed with the next release.
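
If it helps, a values snippet like the one above can be saved to a file and passed to the chart with -f; roughly (the filename values-arm64.yaml is arbitrary):

helm install --name cert-manager --namespace cert-manager --version v0.7.0 \
  jetstack/cert-manager -f values-arm64.yaml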

@giovannicandido

Creating a ClusterIssuer produces an error:

Error from server (InternalError): error when creating "issuer.yaml": Internal error occurred: failed calling webhook "clusterissuers.admission.certmanager.k8s.io": an error on the server ("Internal Server Error: "/apis/admission.certmanager.k8s.io/v1beta1/clusterissuers": the server could not find the requested resource") has prevented the request from succeeding

Configs:

kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  chart: cert-manager
  version: v0.7.0
  targetNamespace: cert-manager
  repo: https://charts.jetstack.io
  set:
    ingressShim.defaultIssuerName: letsencrypt-prod
    ingressShim.defaultIssuerKind: ClusterIssuer
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: user@domain.com
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}

Changing from ClusterIssuer to a normal (namespaced) Issuer works.
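
For reference, the namespaced variant that worked is the same spec with only the kind changed (the target namespace here is an example):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: default   # namespace where the certificates will be requested
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: user@domain.com
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}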

@giovannicandido

cert-manager webhook logs:

logging error output: "Internal Server Error: \"/apis/admission.certmanager.k8s.io/v1beta1?timeout=32s\": the server could not find the requested resource\n"
 [k3s/v1.13.5 (linux/amd64) kubernetes/256ea73 10.42.0.1:58750]
I0402 20:59:59.820279       1 request.go:942] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/apis/admission.certmanager.k8s.io/v1beta1","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}
I0402 20:59:59.820401       1 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjZXJ0LW1hbmFnZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiY2VydC1tYW5hZ2VyLXdlYmhvb2stdG9rZW4tMnhnOXEiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2VydC1tYW5hZ2VyLXdlYmhvb2siLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2OWMyZjE3MC01NTg5LTExZTktYmU3ZS1lMjc5ZDY0Nzg1MGIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2VydC1tYW5hZ2VyOmNlcnQtbWFuYWdlci13ZWJob29rIn0.HNaElcVENOmrGRV6qsAVUi9gT-h3vWMPbyZZyVPqPNxpsAWRonOHHEFGgha_fDfXj0ZEWU1wE8gfzY19u3ZwdNAsnr4k_5WzWd5-O6MemIPxxCtvjubw-KFp4_X0Y42U3xDID6Joa1INA3xMyhizIozUBVzaoNj1Hx0dl8uyGM7FtjtDgM1cT2RQVDp8LFsyVVWMTTRxGXL-E8JWzzGulEPMCsYlNwOoJ3Dq7NpDR0ONB4tEbde6k0EbMsXvXbUV1Kj9zBSHU3pN-KiYrauACI5yAWwBWO6O9WOK1wpRSFbiyj8L4Ez0Dan5b9P8x1Q50VyZzt8hpoBs8JDZZmKOEw" -H "User-Agent: image.app_linux-amd64.binary/v0.0.0 (linux/amd64) kubernetes/$Format" 'https://10.43.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews'
I0402 20:59:59.821630       1 round_trippers.go:438] POST https://10.43.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews 404 Not Found in 1 milliseconds
I0402 20:59:59.821644       1 round_trippers.go:444] Response Headers:
I0402 20:59:59.821648       1 round_trippers.go:447]     Content-Length: 174
I0402 20:59:59.821652       1 round_trippers.go:447]     Content-Type: application/json
I0402 20:59:59.821656       1 round_trippers.go:447]     Date: Tue, 02 Apr 2019 20:59:59 GMT
I0402 20:59:59.821696       1 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server could not find the requested resource","reason":"NotFound","details":{},"code":404}
E0402 20:59:59.821789       1 webhook.go:192] Failed to make webhook authorizer request: the server could not find the requested resource
E0402 20:59:59.821941       1 errors.go:77] the server could not find the requested resource
I0402 20:59:59.821969       1 wrap.go:47] GET /apis/admission.certmanager.k8s.io/v1beta1?timeout=32s: (1.915603ms) 500
goroutine 428 [running]:
github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0003962a0, 0x1f4)
        vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:204 +0xd2
github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0003962a0, 0x1f4)
        vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:183 +0x35
github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00023fe00, 0x1f4)
        vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:205 +0xaf
net/http.Error(0x7f2b1ceea2c0, 0xc00000c320, 0xc0000886c0, 0x81, 0x1f4)
        GOROOT/src/net/http/server.go:1976 +0xda
github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.InternalError(0x7f2b1ceea2c0, 0xc00000c320, 0xc000467c00, 0x7f2b1ceea440, 0xc0002e80e0)
        vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/errors.go:75 +0x126
github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f2b1ceea2c0, 0xc00000c320, 0xc000467c00)
        vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:69 +0x1ed
net/http.HandlerFunc.ServeHTTP(0xc0003dea80, 0x7f2b1ceea2c0, 0xc00000c320, 0xc000467c00)
        GOROOT/src/net/http/server.go:1964 +0x44
github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f2b1ceea2c0, 0xc00000c320, 0xc000467c00)
        vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x434
net/http.HandlerFunc.ServeHTTP(0xc000609410, 0x7f2b1ceea2c0, 0xc00000c320, 0xc000467c00)
        GOROOT/src/net/http/server.go:1964 +0x44
github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f2b1ceea2c0, 0xc00000c320, 0xc000467c00)
        vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1eeb
net/http.HandlerFunc.ServeHTTP(0xc0003deac0, 0x7f2b1ceea2c0, 0xc00000c320, 0xc000467c00)
        GOROOT/src/net/http/server.go:1964 +0x44
github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f2b1ceea2c0, 0xc00000c320, 0xc000467b00)
        vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x476
net/http.HandlerFunc.ServeHTTP(0xc0004c70e0, 0x7f2b1ceea2c0, 0xc00000c320, 0xc000467b00)
        GOROOT/src/net/http/server.go:1964 +0x44
github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0004a2360, 0xc0004c1ce0, 0x196a2a0, 0xc00000c320, 0xc000467b00)
        vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:108 +0xb3
created by github.com/jetstack/cert-manager/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
        vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:97 +0x1b0

logging error output: "Internal Server Error: \"/apis/admission.certmanager.k8s.io/v1beta1?timeout=32s\": the server could not find the requested resource\n"
 [k3s/v1.13.5 (linux/amd64) kubernetes/256ea73/controller-discovery 10.42.0.1:58750]

@giovannicandido

Update: Issuer works because I removed the webhook validation from the namespace. ClusterIssuer is still validated.
I tried to disable it in the Helm installation, but it does not disable the thing :-). Will check that later.
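
For anyone following along, the per-namespace opt-out referred to here is the disable-validation label from the cert-manager docs, e.g.:

kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true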

@m4rcu5

m4rcu5 commented Apr 6, 2019

I tried to disable it in the Helm installation, but it does not disable the thing :-). Will check that later.

I'd like to chime in here.

I am also trying to install cert-manager without the webhook, which should be possible by passing --set webhook.enabled=false to the Helm installer.

So I took @giovannicandido's snippet as a base (which he posted right around the time I figured out from the source code what the HelmChart variables were; some docs around that feature would be nice 😄).

apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  chart: cert-manager
  version: v0.7.0
  targetNamespace: cert-manager
  repo: https://charts.jetstack.io

And to disable the webhook, I added:

set:
  webhook.enabled: false

This results in nothing at all. The HelmChart does not even seem to be processed, but no error is thrown either.

Next up I tried:

set:
  webhook.enabled: "false"

This was interpreted by k3s as a string and resulted in the following log lines in the helm-install pod:

+ helm install --name cert-manager cert-manager --namespace cert-manager --repo https://charts.jetstack.io --version v0.7.0 --set-string webhook.enabled=false
2019/04/06 14:51:57 Warning: Condition path 'webhook.enabled' for chart webhook returned non-bool value

I have also tried with 0 as argument, but the same warning applies.

Is there any way to pass a boolean value to the Helm installer process using the k3s.cattle.io/v1 API?
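
One way around this, used in later comments, is to skip set: entirely and pass the values as raw YAML via valuesContent, which keeps booleans as booleans; a sketch based on the chart above:

apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  chart: cert-manager
  version: v0.7.0
  targetNamespace: cert-manager
  repo: https://charts.jetstack.io
  valuesContent: |-
    webhook:
      enabled: false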

@codingric

codingric commented Apr 16, 2019

I am also trying to get cert-manager to run on k3s, after following the instructions on https://docs.cert-manager.io/en/latest/getting-started/install.html#installing-with-helm I was able to:

  1. Apply CRDs:
    kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/cert-manager.yaml
  2. install cert-manager via helm (disabling webhooks):
    helm install --name cert-manager --namespace cert-manager --version v0.7.0 jetstack/cert-manager --set webhook.enabled=false
  3. Apply Issuer (ClusterIssuer always fails)

After updating my Ingress to use the newly configured Issuer I can see the following error in the logs:
ingress-shim controller: Re-queuing item "<namespace>/<ingress-name>" due to error processing: Internal error occurred: failed calling webhook "certificates.admission.certmanager.k8s.io": an error on the server ("Internal Server Error: \"/apis/admission.certmanager.k8s.io/v1beta1/certificates\": the server could not find the requested resource") has prevented the request from succeeding
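
For context, the only change to the Ingress was adding the ingress-shim annotation and a tls section; roughly (names are placeholders, and the annotation key is the one used by the v0.7-era API group, if I recall correctly):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    certmanager.k8s.io/issuer: my-issuer   # the Issuer created in step 3
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls      # ingress-shim creates/maintains this secret
  rules:
    - host: app.example.com
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: 80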

@jaigouk

jaigouk commented Apr 17, 2019

Hello,

I have been trying to install cert-manager and monitoring this thread and this one. I also referenced this issue in jetstack/cert-manager repo.

I have 4 tinker boards and I updated k3s to v0.4.0 yesterday.

I put the following YAML at /var/lib/rancher/k3s/server/manifests/cert-manager.yml (using sudo su to access that dir). After that, I rebooted my Tinker Board to make sure that k3s picks up the file.

apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  chart: stable/cert-manager
  valuesContent: |-
    image:
      repository: quay.io/jetstack/cert-manager-controller-arm
      tag: v0.7.0
      pullPolicy: IfNotPresent
    webhook:
      enabled: false

Here is the output from the helm-install-cert-manager job in the kube-system namespace:

NAMESPACE: kube-system
STATUS: DEPLOYED
RESOURCES:
==> v1/ServiceAccount
NAME          SECRETS  AGE
cert-manager  1        1s
==> v1beta1/ClusterRole
NAME          AGE
cert-manager  1s
==> v1/ClusterRole
NAME               AGE
cert-manager-view  1s
cert-manager-edit  1s
==> v1beta1/ClusterRoleBinding
NAME          AGE
cert-manager  1s
==> v1beta1/Deployment
NAME          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
cert-manager  1        0        0           0          0s
==> v1/Pod(related)
NAME                           READY  STATUS   RESTARTS  AGE
cert-manager-666775646b-wm28f  0/1    Pending  0         0s
NOTES:
cert-manager has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.readthedocs.io/en/latest/reference/issuers.html
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.readthedocs.io/en/latest/reference/ingress-shim.html
**This Helm chart is deprecated**.
All future changes to the cert-manager Helm chart should be made in the
official repository: https://github.com/jetstack/cert-manager/tree/master/deploy.
The latest version of the chart can be found on the Helm Hub: https://hub.helm.sh/charts/jetstack/cert-manager.
+ exit

And the deployment status says that it is healthy.


@lentzi90

I cannot reproduce the webhook issue. Is it only a problem if you install the chart the "k3s way"? (By putting it in /var/lib/rancher/k3s/server/manifests/cert-manager.yml?)

This is how I install it with webhooks.

# Install helm first by downloading the binary from their release page: https://github.com/helm/helm/releases

# Create service account and RBAC resources for tiller
kubectl apply -f - <<EOF                                                                                                                                                                                                                                   
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system                                                                                            
EOF

# Initialize tiller
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm init --service-account tiller

# The following commands are directly from the cert-manager installation guide for helm
# https://docs.cert-manager.io/en/latest/getting-started/install.html#steps

# Install the CustomResourceDefinition resources separately
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install \
  --name cert-manager \
  --namespace cert-manager \
  --version v0.7.0 \
  jetstack/cert-manager

# Wait for the pods to be ready here...
kubectl get pods --namespace cert-manager

# Create certificates, Issuer and ClusterIssuer to test deployment
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager-test
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: cert-manager-test
spec:
  selfSigned: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  commonName: example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: test-selfsigned-cluster
spec:
  selfSigned: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: selfsigned-cert-cluster
  namespace: cert-manager-test
spec:
  commonName: example.com
  secretName: selfsigned-cert-tls-cluster
  issuerRef:
    name: test-selfsigned-cluster
    kind: ClusterIssuer
EOF

# Check that certs are issued
kubectl describe certificate -n cert-manager-test

@codingric

codingric commented Apr 24, 2019

Hi @lentzi90,

I followed the same steps. cert-manager installs correctly, but never issues a certificate.

kubectl apply -f - <<EOF
> apiVersion: v1
> kind: Namespace
> metadata:
>   name: cert-manager-test
> ---
> apiVersion: certmanager.k8s.io/v1alpha1
> kind: Issuer
> metadata:
>   name: test-selfsigned
>   namespace: cert-manager-test
> spec:
>   selfSigned: {}
> ---
> apiVersion: certmanager.k8s.io/v1alpha1
> kind: Certificate
> metadata:
>   name: selfsigned-cert
>   namespace: cert-manager-test
> spec:
>   commonName: example.com
>   secretName: selfsigned-cert-tls
>   issuerRef:
>     name: test-selfsigned
> ---
> apiVersion: certmanager.k8s.io/v1alpha1
> kind: ClusterIssuer
> metadata:
>   name: test-selfsigned-cluster
> spec:
>   selfSigned: {}
> ---
> apiVersion: certmanager.k8s.io/v1alpha1
> kind: Certificate
> metadata:
>   name: selfsigned-cert-cluster
>   namespace: cert-manager-test
> spec:
>   commonName: example.com
>   secretName: selfsigned-cert-tls-cluster
>   issuerRef:
>     name: test-selfsigned-cluster
>     kind: ClusterIssuer
> EOF
namespace/cert-manager-test created
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "issuers.admission.certmanager.k8s.io": an error on the server ("Internal Server Error: \"/apis/admission.certmanager.k8s.io/v1beta1/issuers\": the server could not find the requested resource") has prevented the request from succeeding
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "certificates.admission.certmanager.k8s.io": an error on the server ("Internal Server Error: \"/apis/admission.certmanager.k8s.io/v1beta1/certificates\": the server could not find the requested resource") has prevented the request from succeeding
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "clusterissuers.admission.certmanager.k8s.io": an error on the server ("Internal Server Error: \"/apis/admission.certmanager.k8s.io/v1beta1/clusterissuers\": the server could not find the requested resource") has prevented the request from succeeding
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "certificates.admission.certmanager.k8s.io": an error on the server ("Internal Server Error: \"/apis/admission.certmanager.k8s.io/v1beta1/certificates\": the server could not find the requested resource") has prevented the request from succeeding

@lentzi90

I was able to reproduce the same error you got @thedirtymexican, but only when running all the commands at once with a script. To fix it, I simply reran the last part (after the create namespace step), and then it worked.

I was also able to reproduce this problem in minikube, so it doesn't seem to be related to k3s.

This seems to be a timing issue to me: if the Issuer is created too soon, it fails. It is not enough to wait for the pods to become ready either; I tried that and still got the error.
It is curious, though, that the issuers the Helm chart installs always work...

I also managed to get another error while testing, where it succeeds with the first cert and fails with the second:

namespace/cert-manager-test created
issuer.certmanager.k8s.io/test-selfsigned created
certificate.certmanager.k8s.io/selfsigned-cert created
clusterissuer.certmanager.k8s.io/test-selfsigned-cluster created
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "certificates.admission.certmanager.k8s.io": 0-length response with status code: 200 and content type: text/plain; charset=utf-8

To help with debugging I created a Vagrantfile and a script for quickly installing cert-manager, issuer and certificate.
See below.
(Github wouldn't allow me to upload without changing to .txt, please remove the suffix before using.)
Vagrantfile.txt
cert-manager-test.sh.txt

@dewet22

dewet22 commented May 14, 2019

I've not managed to repro this myself on a brand new k3s cluster; it just seems to work out of the box if you follow their instructions, most importantly step 1! My steps were:

  1. Install the CRDs first via kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml

  2. Install the HelmChart using the jetstack repo without any modifications:

apiVersion: k3s.cattle.io/v1
kind: HelmChart
metadata:
  namespace: kube-system
  name: cert-manager
spec:
  chart: cert-manager
  repo: https://charts.jetstack.io
  targetNamespace: cert-manager

  3. Check everything comes up as expected:

$ kubectl -n cert-manager get all
NAME                                           READY   STATUS    RESTARTS   AGE
pod/cert-manager-77844c9b4d-mggsd              1/1     Running   0          48m
pod/cert-manager-cainjector-78bbcdc47c-m9cwh   1/1     Running   0          48m
pod/cert-manager-webhook-79d48667bd-vt4pw      1/1     Running   0          48m

NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/cert-manager-webhook   ClusterIP   10.43.15.15   <none>        443/TCP   48m

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cert-manager              1/1     1            1           48m
deployment.apps/cert-manager-cainjector   1/1     1            1           48m
deployment.apps/cert-manager-webhook      1/1     1            1           48m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/cert-manager-77844c9b4d              1         1         1       48m
replicaset.apps/cert-manager-cainjector-78bbcdc47c   1         1         1       48m
replicaset.apps/cert-manager-webhook-79d48667bd      1         1         1       48m

  4. Create my ClusterIssuer and relevant secrets to talk to the cloud providers for DNS01 verification, and verify the ACME registration worked. Describing the object should show a summary like:
$ kubectl describe clusterissuer letsencrypt-staging
...
Status:
  Acme:
    Uri:  https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxx
  Conditions:
    Last Transition Time:  2019-05-14T16:04:08Z
    Message:               The ACME account was registered with the ACME server
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready

  5. Create a Certificate referencing the previous ClusterIssuer and DNS provider, and wait for it to complete (it took 8m in my case; you can watch the cert-manager logs using -f if you want to see what the hold-up is, as generally it spends its time waiting for DNS propagation):
$ kubectl describe certificate xxx
...
Status:
  Conditions:
    Last Transition Time:  2019-05-14T16:23:56Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2019-08-12T15:23:55Z
Events:
  Type     Reason              Age   From          Message
  ----     ------              ----  ----          -------
  Normal   Generated           23m   cert-manager  Generated new private key
  Normal   GenerateSelfSigned  23m   cert-manager  Generated temporary self signed certificate
  Normal   OrderCreated        23m   cert-manager  Created Order resource "xxx-3383905340"
  Warning  CreateError         23m   cert-manager  Failed to create Order resource: orders.certmanager.k8s.io "xxx-3383905340" already exists
  Normal   OrderComplete       15m   cert-manager  Order "xxx-3383905340" completed successfully
  Normal   CertIssued          15m   cert-manager  Certificate issued successfully
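
The ClusterIssuer spec itself isn't shown above; under the newer cert-manager.io/v1 API (used further down this thread), a DNS01 ClusterIssuer for Cloudflare looks roughly like this sketch (email, secret and key names are placeholders):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token   # Secret in the cert-manager namespace
              key: api-token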

@lpil

lpil commented May 27, 2019

For a single-node cluster it would seem sufficient to enable the built-in Let's Encrypt integration that Traefik has and save the cert in a persistent volume. This way no additional components are required.

Could this be something the install script optionally does? It looks like the helm chart used supports this.

@ljani

ljani commented May 27, 2019

I'm using Traefik for ssl termination on my single node cluster and I indeed found it much simpler than cert-manager, because the current cert-manager helm chart requires you to create some CRDs manually before deployment. Here are some quick notes on using Traefik:

  • Pass --no-deploy traefik to k3s
  • Here's my helm umbrella chart configuration (values.yaml):
traefik:
    rbac:
        enabled: true
    dashboard:
        enabled: true
        domain: "traefik.example.com"
    ssl:
        enabled: true
    acme:
        logging: true
        enabled: true
        email: "ljani@example.com"
        challengeType: dns-01
        staging: true
        dnsProvider:
            name: duckdns
            duckdns:
                DUCKDNS_TOKEN: 123
        domains:
            enabled: true
            domainsList:
            - main: "traefik.example.com"
            - sans:
              - "otherthing.example.com"
        persistence:
            enabled: true
            storageClass: my-traefik-acme

otherthing:
    ingress:
        annotations:
            kubernetes.io/ingress.class: traefik
        hosts:
            - name: otherthing.example.com
  • Define a PersistentVolume for my-traefik-acme or have a default provisioner. local-path-provisioner looks promising, but sadly there are no ARM images for it. So, I'm using PersistentVolume with a hostPath at the moment.
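
For reference, the hostPath PersistentVolume backing my-traefik-acme can be as small as the sketch below (the path and size are assumptions; acme.json only needs a few megabytes):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: traefik-acme
spec:
  storageClassName: my-traefik-acme
  capacity:
    storage: 128Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/traefik-acme   # directory on the node that will hold acme.json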

@padiazg

padiazg commented May 28, 2019

I'm a newbie with Kubernetes. @ljani, can you please give a step-by-step, or more detailed, example of using Traefik with Let's Encrypt on k3s?

@ljani

ljani commented May 29, 2019

@padiazg What other information do you need in addition to the steps above? Here's how to define that PersistentVolume.

@alexellis

+1 @padiazg

@pascalw

pascalw commented Jul 2, 2019

I too struggled with this as a k3s newbie. I finally managed to get it going and of course in hindsight it's pretty simple :-)

I blogged about the setup here: https://pascalw.me/blog/2019/07/02/k3s-https-letsencrypt.html.
It's using cert-manager with the built-in Traefik. Hope it's useful to someone!

@mgoltzsche

mgoltzsche commented Jul 22, 2019

But still: the following allows me to install cert-manager on k8s but fails on k3s (docker-compose, v0.7.0):

$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.9.1/cert-manager.yaml
# ... success ...
$ kubectl wait --for condition=available --timeout=2m apiservice/v1beta1.admission.certmanager.k8s.io
error: timed out waiting for the condition on apiservices/v1beta1.admission.certmanager.k8s.io

No matter how long I wait the APIService never becomes available although the deployments and the service are available. The APIService cannot reach the cert-manager-webhook service for some reason.

UPDATE:
I cannot find any error in the logs but this one:

server_1  | E0723 20:54:04.104870       1 available_controller.go:353] v1beta1.admission.certmanager.k8s.io failed with: Get https://10.43.211.246:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
server_1  | W0723 20:54:07.386207       1 garbagecollector.go:644] failed to discover some groups: map[admission.certmanager.k8s.io/v1beta1:the server is currently unable to handle the request]

There is a corresponding cert-manager issue.

However installing cert-manager without webhook works:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.9.1/cert-manager-no-webhook.yaml

Alternatively a working installation using kustomize with a helm plugin can be found here.

@sandys

sandys commented Oct 1, 2019

hi guys,
I'm both a beginner to k3s and to k8s.
I have multiple subdomains (part of a wildcard TLD) as well as multiple TLDs coming into my cluster.

For compliance reasons, I have to use certificates issued by a CA - both wildcard and single domain. I cannot use Let's Encrypt.

What is the k3s configuration I need for the built-in Traefik to pick up these certificates and then route the domains to the right pods?

Another recommendation I have heard is to not terminate HTTPS at the Traefik ingress and instead run a second nginx/haproxy inside which actually does the termination. Not sure if this is ideal, but if no other way is possible I'll go this way. The SSL stuff is more important than performance.

Not sure if I should file a fresh bug, but I have been stuck on this for a long time and am just not able to figure it out. Any help would be much appreciated.

@j0holo

j0holo commented Dec 23, 2019

Hi @sandys

k3s has a secret used by Traefik named traefik-default-cert in the kube-system namespace. If you base64-encode your wildcard cert (*.example.com) and key, you can install a new TLS cert. For example, cat my-cert.crt | tr -d '\n' | base64 formats it as base64.

After you update the secret you need to delete the Traefik pod. After that it will work.
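
A rough sketch of those two steps (file names are placeholders, and the pod label selector may differ between chart versions; newer kubectl wants --dry-run=client):

# Replace the default cert with your own wildcard cert and key
kubectl -n kube-system create secret tls traefik-default-cert \
  --cert=wildcard.example.com.crt --key=wildcard.example.com.key \
  --dry-run -o yaml | kubectl apply -f -

# Restart Traefik so it picks up the new secret
kubectl -n kube-system delete pod -l app=traefik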

For routing you need an Ingress; this is an example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  creationTimestamp: "2019-11-30T11:25:24Z"
  generation: 3
  name: gin-website
  namespace: default
  resourceVersion: "1760102"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/gin-website
  uid: c44f04fc-f0a0-4a2d-b961-d807af9e6218
spec:
  rules:
  - host: test.dest.lan
    http:
      paths:
      - backend:
          serviceName: gin-website
          servicePort: 8080 # this is the port number of my gin-website service
status:
  loadBalancer:
    ingress:
    - ip: <ip address of master node>

Hope this gives you a helping hand in the right direction. If you need more help, just ask and I'll try to answer your questions.

@kraihn

kraihn commented Feb 9, 2020

Using the information in issue #276, I was able to get ACME certificates working without the above mentioned cert-manager. I modified the manifests/traefik.yaml to utilize valuesContent instead of set. After that, I was able to specify acme.resolvers so the dns-01 challenge would pass. I plan to reconfigure k3s with --no-deploy traefik to persist my changes. This setup works for my home lab without public web exposure. I haven't tested reducing the delayBeforeCheck value yet.

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: stable/traefik
  valuesContent: |-
    imageTag: "1.7.20"
    rbac:
      enabled: "true"
    ssl:
      enabled: "true"
    metrics:
      prometheus:
        enabled: "true"
    kubernetes:
      ingressEndpoint:
        useDefaultPublishedService: "true"
    dashboard:
      enabled: "true"    
      domain: "<removed>"
    acme:
      enabled: "true"
      challengeType: "dns-01"
      delayBeforeCheck: "30"
      email: "<removed>"
      logging: "true"
      persistence:
        enabled: "true"
      staging: "false"
      dnsProvider:
        name: "cloudflare"
        cloudflare:
          CLOUDFLARE_EMAIL: "<removed>"
          CLOUDFLARE_API_KEY: "<removed>"
      resolvers:
        - "1.1.1.1:53"
        - "8.8.8.8:53"

@davidnuzik added this to the Backlog milestone on Mar 2, 2020
@davidnuzik
Contributor

We're moving away from Traefik and will be using Nginx as our default ingress controller moving forward (our upcoming v1.17.4 release should use Nginx by default for new installs). See #817 for details but if you have any concerns/complaints feel free to list them here in this issue. You can still use Traefik if you want, it just won't be the default in new installs.

I would like to close this issue soon since Traefik will no longer be the default ingress controller going forward.

@lpil

lpil commented Mar 2, 2020

Thanks for the update! Will there be preconfigured letsencrypt support with the new ingress setup?

@davidnuzik
Contributor

@lpil I don't know if I can easily provide an answer. I know Traefik supports LE by default but I do not know much about Nginx and LE. If they support it then there's no reason I can see we would not as well.

@lpil

lpil commented Mar 3, 2020

Typically one would include certbot or similar which would handle provisioning the cert for nginx. Are there plans to include it with the new nginx ingress?

@mickkael

Traefik is still here for the next version 1.17.4
#817

#817 (comment)

Going to keep this open for now, but our ultimate solution likely will not be "drop Traefik and go to Nginx." It'll probably involve supporting more options.

@kidproquo

Since traefik comes installed with k3s by default, why do we need cert-manager? Can't we just use traefik for Let's Encrypt and not install cert-manager?

@bemanuel

bemanuel commented Oct 7, 2020

Since traefik comes installed with k3s by default, why do we need cert-manager? Can't we just use traefik for Let's Encrypt and not install cert-manager?

But how? I'm fighting with k3s and Rancher to deploy my services. By the way, the Rancher admin uses cert-manager.

@bemanuel

bemanuel commented Oct 7, 2020

Since traefik comes installed with k3s by default, why do we need cert-manager? Can't we just use traefik for Let's Encrypt and not install cert-manager?

Like this? https://community.hetzner.com/tutorials/howto-k8s-traefik-certmanager

@mgoltzsche

mgoltzsche commented Oct 7, 2020

@kidproquo Using cert-manager you can decouple your applications from the Ingress controller implementation (Traefik in the case of k3s), allowing them to work with other Ingress controllers as well. Also, Traefik can only manage certificates for your Ingresses, but there are other use cases where you need certificates (e.g. for a Kubernetes APIService or webhook) which cert-manager can manage for you as well.

@AndrewSav

If you have several instances of Traefik in your cluster and you are not using Traefik EE (the paid version), you have to use cert-manager if you want LE cert management. With a single Traefik instance per cluster, Traefik without cert-manager may be adequate for managing the LE certs.

@AndrewSav

And this is just to report that I was able to successfully configure cert-manager and it seems to work fine with Traefik 2, which comes with v1.21.0+k3s1. Nothing special was required; I just followed the documentation.

@breuerfelix

breuerfelix commented Dec 4, 2021

Somehow cert-manager 1.5.4 doesn't work for me, but 1.5.3 still works like a charm on a fresh k3s setup with Traefik.
cert-manager/cert-manager@v1.5.3...v1.5.4 is what changed; does anyone know what could cause the issue? I've been debugging for a whole day now and it might be something about ingress classes.
It cannot solve the ACME challenge; the pod with the challenge never receives any data.

/edit
The problem is the following: the old version of cert-manager put an annotation with the ingress class name ("traefik") on the Ingress it creates for the challenge, while the new one sets ingress.spec.ingressClassName = traefik and no longer uses the annotation. So far so good, but I have no idea how to solve that :D

/edit2
Fixed it! Finally found the answer here: cert-manager/cert-manager#2517

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: mailuser@mailserver.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: le-staging-issuer-account-key
    solvers:
      - http01:
          ingress:
            ingressTemplate:
              metadata:
                annotations:
                  kubernetes.io/ingress.class: traefik

@domvo

domvo commented Dec 16, 2021

The problem seems to be that older versions of cert-manager "translated" the ingress.class=traefik to an Ingress resource with an annotation as seen in #117 (comment). Newer versions translate it to an Ingress resource with spec.ingressClassName=traefik.

Out of the box, Traefik only understands the annotation and not the newer field. One workaround would be the comment above mine.

Another workaround is to apply another small resource to your cluster:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik
spec:
  controller: traefik.io/ingress-controller

Simply apply this with kubectl and then the "old" ClusterIssuer version should work without any problem:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: mailuser@mailserver.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: le-staging-issuer-account-key
    solvers:
      - http01:
          ingress:
            class: traefik # this has to match the IngressClass metadata.name above.

@rehiy

rehiy commented Apr 14, 2022

Traefik automatic certificate issuance and a dashboard (post in Chinese): https://www.rehiy.com/post/392

@dereknola
Member

Closing this issue as Stale.
