Syncing secret across namespaces gives error "unable to fetch certificate that owns the secret" #4210

Closed
aesa-dr opened this issue Jul 14, 2021 · 12 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


aesa-dr commented Jul 14, 2021

Describe the bug:
When syncing (using kubed) a wildcard cert secret across namespaces, we get an error in the cert-manager-cainjector pod for every namespace the secret is synced into.

The wildcard certificate still works though.

The error we get:

cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.cert-manager.io \"private-certificate-cluster-domain\" not found" "certificate"={"Namespace":"kube-public","Name":"private-certificate-cluster-domain"} "secret"={"Namespace":"kube-public","Name":"private-tls-cluster-domain"}
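
For context (an added note, not part of the original report): the cainjector's secret-for-certificate-mapper presumably resolves the owning Certificate from the cert-manager.io/certificate-name annotation on the secret, which gets copied across namespaces along with everything else. The annotation on a copy can be inspected with:

kubectl get secret private-tls-cluster-domain \
  --namespace kube-public \
  -o jsonpath='{.metadata.annotations.cert-manager\.io/certificate-name}'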

Expected behaviour:

I get no errors in the cert-manager-cainjector logs.

Steps to reproduce the bug:

We use a bash script to install cert-manager, secrets and certificates, so the variables below are not filled in.

helm install ${CHART_NAME} ${CHART_REPO} \
    --namespace=${NAMESPACE} \
    --version ${CHART_VERSION} \
    --create-namespace \
    --wait \
    --set resources.limits.cpu=500m \
    --set resources.limits.memory=256Mi \
    --set resources.requests.cpu=100m \
    --set resources.requests.memory=256Mi \
    --set installCRDs=true

apiVersion: v1
data:
  tls.crt: ""
  tls.key: ""
kind: Secret
metadata:
  annotations:
    kubed.appscode.com/sync: ""
  name: _name_
type: kubernetes.io/tls
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: _name_
  annotations:
    kubed.appscode.com/sync: ""
spec:
  secretName: _secretName_
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
    group: cert-manager.io
  commonName: '*._dns_'
  dnsNames:
  - _dns_
  - '*._dns_'
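
Worth noting (an observation added for context, not from the thread): kubed's config syncer only replicates ConfigMaps and Secrets, so the kubed.appscode.com/sync annotation on the Certificate above has no effect; only the Secret is copied into other namespaces, which is why the owning Certificate cannot be found there. This can be confirmed by listing Certificates across namespaces:

kubectl get certificates.cert-manager.io --all-namespaces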

Anything else we need to know?:

Environment details:

  • Kubernetes version: v1.18.20-gke.501
  • Cloud-provider/provisioner: Google
  • cert-manager version: v1.4
  • Install method: helm

/kind bug

@jetstack-bot added the kind/bug label on Jul 14, 2021
@jetstack-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle stale

@jetstack-bot added the lifecycle/stale label on Oct 12, 2021
@nick-oconnor

/remove-lifecycle stale

@jetstack-bot removed the lifecycle/stale label on Oct 19, 2021
@dicolasi

I have exactly the same problem. Did you find a solution, @aesa-dr?


thatsmydoing commented Jan 7, 2022

I believe this is caused by the copied secrets having cert-manager annotations. It looks like kubed can strip those: https://appscode.com/products/kubed/v0.12.0/guides/config-syncer/intra-cluster/#remove-annotation

It seems the error message also shows up for secrets whose certificates have been deleted.
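
To illustrate (a hypothetical sketch; the names are taken from the log line in the report, and the annotation set is what cert-manager typically stamps on secrets it manages): the copy that lands in kube-public would look roughly like this, and the cainjector then looks for the named Certificate in kube-public and fails:

apiVersion: v1
kind: Secret
metadata:
  name: private-tls-cluster-domain
  namespace: kube-public
  annotations:
    # stamped by cert-manager on the source secret and copied verbatim;
    # this is the reference the cainjector fails to resolve in kube-public
    cert-manager.io/certificate-name: private-certificate-cluster-domain
    cert-manager.io/issuer-kind: ClusterIssuer
    cert-manager.io/issuer-name: letsencrypt
type: kubernetes.io/tls
data:
  tls.crt: ""  # contents elided
  tls.key: ""  # contents elided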


nick-oconnor commented Jan 11, 2022

@thatsmydoing Thanks for the link, however I think that section of the doc is referring to the fact that manually removing the kubed.appscode.com/sync annotation removes the copies automatically, not that it applies a mutation to the copies (which is what we need). I would not be surprised if applying mutations to synced resources is outside the scope of kubed.

@thatsmydoing

Oh, you're right. Sorry about that. There is indeed an open issue asking for it: https://github.com/kubeops/config-syncer/issues/465
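
Until that lands, a possible manual stopgap (an added suggestion, not from the thread; kubed will presumably re-add the annotation on its next sync) is to strip the offending annotation from a copy by hand, using kubectl's trailing-dash syntax for removing an annotation:

kubectl annotate secret private-tls-cluster-domain \
  --namespace kube-public \
  cert-manager.io/certificate-name-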

@jetstack-bot added the lifecycle/stale label on Apr 11, 2022
@jetstack-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle rotten
/remove-lifecycle stale

@jetstack-bot added the lifecycle/rotten label and removed the lifecycle/stale label on May 11, 2022
@jetstack-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to jetstack.
/close

@jetstack-bot

@jetstack-bot: Closing this issue.


@jetstack-bot

@landorg: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen



landorg commented Jul 25, 2023

Did anyone ever manage to fix this?
Is this an actual problem, or just an error message that does no harm?
