
WebHook error #4425

Closed · PrashantGSIN opened this issue Sep 2, 2021 · 15 comments

Labels: kind/bug (Categorizes issue or PR as related to a bug.) · lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

@PrashantGSIN

Describe the bug:

The cert-manager-vault-token secret was created; after that, I am getting the error below.

Expected behaviour:

Should have fetched the certificates.

Steps to reproduce the bug:

Restarted Vault and microk8s.

Anything else we need to know?:

Environment details:

  • Kubernetes version: 1.21/stable
  • Cloud-provider/provisioner: testing over local Ubuntu
  • cert-manager version: v1.5.3
  • Install method (e.g. helm/static manifests): static

/kind bug
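
The attached YAML isn't shown here, but for reference, a minimal token-auth Vault Issuer of the kind described might look like the sketch below (the Vault address and signing path are assumed placeholders; the secret name matches the one mentioned in this report):

```sh
# Sketch only: adjust server, path, and namespace to your setup
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: default
spec:
  vault:
    server: http://vault.default.svc:8200   # assumed in-cluster Vault address
    path: pki/sign/example-dot-com          # assumed PKI signing role path
    auth:
      tokenSecretRef:
        name: cert-manager-vault-token      # the secret created above
        key: token
EOF
```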

@jetstack-bot jetstack-bot added the kind/bug Categorizes issue or PR as related to a bug. label Sep 2, 2021
@irbekrm (Collaborator) commented Sep 2, 2021

Please describe your installation config, what actually happened, what resources got created, and add any logs with error messages, etc.

@PrashantGSIN (Author)

> Please describe your installation config, what actually happened, what resources got created, and add any logs with error messages, etc.

@irbekrm,

I am testing the K8s cert-manager Vault configuration to automate certificates.

I am able to install and start microk8s, run the pods, and generate the token, but my Issuer is throwing this error.

[Screenshot attached: Issuer error message, 2021-09-02 15:23]

I have attached the yml file in my first comment.

Let me know if you need any more of the details.

@PrashantGSIN (Author)

Hi, any other thoughts?

@irbekrm (Collaborator) commented Sep 3, 2021

It looks like the Kubernetes API server is not able to reach the cert-manager webhook.

Have you checked that the webhook deployment is up and healthy? Are there any error logs, etc.?

It may also be that your cluster networking setup does not allow the Kubernetes API server to reach a cluster service, in which case you might need to run the webhook on the host network: https://cert-manager.io/docs/installation/compatibility/#aws-eks
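
For a quick check, something along these lines (resource names assume a default cert-manager install; the helm values at the end are the host-network workaround from the linked compatibility page):

```sh
# Is the webhook deployment up and are its pods Ready?
kubectl get deploy,pods -n cert-manager -l app.kubernetes.io/component=webhook

# Any TLS or serving errors in the webhook logs?
kubectl logs -n cert-manager deploy/cert-manager-webhook --tail=50

# Host-network workaround for clusters where the API server cannot reach
# cluster services (helm installs only; values per the linked docs)
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --set webhook.hostNetwork=true \
  --set webhook.securePort=10260
```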

@PrashantGSIN (Author)

> It looks like the Kubernetes API server is not able to reach the cert-manager webhook.
>
> Have you checked that the webhook deployment is up and healthy? Are there any error logs, etc.?
>
> It may also be that your cluster networking setup does not allow the Kubernetes API server to reach a cluster service, in which case you might need to run the webhook on the host network: https://cert-manager.io/docs/installation/compatibility/#aws-eks

The issue was once resolved by removing microk8s with "sudo snap remove microk8s", but today I was testing from scratch again and got the same error, although the webhook pod is up and running without any errors. I am testing it over Amazon Ubuntu. Can't we set it up over Amazon Ubuntu?
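
For reference, a from-scratch reinstall on the channel reported above looks roughly like this (enabling dns is an assumption, since microk8s ships with in-cluster DNS disabled and cert-manager generally needs it):

```sh
sudo snap remove microk8s
sudo snap install microk8s --classic --channel=1.21/stable
microk8s status --wait-ready
# in-cluster DNS is disabled by default on microk8s; cert-manager needs it
microk8s enable dns
```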

@irbekrm (Collaborator) commented Sep 14, 2021

It shouldn't be related to Amazon Ubuntu; it is more likely related to your cluster networking setup. You should verify that the Kubernetes API server is able to reach the webhook service.
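
One way to sanity-check this from inside the cluster (service name and namespace assume a default install; note this probes the pod network, while the API server calls the webhook from the host, so repeating the curl against the service ClusterIP from the node itself is also worth doing):

```sh
# The webhook is exposed as a ClusterIP service on port 443
kubectl get svc -n cert-manager cert-manager-webhook

# Probe the TLS endpoint from a throwaway pod; any HTTP response,
# even an error status, proves the service is reachable
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl \
  --command -- curl -ksv https://cert-manager-webhook.cert-manager.svc:443/
```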

@PrashantGSIN (Author)

> It shouldn't be related to Amazon Ubuntu; it is more likely related to your cluster networking setup. You should verify that the Kubernetes API server is able to reach the webhook service.

Yes, I am using the default VPC on AWS with the required ports enabled. How can I check whether the k8s API server is able to reach the webhook service? Please help.

@irbekrm (Collaborator) commented Sep 14, 2021

I don't really understand the setup - is this a self-hosted Kubernetes or are you using something like EKS?

@PrashantGSIN (Author)

> I don't really understand the setup - is this a self-hosted Kubernetes or are you using something like EKS?

I have taken an Ubuntu machine from the AWS EC2 console and am testing there, so I guess it is self-hosted k8s.
Let me know if you need any further information.

@PrashantGSIN (Author)

> I don't really understand the setup - is this a self-hosted Kubernetes or are you using something like EKS?
>
> I have taken an Ubuntu machine from the AWS EC2 console and am testing there, so I guess it is self-hosted k8s.
> Let me know if you need any further information.

I am now testing microk8s on local Ubuntu 20.04 and still getting the same error. Please help.

@maelvls (Member) commented Sep 15, 2021

Note: if you like, you can join the #cert-manager channel on the Kubernetes Slack. You can join with this link.

@jetstack-bot (Collaborator)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle stale

@jetstack-bot jetstack-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2021
@jetstack-bot (Collaborator)

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to jetstack.
/lifecycle rotten
/remove-lifecycle stale

@jetstack-bot jetstack-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 13, 2022
@jetstack-bot (Collaborator)

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to jetstack.
/close

@jetstack-bot (Collaborator)

@jetstack-bot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
> Send feedback to jetstack.
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
