Cannot delete keypair secrets #6482

Closed
Overbryd opened this issue Feb 19, 2019 · 8 comments · Fixed by #8945
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@Overbryd

1. What kops version are you running? The command kops version will display
this information.

$ kops version
Version 1.11.0

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Irrelevant.

3. What cloud provider are you using?

AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

$ kops delete secret keypair kube-controller-manager
I0219 15:22:22.716650   15341 certificate.go:106] Ignoring unexpected PEM block: "RSA PRIVATE KEY"

error deleting secret: error deleting certificate: error loading certificate "s3://<redacted>/<redacted>/pki/private/kube-controller-manager/<redacted>.key": could not parse certificate
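
For context, the failure mode looks like a certificate loader being handed the PEM-encoded private key file. A minimal, standalone Go sketch (illustrative only, not the actual kops source) that reproduces both messages above:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// loadCertificate mimics a loader that walks PEM blocks and only accepts
// "CERTIFICATE" blocks. Pointed at a .key file, it skips the
// "RSA PRIVATE KEY" block and then fails, just like the output above.
func loadCertificate(data []byte) (*x509.Certificate, error) {
	for {
		block, rest := pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type == "CERTIFICATE" {
			return x509.ParseCertificate(block.Bytes)
		}
		fmt.Printf("Ignoring unexpected PEM block: %q\n", block.Type)
		data = rest
	}
	return nil, fmt.Errorf("could not parse certificate")
}

func main() {
	// Generate a throwaway RSA key and encode it as a single
	// "RSA PRIVATE KEY" PEM block, which is the block type the log
	// above shows kops finding in the .key file.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	keyPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})

	_, err := loadCertificate(keyPEM)
	fmt.Println("error:", err) // error: could not parse certificate
}
```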

5. What happened after the commands executed?

They failed.

6. What did you expect to happen?

I expected the command to delete the kube-controller-manager keypair, as described in the documentation: https://github.com/kubernetes/kops/blob/master/docs/rotate-secrets.md

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

Irrelevant to this issue.

8. Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or into a gist and provide the gist link here.

$ kops delete secret keypair kube-controller-manager -v10
I0219 15:23:35.129669   15348 factory.go:68] state store s3://<redacted>/<redacted>
I0219 15:23:35.409810   15348 s3context.go:194] found bucket in region "eu-central-1"
I0219 15:23:35.409867   15348 s3fs.go:220] Reading file "s3://<redacted>/<redacted>/config"
I0219 15:23:36.054560   15348 s3fs.go:257] Listing objects in S3 bucket "<redacted>" with prefix "<redacted>/pki/private/kube-controller-manager/"
I0219 15:23:36.095834   15348 s3fs.go:285] Listed files in s3://<redacted>/<redacted>/pki/private/kube-controller-manager: [s3://<redacted>/<redacted>/pki/private/kube-controller-manager/<redacted>.key s3://<redacted>/<redacted>/pki/private/kube-controller-manager/keyset.yaml]
I0219 15:23:36.096162   15348 s3fs.go:220] Reading file "s3://<redacted>/<redacted>/pki/private/kube-controller-manager/<redacted>.key"
I0219 15:23:36.170662   15348 certificate.go:106] Ignoring unexpected PEM block: "RSA PRIVATE KEY"

error deleting secret: error deleting certificate: error loading certificate "s3://<redacted>/<redacted>/pki/private/kube-controller-manager/<redacted>.key": could not parse certificate
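
The verbose log confirms the sequence: kops lists the files under pki/private/kube-controller-manager/, reads the .key file, and then tries (and fails) to parse it as a certificate. One plausible shape for a fix, purely as an illustration (the actual change in #8945 may look different), would be to classify PEM files by block type before deciding how to handle them:

```go
package main

import (
	"encoding/pem"
	"fmt"
	"strings"
)

// isPrivateKeyPEM reports whether data contains at least one PEM block and
// every block is a private key. PKCS#1 ("RSA PRIVATE KEY"), PKCS#8
// ("PRIVATE KEY"), and EC ("EC PRIVATE KEY") keys all say so in the type.
func isPrivateKeyPEM(data []byte) bool {
	seen := false
	for {
		block, rest := pem.Decode(data)
		if block == nil {
			return seen
		}
		if !strings.Contains(block.Type, "PRIVATE KEY") {
			return false
		}
		seen = true
		data = rest
	}
}

func main() {
	// pem.EncodeToMemory just base64-encodes whatever bytes it is given,
	// so dummy payloads are fine: the classifier never parses the contents.
	keyFile := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: []byte{0x01}})
	certFile := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: []byte{0x01}})

	fmt.Println(isPrivateKeyPEM(keyFile))  // true  -> treat as a key; don't parse as a cert
	fmt.Println(isPrivateKeyPEM(certFile)) // false -> parse as a certificate
}
```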

9. Anything else we need to know?

Please take this issue seriously and don't let your bots close it.
This was already reported in #5318

@Sluggerman

Same issue here; this looks like a bug. Any updates would be appreciated.

@philwhln

We just hit this too.

$ kops version
Version 1.11.1 (git-0f2aa8d30)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 25, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 25, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kuzaxak

kuzaxak commented Dec 14, 2019

I have the same issue in kops 1.15.0.

@Deepak1100
Contributor

I also have the same issue in kops 1.16.0.
