The pod of the CVO cannot restart via the daemonset #620
So the kube-controller-manager running on Also open an issue on https://github.com/openshift/cluster-kube-controller-manager-operator if the pods don't come back.
This is from last Wednesday:

$ git show --format='%h %aD %s' 0baec58
0baec58 Wed, 31 Oct 2018 15:35:09 -0700 Merge pull request #571 from sttts/sttts-remove-kube-core-secrets

31 pull requests have landed since, and things like #551 and #624 are important to keep up with the evolving release image. Can you try again with a fresh build from master?
@wking Thanks! I'll give it a try!

I tried the latest version but got the errors below:

[jzhang@dhcp-140-18 installer]$ openshift-install version
openshift-install v0.3.0-155-g25ceecc296bd020219967ae1258180df01acff0f
Terraform v0.11.8
Your version of Terraform is out of date! The latest version
is 0.11.10. You can update by downloading from www.terraform.io/downloads.html
[jzhang@dhcp-140-18 installer]$ openshift-install create cluster --dir 1107
? Image https://releases-rhcos.svc.ci.openshift.org/storage/releases/maipo/47.77/redhat-coreos-maipo-47.77-qemu.qcow2
INFO Fetching OS image...
INFO Using Terraform to create cluster...
INFO Waiting for bootstrap completion...
INFO API v1.11.0+d4cacc0 up
WARNING RetryWatcher - getting event failed! Re-creating the watcher. Last RV: 1827
INFO Destroying the bootstrap resources...
INFO Using Terraform to destroy bootstrap resources...
[jzhang@dhcp-140-18 installer]$ sudo virsh list
setlocale: No such file or directory
Id Name State
----------------------------------------------------
30 master0 running
[jzhang@dhcp-140-18 installer]$ openshift-install destroy cluster --dir 1107
FATAL Error executing openshift-install: Failed while preparing to destroy cluster: no destroyers registered for "libvirt"

To get the libvirt destroyer you need to build with
Yes, it did, thanks! Sorry for the late reply.

@jianzhangbjz, is everything working for you, then? Can you close if so?

@wking Actually, no. We're still hitting the OCP 4.0 crash, but it's not related to this issue. Closing it.
Version
Platform (aws|libvirt|openshift):
libvirt
What happened?
The CVO's pod cannot be restarted.
What you expected to happen?
The CVO pod should restart successfully. Also, how can I make it work?
How to reproduce it (as minimally and precisely as possible)?
Not sure; I just deleted the pod:

$ oc delete pods --all -n openshift-cluster-version
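A minimal sketch of the reproduction, assuming a running cluster with the `oc` CLI on the PATH (the namespace is taken from the command above; everything else is a guess at the workflow, not the exact steps used):

```shell
# Sketch: delete the CVO pods, then check whether the controller
# recreates them. Guarded so this is a no-op where oc is unavailable.
if command -v oc >/dev/null 2>&1; then
  # Remove every pod in the CVO namespace.
  oc delete pods --all -n openshift-cluster-version
  # List the pods again; a healthy controller should have started
  # a replacement (re-run or use 'oc get pods -w' to watch).
  oc get pods -n openshift-cluster-version
fi
```

If the replacement pod never appears, capturing `oc get events -n openshift-cluster-version` output would help the maintainers debug why.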
Anything else we need to know?
References