The cluster cannot be stabilized due to MCO
level=debug msg=Cluster Operator machine-config is Progressing=True LastTransitionTime=2024-05-13 08:44:46 +0000 UTC DurationSinceTransition=91620s Reason= Message=Working towards 4.16.0-rc.1
level=error msg=Error checking cluster operator Progressing status: "context deadline exceeded"
level=debug msg=Cluster Operator machine-config is Progressing=True LastTransitionTime=2024-05-13 08:44:46 +0000 UTC DurationSinceTransition=91621s Reason= Message=Working towards 4.16.0-rc.1
level=debug msg=These cluster operators were stable: [authentication, config-operator, console, control-plane-machine-set, dns, etcd, image-registry, ingress, kube-apiserver, kube-controller-manager, kube-scheduler, kube-storage-version-migrator, machine-api, machine-approver, marketplace, network, openshift-apiserver, openshift-controller-manager, openshift-samples, operator-lifecycle-manager, operator-lifecycle-manager-catalog, operator-lifecycle-manager-packageserver, service-ca]
level=error msg=These cluster operators were not stable: [machine-config]
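To watch whether machine-config eventually settles, the operator can be queried directly with oc. The commands below are a minimal sketch (assuming a logged-in kubeconfig for this cluster), not part of the original report:

# Show the current state of the machine-config cluster operator
oc get clusteroperator machine-config

# Block until it stops progressing (the 30m timeout is an arbitrary choice)
oc wait clusteroperator/machine-config --for=condition=Progressing=False --timeout=30m

# Inspect the MachineConfigPools the operator is rolling out
oc get mcp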
Looks like the node is experiencing disk pressure, which causes some operators to not come up successfully.
[core@crc ~]$ df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
devtmpfs        4.0M     0   4.0M    0%  /dev
tmpfs           6.9G   84K   6.9G    1%  /dev/shm
tmpfs           2.8G   89M   2.7G    4%  /run
/dev/vda4        31G   30G   1.4G   96%  /sysroot
tmpfs           6.9G   12K   6.9G    1%  /tmp
/dev/vda3       350M  111M   217M   34%  /boot
tmpfs           1.4G     0   1.4G    0%  /run/user/1000
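For reference, the disk-pressure condition can be confirmed and investigated with something like the following. The node name crc comes from the prompt above; the rest is a sketch and not taken from the report:

# Check whether the kubelet has reported DiskPressure on the node
oc get node crc -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'

# From inside the node, find what is filling /sysroot
sudo du -xh --max-depth=2 /sysroot | sort -h | tail -20

# Reclaim space held by unused container images
sudo crictl rmi --prune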
Will need to dig a bit more to figure out what is causing it.
Disk pressure was only observed during the first run; after that I did 2-3 more runs and did not reproduce it. Once it is tested by @adrianriobo we can close it.
This seems to happen randomly. @praveenkumar was able to build it internally, and when I tried again it worked as expected.