Error building bundle for 4.16.0-rc1 #882

Closed
Tracked by #881
adrianriobo opened this issue May 14, 2024 · 3 comments

@adrianriobo
Contributor

The cluster cannot be stabilized due to the MCO (machine-config operator):

level=debug msg=Cluster Operator machine-config is Progressing=True LastTransitionTime=2024-05-13 08:44:46 +0000 UTC DurationSinceTransition=91620s Reason= Message=Working towards 4.16.0-rc.1
level=error msg=Error checking cluster operator Progressing status: "context deadline exceeded"
level=debug msg=Cluster Operator machine-config is Progressing=True LastTransitionTime=2024-05-13 08:44:46 +0000 UTC DurationSinceTransition=91621s Reason= Message=Working towards 4.16.0-rc.1
level=debug msg=These cluster operators were stable: [authentication, config-operator, console, control-plane-machine-set, dns, etcd, image-registry, ingress, kube-apiserver, kube-controller-manager, kube-scheduler, kube-storage-version-migrator, machine-api, machine-approver, marketplace, network, openshift-apiserver, openshift-controller-manager, openshift-samples, operator-lifecycle-manager, operator-lifecycle-manager-catalog, operator-lifecycle-manager-packageserver, service-ca]
level=error msg=These cluster operators were not stable: [machine-config]
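For reference, a minimal sketch of how the stuck operator could be inspected from inside the cluster (standard oc commands, not taken from this report):

# Check the operator's reported conditions and recent events
oc get clusteroperator machine-config
oc describe clusteroperator machine-config
# The MCO works through machine config pools; check pool status and the operator's pods
oc get machineconfigpool
oc -n openshift-machine-config-operator get pods -o wide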
@praveenkumar
Member

Looks like the node is experiencing disk pressure, which causes some operators to not come up successfully:

[core@crc ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        4.0M     0  4.0M   0% /dev
tmpfs           6.9G   84K  6.9G   1% /dev/shm
tmpfs           2.8G   89M  2.7G   4% /run
/dev/vda4        31G   30G  1.4G  96% /sysroot
tmpfs           6.9G   12K  6.9G   1% /tmp
/dev/vda3       350M  111M  217M  34% /boot
tmpfs           1.4G     0  1.4G   0% /run/user/1000

Will need to dig a bit more to figure out what is causing it.
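One possible way to confirm the disk-pressure condition and see what is consuming space (assuming the CRC node is named crc; these commands are illustrative, not from this report):

# Check whether kubelet has set the DiskPressure condition on the node
oc describe node crc | grep -i -A1 DiskPressure
# Inside the VM, see what is filling /sysroot; on a CRC node this is mostly container image storage
sudo du -xh -d1 /sysroot 2>/dev/null | sort -h | tail
sudo crictl imagefsinfo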

@praveenkumar
Member

Disk pressure was only observed during the first run; after that I did 2-3 more runs and could not reproduce it. Once it is tested by @adrianriobo we can close it.

@adrianriobo
Contributor Author

This seems to happen randomly. @praveenkumar was able to build it internally, and when I tried again it worked as expected.
