OCPBUGS-57024: use channel to signal controller shutdown #5104

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Open · cheesesashimi wants to merge 4 commits into main from zzlotnik/OCPBUGS-57024
Conversation

@cheesesashimi (Member) commented Jun 4, 2025

- What I did

Under certain circumstances, the machine-os-builder controller needs to
perform cleanup operations, such as deleting ephemeral objects, when it
shuts down. The shutdown process must also complete cleanly; otherwise,
the leader lease will not be released.

Instead of using a time delay, this uses a channel to block the shutdown
function until the controller has finished its cleanup. Note that we
cannot guarantee the controller will always complete its shutdown before
the kubelet forcibly stops the pod, so we also cannot guarantee that the
leader lease will always be released. Still, this puts us in a much
better place than before, since we no longer need a hard-coded time
delay.
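
A minimal sketch of the channel approach, using only the standard library; the `controller` type and the names `cleanupDone` and `Shutdown` below are illustrative stand-ins, not the actual machine-os-builder code:

```go
package main

import (
	"context"
	"log"
	"time"
)

type controller struct {
	cleanupDone chan struct{} // closed once cleanup has finished
}

// Run blocks until ctx is cancelled, performs cleanup, and then closes
// cleanupDone so that Shutdown can return.
func (c *controller) Run(ctx context.Context) {
	<-ctx.Done()
	log.Println("deleting ephemeral build objects")
	time.Sleep(50 * time.Millisecond) // stand-in for real cleanup work
	close(c.cleanupDone)
}

// Shutdown blocks until cleanup has completed, instead of sleeping for a
// hard-coded delay, so the leader lease can be released promptly afterward.
func (c *controller) Shutdown() {
	<-c.cleanupDone
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	c := &controller{cleanupDone: make(chan struct{})}
	go c.Run(ctx)

	cancel()     // simulate the shutdown signal
	c.Shutdown() // returns only after cleanup has finished
	log.Println("controller shut down cleanly; lease can now be released")
}
```

Closing the channel (rather than sending on it) lets any number of waiters unblock at once, which is why it suits a shutdown signal.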

Additionally, I added a shutdown delay handler to the build controller which, upon receiving a shutdown signal, inspects all of the objects that it created. The goal is to ensure that when a MachineOSConfig is deleted, the deletion cascades to all of its child objects, including MachineOSBuilds, Jobs, ConfigMaps, and Secrets. The handler retries for up to 10 seconds before exiting. Specifically, it checks that every child object is either nonexistent or pending deletion (that is, it has a deletion timestamp); it does not wait for the deletions to actually complete.
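
A hedged sketch of that delay loop, again with illustrative names (`childObject` and `listChildren` stand in for the real MachineOSBuild/Job/ConfigMap/Secret lookups in the controller):

```go
package shutdown

import (
	"context"
	"errors"
	"fmt"
	"time"
)

type childObject struct {
	name              string
	deletionTimestamp *time.Time // nil until the object is marked for deletion
}

// listChildren would query the API server for the objects the controller
// created; objects that no longer exist are simply absent from the result.
func listChildren(ctx context.Context) ([]childObject, error) {
	return nil, nil // placeholder
}

// waitForChildDeletion retries for up to 10 seconds and returns nil once
// every child object is either nonexistent or pending deletion. It does not
// wait for the deletions themselves to complete.
func waitForChildDeletion(ctx context.Context) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for {
		children, err := listChildren(ctx)
		if err != nil {
			return fmt.Errorf("listing child objects: %w", err)
		}

		pending := 0
		for _, child := range children {
			if child.deletionTimestamp == nil {
				pending++
			}
		}
		if pending == 0 {
			return nil // everything is gone or marked for deletion
		}

		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for child objects to be marked for deletion")
		case <-ticker.C:
		}
	}
}
```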

- How to verify it

  1. Opt a cluster into on-cluster layering.
  2. Wait for the machine-os-builder pod and the build pod to start.
  3. Delete all MachineOSConfigs, which should cause the machine-os-builder pod to shut down.
  4. Wait for the machine-os-builder pod to shut down.
  5. Check for any MachineOSBuilds, Jobs, ConfigMaps, or Secrets that belong to the machine-os-builder. The objects may still be present; if they are, they should have a deletion timestamp set.
  6. Run $ oc get leases -n openshift-machine-config-operator.
  7. There should still be an entry for the machine-os-builder, but it should not have a holder identity associated with it.
  8. Create a new MachineOSConfig, which should cause the machine-os-builder pod to start again.
  9. The machine-os-builder pod should be able to acquire a leader lease quickly.

- Description for the changelog
Use channel instead of time delay for machine-os-builder shutdown

@openshift-ci-robot added the jira/severity-moderate (referenced Jira bug's severity is moderate for the branch this PR is targeting) and jira/valid-reference (indicates that this PR references a valid Jira ticket of any type) labels on Jun 4, 2025
@openshift-ci-robot (Contributor)

@cheesesashimi: This pull request references Jira Issue OCPBUGS-57024, which is invalid:

  • expected the bug to target the "4.20.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot added the jira/invalid-bug (indicates that a referenced Jira bug is invalid for the branch this PR is targeting) label on Jun 4, 2025
@cheesesashimi force-pushed the zzlotnik/OCPBUGS-57024 branch from 8dd30ae to 11aa8da on June 4, 2025 13:31
openshift-ci bot (Contributor) commented Jun 4, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cheesesashimi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 4, 2025
@umohnani8 (Contributor)

LGTM

@cheesesashimi force-pushed the zzlotnik/OCPBUGS-57024 branch 2 times, most recently from de4e63c to e3954aa on June 12, 2025 17:52
@cheesesashimi (Member, Author)

/retest

@cheesesashimi force-pushed the zzlotnik/OCPBUGS-57024 branch from e3954aa to d460172 on June 13, 2025 15:31
The idea behind shutdowndelayhandler is to check for any orphaned
ephemeral build objects before shutting down the build controller. In
this context, an orphaned object is one whose associated MachineOSConfig
and MachineOSBuild are deleted or pending deletion while the object
itself is not yet pending deletion. In that situation, we should delay
shutdown until the child objects are pending deletion.
@cheesesashimi force-pushed the zzlotnik/OCPBUGS-57024 branch from d460172 to ab3d5bc on June 13, 2025 20:19
@sergiordlr (Contributor) commented Jun 18, 2025

We tried to verify it, but we found some issues.

When the os-builder pod is restarted, a new MachineOSBuild resource is generated, a new image is built, and it is applied again, causing several restarts on the nodes. This can be reproduced consistently.

It can also be reproduced manually: enable OCL; once the MachineOSBuild has finished, the job is removed, and the image starts being applied to the nodes, manually delete the os-builder pod and a new MachineOSBuild is created.

We tried to reproduce this in 4.20 without this fix, but we were not able to.

@sergiordlr (Contributor) commented Jun 27, 2025

Verified using IPI on AWS

We can see that the os-builder can take the lease without problems after being evicted while applying the new osImage:

I0627 10:14:10.533289       1 leaderelection.go:257] attempting to acquire leader lease openshift-machine-config-operator/machine-os-builder...
I0627 10:14:10.547504       1 leaderelection.go:271] successfully acquired lease openshift-machine-config-operator/machine-os-builder
I0627 10:14:10.558292       1 simple_featuregate_reader.go:171] Starting feature-gate-detector

We can see the os-builder pod releasing the lease when deleted:

I0627 10:32:52.777997 1 start.go:90] Stopped leading; machine-os-builder terminating.

We can see that the os-builder pod correctly releases the lease when we remove the MOSC resource:

I0627 10:50:38.238757       1 reconciler.go:1013] Deleted image quay.io/mcoqe/layering:mosc-worker-f783b652812662f9416e73d0eb8b6b51 from registry for MachineOSBuild mosc-worker-f783b652812662f9416e73d0eb8b6b51
I0627 10:50:38.245589       1 reconciler.go:1117] Finished deleting MachineOSBuild "mosc-worker-f783b652812662f9416e73d0eb8b6b51" after 528.378818ms
I0627 10:50:38.273739       1 shutdowndelayhandler.go:62] All child objects are either deleted, marked for deletion, or not orphaned after 1.008278291s
I0627 10:50:38.273757       1 osbuildcontroller.go:175] OSBuildController has shut down
I0627 10:50:38.273823       1 start.go:90] Stopped leading; machine-os-builder terminating.

All OCL e2e test cases have passed.

The issue described in the previous comment is not related to this PR; it is reported here: https://issues.redhat.com/browse/OCPBUGS-58191

We still see some weird behaviour; we still need more testing.

@sergiordlr (Contributor)

We can see the same weird behaviour without this fix.

We can approve it.

/label qe-approved

@openshift-ci added the qe-approved (signifies that QE has signed off on this PR) label on Jun 27, 2025
@pablintino (Contributor)

/retest-required

openshift-ci bot (Contributor) commented Jul 1, 2025

@cheesesashimi: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-aws-ovn-upgrade-out-of-change | ab3d5bc | link | false | /test e2e-aws-ovn-upgrade-out-of-change |
| ci/prow/e2e-gcp-op-techpreview | ab3d5bc | link | false | /test e2e-gcp-op-techpreview |
| ci/prow/e2e-aws-ovn-windows | ab3d5bc | link | false | /test e2e-aws-ovn-windows |
| ci/prow/e2e-gcp-op-ocl | ab3d5bc | link | false | /test e2e-gcp-op-ocl |
| ci/prow/e2e-azure-ovn-upgrade-out-of-change | ab3d5bc | link | false | /test e2e-azure-ovn-upgrade-out-of-change |
| ci/prow/e2e-gcp-op | ab3d5bc | link | true | /test e2e-gcp-op |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
