
status not updated when nodes are added/removed from machine config pool #170

Closed
jensfr opened this issue Jan 19, 2022 · 5 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

jensfr (Contributor) commented Jan 19, 2022

Description

The status (installation/uninstallation) is only updated when the KataConfig CR is
created or deleted. When a custom MCP is used and a node is added or removed, the
status is not updated, even though the change triggers enabling/disabling the
sandboxed-containers extension on the affected nodes.

Steps to reproduce the issue:

  1. create a kataconfig and specify a node selector
  2. add the label to two nodes -> the extension will be enabled on those nodes
  3. remove the label from one of those nodes -> the extension is disabled on that node, but the status doesn't reflect that operation

Describe the results you received:

when the label was removed from the node, the status was not updated

Describe the results you expected:

an updated status reflecting that installation/uninstallation is in progress

What we could do is cache the previous size of the MCP, and when the MCP status changes to 'updating', compare sizes:

  1. size increased: clear the install status, set installing=true, update the install status, and set installing=false once the MCP status is 'updated'
  2. size decreased: clear the uninstall status, set uninstalling=true, update the uninstall status, and set uninstalling=false once the MCP status is 'updated'
  3. size unchanged but MCP updating: do both 1 and 2 (the same number of nodes may have been removed and added at the same time)
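The size-comparison part of the steps above could be sketched as follows in Go (the operator's language). All names here (`Action`, `decideAction`) are illustrative and not the operator's actual API; the sketch only captures the three cases of the caching idea, not the status writes themselves.

```go
package main

import "fmt"

// Action says which status bookkeeping the reconciler should perform
// when the MCP reports that it is updating.
type Action int

const (
	NoAction       Action = iota
	TrackInstall          // case 1: size increased -> track installation
	TrackUninstall        // case 2: size decreased -> track uninstallation
	TrackBoth             // case 3: size unchanged -> track both (node swap)
)

// decideAction compares the cached previous MCP machine count with the
// current one. Hypothetical helper, not code from the operator.
func decideAction(prevSize, curSize int, mcpUpdating bool) Action {
	if !mcpUpdating {
		return NoAction
	}
	switch {
	case curSize > prevSize:
		return TrackInstall
	case curSize < prevSize:
		return TrackUninstall
	default:
		return TrackBoth
	}
}

func main() {
	fmt.Println(decideAction(2, 3, true) == TrackInstall)   // a node was added
	fmt.Println(decideAction(3, 2, true) == TrackUninstall) // a node was removed
	fmt.Println(decideAction(2, 2, true) == TrackBoth)      // possible swap
	fmt.Println(decideAction(2, 3, false) == NoAction)      // MCP not updating
}
```

The cached size would be compared against `status.machineCount` on the MCP, and refreshed after each transition to 'updated'.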
pmores (Contributor) commented Jan 28, 2022

@jensfr how do we learn about what happened though? Removing a label from a node resource won't by itself trigger our reconciliation, will it? Or can we rely on OSC reconciliation being triggered on node label removal in some indirect way? Perhaps we'll need to start to watch Nodes as well.
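If watching Nodes does turn out to be necessary, one way to trigger reconciliation on node label changes is controller-runtime's builder API, which can map Node events onto KataConfig reconcile requests. The snippet below is a guess at what that wiring could look like, not the operator's actual setup; `KataConfigReconciler`, `nodeToKataConfigRequests`, and the import paths are assumptions.

```go
// Hypothetical wiring sketch, assuming controller-runtime.
package controllers

import (
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/source"

	// Assumed import path for the KataConfig API types.
	kataconfigv1 "github.com/openshift/sandboxed-containers-operator/api/v1"
)

func (r *KataConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&kataconfigv1.KataConfig{}).
		// Also enqueue a KataConfig reconcile whenever a Node changes,
		// so that label additions/removals are observed even though
		// they don't touch the KataConfig CR itself.
		Watches(&source.Kind{Type: &corev1.Node{}},
			handler.EnqueueRequestsFromMapFunc(r.nodeToKataConfigRequests)).
		Complete(r)
}
```

The mapping function (here the hypothetical `nodeToKataConfigRequests`) would return a reconcile request for the KataConfig whenever the changed Node matches, or previously matched, the configured node selector.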

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 28, 2022
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 28, 2022
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this as completed Jun 27, 2022
@openshift-ci

openshift-ci bot commented Jun 27, 2022

@openshift-bot: Closing this issue.

In response to the /close command above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

3 participants