
Support multiple runtimeclasses in kataconfig.status.runtimeClass #344


@pmores (Contributor) commented Sep 22, 2023

This controller can now create multiple RuntimeClasses, for plain kata
and for peer pods. However, only the original kata RuntimeClass has been
accounted for in KataConfig.status.runtimeClass so far.

This is not merely a cosmetic problem. The controller uses
KataConfig.status.runtimeClass to find out whether any pods are still
using the kata runtime before it removes the runtime from a cluster on
uninstallation. Since the peer pods RuntimeClass was never represented
in that field, running peer pods were ignored during uninstallation.
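
For illustration, a minimal sketch of the kind of check this implies, assuming a controller-runtime client; the helper name and signature here are hypothetical, not the operator's actual code:

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// anyPodsUsingRuntimeClasses is a hypothetical helper: it reports whether any
// pod in the cluster still references one of the RuntimeClasses recorded in
// KataConfig.status.runtimeClass (which, with this change, covers peer pods too).
func anyPodsUsingRuntimeClasses(ctx context.Context, c client.Client, runtimeClasses []string) (bool, error) {
	wanted := make(map[string]bool, len(runtimeClasses))
	for _, rc := range runtimeClasses {
		wanted[rc] = true
	}

	podList := &corev1.PodList{}
	if err := c.List(ctx, podList); err != nil {
		return false, err
	}
	for _, pod := range podList.Items {
		if pod.Spec.RuntimeClassName != nil && wanted[*pod.Spec.RuntimeClassName] {
			return true, nil
		}
	}
	return false, nil
}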

As for implementation, KataConfig.status.runtimeClass is now optional
rather than required. This isn't really part of the core idea of this
commit; it works around the fact that kubebuilder doesn't seem to
support array fields with an empty array as the default value.
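
Roughly, the status field ends up looking something like the sketch below; the kubebuilder marker and json tag are assumptions based on this description, not a verbatim copy of the operator's type:

package example

// Sketch of the status type described above. RuntimeClass becomes a list so it
// can hold both the plain kata and the peer pods RuntimeClass names; it is
// marked +optional because kubebuilder doesn't support defaulting an array
// field to an empty array.
type KataConfigStatus struct {
	// +optional
	RuntimeClass []string `json:"runtimeClass,omitempty"`
}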

The removal of the immediate Update() of the KataConfig resource after
modifying the runtimeClass field is also not strictly necessary, but it is
included because such Update()s in the middle of reconciliation can cause
a subsequent Update() to fail with "object has been modified".
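
The resulting pattern, sketched with a hypothetical helper: the list is only modified in memory during reconciliation, and the status is written once later in the reconcile pass.

package example

// recordRuntimeClass is a hypothetical helper: it appends a RuntimeClass name
// to the in-memory status list if it isn't there yet. No Status().Update() is
// issued here; the caller persists status once, at the end of reconciliation,
// so a later Update() doesn't fail with "object has been modified".
func recordRuntimeClass(recorded []string, name string) []string {
	for _, rc := range recorded {
		if rc == name {
			return recorded // already present
		}
	}
	return append(recorded, name)
}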

Fixes KATA-2294

[EDIT, greg]
Fixes KATA-2441 # Partially

Signed-off-by: Pavel Mores <pmores@redhat.com>

@jensfr (Contributor) commented Sep 22, 2023

Hi @pmores, patch looks straightforward. Thanks! Does it impact upgrading the operator? I believe you said it requires removing the kataconfig and then re-applying? Is that because of the change in the .status section of the crd?

@pmores (Contributor, Author) commented Sep 22, 2023

> Hi @pmores, patch looks straightforward. Thanks! Does it impact upgrading the operator? I believe you said it requires removing the kataconfig and then re-applying? Is that because of the change in the .status section of the crd?

Yes, exactly. Due to the earlier changes in kataconfig.status on this branch, this has likely already been the case.

@pmores force-pushed the support-multiple-runtimeclasses-in-kataconfig-status branch from 0b7aa34 to 3bdd0e7 on September 22, 2023 15:31
@gkurz (Member) left a comment

LGTM. Thanks @pmores!

@@ -751,12 +751,8 @@ func (r *KataConfigOpenShiftReconciler) createRuntimeClass(runtimeClassName stri
 		}
 	}

-	if r.kataConfig.Status.RuntimeClass == "" {
-		r.kataConfig.Status.RuntimeClass = runtimeClassName
-		err = r.Client.Status().Update(context.TODO(), r.kataConfig)
Inline review comment (Member):
I'm fine with this change as it partly addresses https://issues.redhat.com/browse/KATA-2441.

@gkurz (Member) commented Sep 22, 2023

> Hi @pmores, patch looks straightforward. Thanks! Does it impact upgrading the operator? I believe you said it requires removing the kataconfig and then re-applying? Is that because of the change in the .status section of the crd?
>
> Yes, exactly. Due to the earlier changes in kataconfig.status on this branch, this has likely already been the case.

This will need to be documented in the OSC upgrade guidelines.

@gkurz merged commit a3903ac into openshift:devel on Sep 22, 2023
pmores added a commit to pmores/sandboxed-containers-operator that referenced this pull request Sep 25, 2023
A chunk was unfortunately left out from PR openshift#344 by mistake.

We also take this opportunity to split the condition in the daemonset
creation error checking to get rid of a long-standing logging annoyance.
No error should be logged if getting the DS returns "not found".

Signed-off-by: Pavel Mores <pmores@redhat.com>
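
A rough sketch of that kind of split, assuming a controller-runtime client (the function, key and logging callback are illustrative, not the actual code): a "not found" from Get() is the normal create path and is not logged; anything else is a real error.

package example

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ensureDaemonSet is a hypothetical helper showing the split: a "not found"
// result from Get() just means the DaemonSet must be created and is not logged
// as an error; only other failures are.
func ensureDaemonSet(ctx context.Context, c client.Client, key types.NamespacedName,
	desired *appsv1.DaemonSet, logError func(error, string)) error {
	ds := &appsv1.DaemonSet{}
	err := c.Get(ctx, key, ds)
	if err == nil {
		return nil // DaemonSet already exists, nothing to do
	}
	if k8serrors.IsNotFound(err) {
		// Expected on first reconcile: create it, don't log an error.
		return c.Create(ctx, desired)
	}
	// Any other error is a genuine failure worth logging.
	logError(err, "failed to get DaemonSet")
	return err
}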
@pmores pmores mentioned this pull request Sep 25, 2023
pmores added a commit that referenced this pull request Sep 25, 2023
snir911 pushed a commit to snir911/sandboxed-containers-operator that referenced this pull request Oct 11, 2023
snir911 pushed a commit to snir911/sandboxed-containers-operator that referenced this pull request Oct 12, 2023
snir911 pushed a commit to snir911/sandboxed-containers-operator that referenced this pull request Oct 17, 2023
gkurz added a commit to gkurz/sandboxed-containers-operator that referenced this pull request Dec 7, 2023
PR openshift#344 changed the type of KataConfigStatus::RuntimeClass from string
to array of string. This change in the CRD isn't supported and prevents
upgrades from an older operator if a KataConfig is present. The new CSV
will stay in the `Pending` state forever and the following error is
reported by the install plan:

message: 'error validating existing CRs against new CRD''s schema for "kataconfigs.kataconfiguration.openshift.io":
      error validating custom resource against new schema for KataConfig /example-kataconfig:
      [].status.runtimeClass: Invalid value: "string": status.runtimeClass in body
      must be of type array: "string"'

Simply rename the field to avoid the issue.

Signed-off-by: Greg Kurz <groug@kaod.org>
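
A hypothetical sketch of the workaround (the replacement field name is illustrative): existing CRs simply lack the new key instead of carrying a string where the new schema expects an array, so validation during the upgrade no longer fails.

package example

// Sketch of the renamed status field. "RuntimeClasses"/"runtimeClasses" is an
// assumed name for the sake of the example; the old string-typed key is simply
// no longer part of the schema, so existing CRs validate cleanly.
type KataConfigStatus struct {
	// +optional
	RuntimeClasses []string `json:"runtimeClasses,omitempty"`
}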
beraldoleal pushed a commit to beraldoleal/sandboxed-containers-operator that referenced this pull request Jan 4, 2024