Support multiple runtimeclasses in kataconfig.status.runtimeClass #344
Conversation
Hi @pmores, patch looks straightforward. Thanks! Does it impact upgrading the operator? I believe you said it requires removing the kataconfig and then re-applying? Is that because of the change in the .status section of the crd?

Yes, exactly. Due to the earlier changes in kataconfig.status on this branch, this has likely already been the case.
Force-pushed 0b7aa34 to 3bdd0e7
LGTM. Thanks @pmores!
@@ -751,12 +751,8 @@ func (r *KataConfigOpenShiftReconciler) createRuntimeClass(runtimeClassName stri
 		}
 	}

-	if r.kataConfig.Status.RuntimeClass == "" {
-		r.kataConfig.Status.RuntimeClass = runtimeClassName
-		err = r.Client.Status().Update(context.TODO(), r.kataConfig)
I'm fine with this change as it partly addresses https://issues.redhat.com/browse/KATA-2441.
This will need to be documented in the OSC upgrade guidelines.
A chunk was unfortunately left out from PR openshift#344 by mistake. We also take this opportunity to split the condition in the daemonset creation error checking to get rid of a long-standing logging annoyance. No error should be logged if getting the DS returns "not found". Signed-off-by: Pavel Mores <pmores@redhat.com>
PR openshift#344 changed the type of KataConfigStatus::RuntimeClass from string to array of string. This change in the CRD isn't supported and prevents upgrades from an older operator if a KataConfig is present. The new CSV stays in the `Pending` state forever and the install plan reports the following error:

message: 'error validating existing CRs against new CRD''s schema for "kataconfigs.kataconfiguration.openshift.io": error validating custom resource against new schema for KataConfig /example-kataconfig: [].status.runtimeClass: Invalid value: "string": status.runtimeClass in body must be of type array: "string"'

Simply rename the field to avoid the issue.

Signed-off-by: Greg Kurz <groug@kaod.org>
This controller can now create multiple RuntimeClasses, for plain kata
and for peer pods. However, only the original kata RuntimeClass has ever
been accounted for in KataConfig.status.runtimeClass so far.
This is not merely a cosmetic problem. The controller uses
KataConfig.status.runtimeClass to find out whether there are any pods
using the kata runtime before it removes the runtime from a cluster on
uninstallation. Since the peer pods RuntimeClass was never represented
in the field so far, running peer pods were ignored on uninstallation.
As for implementation, KataConfig.status.runtimeClass is now optional
rather than required. This isn't really related to the core idea of this
commit; it works around the fact that kubebuilder doesn't seem to support
array fields with an empty array as the default value.
The removal of the KataConfig resource's immediate Update() after modifying
the runtimeClass field is also not strictly necessary, but it is included
anyway since such Update()s in the middle of reconciliation can cause
problems with a subsequent Update() ("object has been modified").
Fixes KATA-2294
[EDIT, greg]
Fixes KATA-2441 # Partially
Signed-off-by: Pavel Mores <pmores@redhat.com>