✨ Update to Kubernetes 1.30 #3140
Conversation
🥳 🔥 🎆
/retest
/lgtm
LGTM label has been added. Git tree hash: 2c7ea837cae12d637bf20e30cd8bef62fb6bd5bc
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mjudeikis. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@@ -261,6 +272,16 @@ var BuiltInAPIs = []internalapis.InternalAPI{
		Instance:      &admissionregistrationv1alpha1.ValidatingAdmissionPolicyBinding{},
		ResourceScope: apiextensionsv1.ClusterScoped,
	},
	{
		Names: apiextensionsv1.CustomResourceDefinitionNames{
			Plural: "validatingadmissionpolicybindings",
I wonder whether we better want a quick e2e test that checks that the policies work as expected. Namely, the should of course apply to objects in the same logical cluster, but also next to an APIExport? This would mean we have to replicate them via the cache server.
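A minimal sketch of what such an in-workspace check could look like (the cowboys example API, the CEL expression, and the helper names here are assumptions based on snippets quoted later in this thread, not the final test):

```go
package admission_test

import (
	"context"
	"testing"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// testPolicyRejectsBadCowboy sketches the shape of the check: create a
// ValidatingAdmissionPolicy plus binding in one workspace and expect a
// violating object to be rejected there. kubeClient stands in for kcp's
// per-cluster client (kubeClusterClient.Cluster(ws1Path) in the real test).
func testPolicyRejectsBadCowboy(ctx context.Context, t *testing.T, kubeClient kubernetes.Interface) {
	policy := &admissionregistrationv1.ValidatingAdmissionPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "reject-bad-cowboys"},
		Spec: admissionregistrationv1.ValidatingAdmissionPolicySpec{
			MatchConstraints: &admissionregistrationv1.MatchResources{
				ResourceRules: []admissionregistrationv1.NamedRuleWithOperations{{
					RuleWithOperations: admissionregistrationv1.RuleWithOperations{
						Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
						Rule: admissionregistrationv1.Rule{
							APIGroups:   []string{"wildwest.dev"},
							APIVersions: []string{"v1alpha1"},
							Resources:   []string{"cowboys"},
						},
					},
				}},
			},
			// The validated field is an assumption chosen for illustration.
			Validations: []admissionregistrationv1.Validation{{Expression: "object.spec.intent != 'bad'"}},
		},
	}
	if _, err := kubeClient.AdmissionregistrationV1().ValidatingAdmissionPolicies().Create(ctx, policy, metav1.CreateOptions{}); err != nil {
		t.Fatalf("failed to create policy: %v", err)
	}

	binding := &admissionregistrationv1.ValidatingAdmissionPolicyBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "reject-bad-cowboys"},
		Spec: admissionregistrationv1.ValidatingAdmissionPolicyBindingSpec{
			PolicyName:        "reject-bad-cowboys",
			ValidationActions: []admissionregistrationv1.ValidationAction{admissionregistrationv1.Deny},
		},
	}
	if _, err := kubeClient.AdmissionregistrationV1().ValidatingAdmissionPolicyBindings().Create(ctx, binding, metav1.CreateOptions{}); err != nil {
		t.Fatalf("failed to create policy binding: %v", err)
	}

	// A follow-up step (not shown) would create a violating cowboy in the same
	// workspace and require that the API server eventually rejects it.
}
```

Checking the replication path would additionally assert that the policy also applies to objects created next to an APIExport, which is where the cache server comes in.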
I definitely want an e2e test for this (there's a note on it in the PR description), but I didn't want to squeeze it into this PR. Do you think it would be better to do this immediately?
If the API is enabled, we should test it. We can disable it and postpone to followup.
Test has been added. It was in fact not working; I had to fix things on both the k8s and kcp sides. Key commits:
	}

	// badCowboy violates the policy, so it should be rejected
	t.Logf("Creating bad cowboy resource in first logical cluster")
style: combine both Log and comment. Having both is noisy.
t.Logf("Creating bad cowboy in first workspace should fail")
Modulo the e2e style comments, lgtm.
Force-pushed from 9d2ab4d to 291c41d.
Looks like the failing tests aren't a flake:
This isn't always triggered, but it needs to be addressed.
Addressed the previous error by reworking the internal informer/lister built by the generic policy plugin framework. Should work now. 🤞🏻
@@ -136,7 +141,8 @@ func (k *KubeValidatingAdmissionPolicy) Validate(ctx context.Context, a admissio
 		return err
 	}

-	return delegate.Validate(ctx, a, o)
+	err = delegate.Validate(ctx, a, o)
+	return err
nit: the previous was fine
plugin.SetSourceFactory(func(_ informers.SharedInformerFactory, client kubernetes.Interface, dynamicClient dynamic.Interface, restMapper meta.RESTMapper, clusterName logicalcluster.Name) generic.Source[validating.PolicyHook] {
	return generic.NewPolicySource(
		k.globalKubeSharedInformerFactory.Admissionregistration().V1().ValidatingAdmissionPolicies().Informer().Cluster(clusterName),
		k.globalKubeSharedInformerFactory.Admissionregistration().V1().ValidatingAdmissionPolicyBindings().Informer().Cluster(clusterName),
Did you add replication of policies? I would have expected this to fail if not. The local informers only have the local shard's objects.
This is where replication is configured. Through InstallIndexers, this is wired into (s *Server) addIndexersToInformers, which returns the list of replicated GVRs. This is a relatively recent change made in #3139, which was necessary so that informer installation doesn't create a race condition on leader election changes.
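Purely as a hypothetical illustration of that wiring (none of these names are kcp's actual internals), the pattern amounts to components installing indexers while also declaring which GVRs they need replicated, with the server aggregating those declarations:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// indexerInstaller is a made-up stand-in for a component that registers
// informer indexers and reports the GVRs it needs replicated across shards.
type indexerInstaller interface {
	InstallIndexers() []schema.GroupVersionResource
}

type validatingAdmissionPolicyIndexers struct{}

func (validatingAdmissionPolicyIndexers) InstallIndexers() []schema.GroupVersionResource {
	// Policies and bindings must be replicated so they are visible on every shard.
	return []schema.GroupVersionResource{
		{Group: "admissionregistration.k8s.io", Version: "v1", Resource: "validatingadmissionpolicies"},
		{Group: "admissionregistration.k8s.io", Version: "v1", Resource: "validatingadmissionpolicybindings"},
	}
}

func main() {
	var replicated []schema.GroupVersionResource
	for _, installer := range []indexerInstaller{validatingAdmissionPolicyIndexers{}} {
		replicated = append(replicated, installer.InstallIndexers()...)
	}
	fmt.Println("GVRs to replicate:", replicated)
}
```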
ack
		},
	}
	policy, err = kubeClusterClient.Cluster(ws1Path).AdmissionregistrationV1().ValidatingAdmissionPolicies().Create(ctx, policy, metav1.CreateOptions{})
nit/style: no empty line before error checks.
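Concretely, for the quoted snippet that would mean attaching the check directly to the call (a sketch reusing the line above):

```go
	// Error check sits immediately below the call that produced the error,
	// with no blank line in between.
	policy, err = kubeClusterClient.Cluster(ws1Path).AdmissionregistrationV1().ValidatingAdmissionPolicies().Create(ctx, policy, metav1.CreateOptions{})
	if err != nil {
		t.Fatalf("failed to create ValidatingAdmissionPolicy: %v", err)
	}
```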
			return true
		}
	}
	t.Logf("Unexpected error when trying to create bad cowboy: %s", err)
nit: a success is not unexpected while waiting for the policy to apply.
This is within a check for err != nil, so it only logs any error that's not the policy rejection. If resource creation succeeds, it just returns false but doesn't log this line.
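For readers following along, the polling pattern under discussion looks roughly like this (a sketch: cowboyClient, the timeouts, and the exact rejection-message check are assumptions, not the test's actual code):

```go
	require.Eventually(t, func() bool {
		_, err := cowboyClient.Cluster(ws1Path).Create(ctx, badCowboy, metav1.CreateOptions{})
		if err != nil {
			// Assumption: the ValidatingAdmissionPolicy rejection contains
			// "denied request"; any other error is logged as unexpected.
			if strings.Contains(err.Error(), "denied request") {
				return true // the policy kicked in and rejected the bad cowboy
			}
			t.Logf("Unexpected error when trying to create bad cowboy: %s", err)
			return false
		}
		// Creation succeeded, which only means the policy isn't active yet; retry.
		return false
	}, wait.ForeverTestTimeout, 100*time.Millisecond)
```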
…add e2e conformance test Signed-off-by: Marvin Beckers <marvin@kubermatic.com>
Signed-off-by: Marvin Beckers <marvin@kubermatic.com>
Signed-off-by: Marvin Beckers <marvin@kubermatic.com>
/retest
flake.
/lgtm
LGTM label has been added. Git tree hash: 4e71586e672cb06c291d6f5a0b9ff4367f665ba1
/hold cancel
/kind feature
Summary
This updates kcp to Kubernetes 1.30.0. The corresponding branch of our Kubernetes fork is kcp-1.30. For the most part, this PR adds new APIs and removes feature gates that upstream has removed.
The bigger changes in this PR are:
The following bigger TODOs exist, but I would like to tackle them separately:
- --authentication-config, which seems like a hugely useful feature for a multi-tenant API server like KCP.

Related issue(s)
Fixes #3068
Release Notes