Write ReplicaSet Replace and Patch Test +2 Endpoints #99380
Conversation
/test pull-kubernetes-e2e-kind Unrelated tests failed
/assign @soltysh @mattfarina
/triage accepted
framework.ExpectNoError(err, "failed to Marshal Deployment JSON patch") | ||
_, err = f.ClientSet.AppsV1().ReplicaSets(ns).Patch(context.TODO(), rsName, types.StrategicMergePatchType, []byte(rsPatch), metav1.PatchOptions{}) | ||
framework.ExpectNoError(err, "failed to patch ReplicaSet") | ||
- Do we want to verify that the patch applied correctly with a get?
- Do we want to wait (e2ereplicaset.WaitForReadyReplicaSet)?
I do not think SIG Apps wants e2ereplicaset.WaitForReadyReplicaSet in the tests; last time I had to remove it.
Your thoughts, @soltysh?
@Riaankl we can do the first one at least then?
Yes, sorry, those are two different issues. A get makes sense.
Added and tested it. Thank you.
Force-pushed from c1cc339 to 4cb87a0
test/e2e/apps/replica_set.go (Outdated)
rs, err = c.AppsV1().ReplicaSets(ns).Get(context.TODO(), rsName, metav1.GetOptions{})
framework.ExpectNoError(err, "Failed to get replicaset resource: %v", err)
framework.ExpectEqual(*(rs.Spec.Replicas), int32(3), "replicaset should have 3 replicas")
Use rsPatchReplicas instead of int32(3)?
Also, do we want to cross-check that rsPatchImage was applied too?
Updated to use rsPatchReplicas instead of int32(3).
Just a little stuck on how to check rsPatchImage.
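(For reference, one way to cross-check rsPatchImage might look like the following sketch; it assumes the patch targets the first, and only, container of the pod template and is not necessarily the PR's final code.)

rs, err = c.AppsV1().ReplicaSets(ns).Get(context.TODO(), rsName, metav1.GetOptions{})
framework.ExpectNoError(err, "failed to get ReplicaSet after patch")
// The strategic merge patch replaces the container image, so the first
// container of the pod template should now report rsPatchImage.
framework.ExpectEqual(rs.Spec.Template.Spec.Containers[0].Image, rsPatchImage, "replicaset should have the patched image")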
Got the check for rsPatchImage in and working.
Thanks for the review, @dims.
Force-pushed from 31b2cc1 to 3e3b484
Force-pushed from 266b672 to 3e6ba4d
LGTM, over to you @soltysh
/test pull-kubernetes-e2e-gce-ubuntu-containerd Unrelated
This is good for now, but before we promote these endpoint/verb tests to conformance I'd like to see an effort that results in API tests similar to:
kubernetes/test/e2e/network/ingress.go, lines 918 to 1167 at a7fe0bf
framework.ConformanceIt("should support creating Ingress API operations", func() {
	// Setup
	ns := f.Namespace.Name
	ingVersion := "v1"
	ingClient := f.ClientSet.NetworkingV1().Ingresses(ns)
	prefixPathType := networkingv1.PathTypePrefix
	serviceBackend := &networkingv1.IngressServiceBackend{
		Name: "default-backend",
		Port: networkingv1.ServiceBackendPort{
			Name:   "",
			Number: 8080,
		},
	}
	defaultBackend := networkingv1.IngressBackend{
		Service: serviceBackend,
	}
	ingTemplate := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "e2e-example-ing",
			Labels: map[string]string{
				"special-label": f.UniqueName,
			}},
		Spec: networkingv1.IngressSpec{
			DefaultBackend: &defaultBackend,
			Rules: []networkingv1.IngressRule{
				{
					Host: "foo.bar.com",
					IngressRuleValue: networkingv1.IngressRuleValue{
						HTTP: &networkingv1.HTTPIngressRuleValue{
							Paths: []networkingv1.HTTPIngressPath{{
								Path:     "/",
								PathType: &prefixPathType,
								Backend: networkingv1.IngressBackend{
									Service: &networkingv1.IngressServiceBackend{
										Name: "test-backend",
										Port: networkingv1.ServiceBackendPort{
											Number: 8080,
										},
									},
								},
							}},
						},
					},
				},
			},
		},
		Status: networkingv1.IngressStatus{LoadBalancer: v1.LoadBalancerStatus{}},
	}
	// Discovery
	ginkgo.By("getting /apis")
	{
		discoveryGroups, err := f.ClientSet.Discovery().ServerGroups()
		framework.ExpectNoError(err)
		found := false
		for _, group := range discoveryGroups.Groups {
			if group.Name == networkingv1beta1.GroupName {
				for _, version := range group.Versions {
					if version.Version == ingVersion {
						found = true
						break
					}
				}
			}
		}
		framework.ExpectEqual(found, true, fmt.Sprintf("expected networking API group/version, got %#v", discoveryGroups.Groups))
	}
	ginkgo.By("getting /apis/networking.k8s.io")
	{
		group := &metav1.APIGroup{}
		err := f.ClientSet.Discovery().RESTClient().Get().AbsPath("/apis/networking.k8s.io").Do(context.TODO()).Into(group)
		framework.ExpectNoError(err)
		found := false
		for _, version := range group.Versions {
			if version.Version == ingVersion {
				found = true
				break
			}
		}
		framework.ExpectEqual(found, true, fmt.Sprintf("expected networking API version, got %#v", group.Versions))
	}
	ginkgo.By("getting /apis/networking.k8s.io" + ingVersion)
	{
		resources, err := f.ClientSet.Discovery().ServerResourcesForGroupVersion(networkingv1.SchemeGroupVersion.String())
		framework.ExpectNoError(err)
		foundIngress := false
		for _, resource := range resources.APIResources {
			switch resource.Name {
			case "ingresses":
				foundIngress = true
			}
		}
		framework.ExpectEqual(foundIngress, true, fmt.Sprintf("expected ingresses, got %#v", resources.APIResources))
	}
	// Ingress resource create/read/update/watch verbs
	ginkgo.By("creating")
	_, err := ingClient.Create(context.TODO(), ingTemplate, metav1.CreateOptions{})
	framework.ExpectNoError(err)
	_, err = ingClient.Create(context.TODO(), ingTemplate, metav1.CreateOptions{})
	framework.ExpectNoError(err)
	createdIngress, err := ingClient.Create(context.TODO(), ingTemplate, metav1.CreateOptions{})
	framework.ExpectNoError(err)
	ginkgo.By("getting")
	gottenIngress, err := ingClient.Get(context.TODO(), createdIngress.Name, metav1.GetOptions{})
	framework.ExpectNoError(err)
	framework.ExpectEqual(gottenIngress.UID, createdIngress.UID)
	ginkgo.By("listing")
	ings, err := ingClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	framework.ExpectEqual(len(ings.Items), 3, "filtered list should have 3 items")
	ginkgo.By("watching")
	framework.Logf("starting watch")
	ingWatch, err := ingClient.Watch(context.TODO(), metav1.ListOptions{ResourceVersion: ings.ResourceVersion, LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	// Test cluster-wide list and watch
	clusterIngClient := f.ClientSet.NetworkingV1().Ingresses("")
	ginkgo.By("cluster-wide listing")
	clusterIngs, err := clusterIngClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	framework.ExpectEqual(len(clusterIngs.Items), 3, "filtered list should have 3 items")
	ginkgo.By("cluster-wide watching")
	framework.Logf("starting watch")
	_, err = clusterIngClient.Watch(context.TODO(), metav1.ListOptions{ResourceVersion: ings.ResourceVersion, LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	ginkgo.By("patching")
	patchedIngress, err := ingClient.Patch(context.TODO(), createdIngress.Name, types.MergePatchType, []byte(`{"metadata":{"annotations":{"patched":"true"}}}`), metav1.PatchOptions{})
	framework.ExpectNoError(err)
	framework.ExpectEqual(patchedIngress.Annotations["patched"], "true", "patched object should have the applied annotation")
	ginkgo.By("updating")
	var ingToUpdate, updatedIngress *networkingv1.Ingress
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ingToUpdate, err = ingClient.Get(context.TODO(), createdIngress.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ingToUpdate.Annotations["updated"] = "true"
		updatedIngress, err = ingClient.Update(context.TODO(), ingToUpdate, metav1.UpdateOptions{})
		return err
	})
	framework.ExpectNoError(err)
	framework.ExpectEqual(updatedIngress.Annotations["updated"], "true", "updated object should have the applied annotation")
	framework.Logf("waiting for watch events with expected annotations")
	for sawAnnotations := false; !sawAnnotations; {
		select {
		case evt, ok := <-ingWatch.ResultChan():
			framework.ExpectEqual(ok, true, "watch channel should not close")
			framework.ExpectEqual(evt.Type, watch.Modified)
			watchedIngress, isIngress := evt.Object.(*networkingv1.Ingress)
			framework.ExpectEqual(isIngress, true, fmt.Sprintf("expected Ingress, got %T", evt.Object))
			if watchedIngress.Annotations["patched"] == "true" {
				framework.Logf("saw patched and updated annotations")
				sawAnnotations = true
				ingWatch.Stop()
			} else {
				framework.Logf("missing expected annotations, waiting: %#v", watchedIngress.Annotations)
			}
		case <-time.After(wait.ForeverTestTimeout):
			framework.Fail("timed out waiting for watch event")
		}
	}
	// /status subresource operations
	ginkgo.By("patching /status")
	lbStatus := v1.LoadBalancerStatus{
		Ingress: []v1.LoadBalancerIngress{{IP: "169.1.1.1"}},
	}
	lbStatusJSON, err := json.Marshal(lbStatus)
	framework.ExpectNoError(err)
	patchedStatus, err := ingClient.Patch(context.TODO(), createdIngress.Name, types.MergePatchType,
		[]byte(`{"metadata":{"annotations":{"patchedstatus":"true"}},"status":{"loadBalancer":`+string(lbStatusJSON)+`}}`),
		metav1.PatchOptions{}, "status")
	framework.ExpectNoError(err)
	framework.ExpectEqual(patchedStatus.Status.LoadBalancer, lbStatus, "patched object should have the applied loadBalancer status")
	framework.ExpectEqual(patchedStatus.Annotations["patchedstatus"], "true", "patched object should have the applied annotation")
	ginkgo.By("updating /status")
	var statusToUpdate, updatedStatus *networkingv1.Ingress
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		statusToUpdate, err = ingClient.Get(context.TODO(), createdIngress.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		statusToUpdate.Status.LoadBalancer = v1.LoadBalancerStatus{
			Ingress: []v1.LoadBalancerIngress{{IP: "169.1.1.2"}},
		}
		updatedStatus, err = ingClient.UpdateStatus(context.TODO(), statusToUpdate, metav1.UpdateOptions{})
		return err
	})
	framework.ExpectNoError(err)
	framework.ExpectEqual(updatedStatus.Status.LoadBalancer, statusToUpdate.Status.LoadBalancer, fmt.Sprintf("updated object expected to have updated loadbalancer status %#v, got %#v", statusToUpdate.Status.LoadBalancer, updatedStatus.Status.LoadBalancer))
	ginkgo.By("get /status")
	ingResource := schema.GroupVersionResource{Group: "networking.k8s.io", Version: ingVersion, Resource: "ingresses"}
	gottenStatus, err := f.DynamicClient.Resource(ingResource).Namespace(ns).Get(context.TODO(), createdIngress.Name, metav1.GetOptions{}, "status")
	framework.ExpectNoError(err)
	statusUID, _, err := unstructured.NestedFieldCopy(gottenStatus.Object, "metadata", "uid")
	framework.ExpectNoError(err)
	framework.ExpectEqual(string(createdIngress.UID), statusUID, fmt.Sprintf("createdIngress.UID: %v expected to match statusUID: %v ", createdIngress.UID, statusUID))
	// Ingress resource delete operations
	ginkgo.By("deleting")
	expectFinalizer := func(ing *networkingv1.Ingress, msg string) {
		framework.ExpectNotEqual(ing.DeletionTimestamp, nil, fmt.Sprintf("expected deletionTimestamp, got nil on step: %q, ingress: %+v", msg, ing))
		framework.ExpectEqual(len(ing.Finalizers) > 0, true, fmt.Sprintf("expected finalizers on ingress, got none on step: %q, ingress: %+v", msg, ing))
	}
	err = ingClient.Delete(context.TODO(), createdIngress.Name, metav1.DeleteOptions{})
	framework.ExpectNoError(err)
	ing, err := ingClient.Get(context.TODO(), createdIngress.Name, metav1.GetOptions{})
	// If ingress controller does not support finalizers, we expect a 404. Otherwise we validate finalizer behavior.
	if err == nil {
		expectFinalizer(ing, "deleting createdIngress")
	} else {
		framework.ExpectEqual(apierrors.IsNotFound(err), true, fmt.Sprintf("expected 404, got %v", err))
	}
	ings, err = ingClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	// Should have <= 3 items since some ingresses might not have been deleted yet due to finalizers
	framework.ExpectEqual(len(ings.Items) <= 3, true, "filtered list should have <= 3 items")
	// Validate finalizer on the deleted ingress
	for _, ing := range ings.Items {
		if ing.Namespace == createdIngress.Namespace && ing.Name == createdIngress.Name {
			expectFinalizer(&ing, "listing after deleting createdIngress")
		}
	}
	ginkgo.By("deleting a collection")
	err = ingClient.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	ings, err = ingClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	// Should have <= 3 items since some ingresses might not have been deleted yet due to finalizers
	framework.ExpectEqual(len(ings.Items) <= 3, true, "filtered list should have <= 3 items")
	// Validate finalizers
	for _, ing := range ings.Items {
		expectFinalizer(&ing, "deleting ingress collection")
	}
})
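To make the suggestion concrete, a rough skeleton of what an analogous ReplicaSet API test could look like. This is only a sketch, not code from this PR: the rsTemplate contents, the "special-label" convention, the container image helper, and the omitted steps are assumptions modelled on the Ingress test quoted above.

framework.ConformanceIt("should support ReplicaSet API operations", func() {
	ns := f.Namespace.Name
	rsClient := f.ClientSet.AppsV1().ReplicaSets(ns)
	rsLabels := map[string]string{"special-label": f.UniqueName}
	// Minimal template; a real test would mirror the Ingress template above.
	rsTemplate := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "e2e-example-rs", Labels: rsLabels},
		Spec: appsv1.ReplicaSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: rsLabels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: rsLabels},
				Spec: v1.PodSpec{
					// imageutils.GetE2EImage(imageutils.Pause) is the usual e2e pause image helper.
					Containers: []v1.Container{{Name: "e2e-example", Image: imageutils.GetE2EImage(imageutils.Pause)}},
				},
			},
		},
	}

	ginkgo.By("creating")
	createdRS, err := rsClient.Create(context.TODO(), rsTemplate, metav1.CreateOptions{})
	framework.ExpectNoError(err)

	ginkgo.By("getting")
	gottenRS, err := rsClient.Get(context.TODO(), createdRS.Name, metav1.GetOptions{})
	framework.ExpectNoError(err)
	framework.ExpectEqual(gottenRS.UID, createdRS.UID)

	ginkgo.By("listing")
	rsList, err := rsClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)

	ginkgo.By("watching")
	rsWatch, err := rsClient.Watch(context.TODO(), metav1.ListOptions{ResourceVersion: rsList.ResourceVersion, LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	defer rsWatch.Stop()

	ginkgo.By("patching")
	patchedRS, err := rsClient.Patch(context.TODO(), createdRS.Name, types.MergePatchType,
		[]byte(`{"metadata":{"annotations":{"patched":"true"}}}`), metav1.PatchOptions{})
	framework.ExpectNoError(err)
	framework.ExpectEqual(patchedRS.Annotations["patched"], "true", "patched object should have the applied annotation")

	// Updating, the /status subresource verbs, delete, and deleteCollection
	// would follow the same shape as the Ingress test quoted above.
})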
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Riaankl, soltysh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/test pull-kubernetes-e2e-gce-ubuntu-containerd Unrelated flake
/test pull-kubernetes-e2e-kind
/test pull-kubernetes-e2e-kind Unrelated flake
/retest
Review the full test history for this PR. Silence the bot with an /lgtm cancel or /hold comment for consistent failures.
/test pull-kubernetes-node-e2e Unrelated flake
/test pull-kubernetes-node-e2e Unrelated flake
Unrelated flake; will give it some time and retest.
/retest
Review the full test history for this PR. Silence the bot with an /lgtm cancel or /hold comment for consistent failures.
Unrelated flake
/test pull-kubernetes-node-e2e
/retest
Review the full test history for this PR. Silence the bot with an /lgtm cancel or /hold comment for consistent failures.
Unrelated flakes
/test pull-kubernetes-e2e-kind
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
This PR adds a test covering the following untested endpoints:
Which issue(s) this PR fixes:
Fixes #99134
Testgrid Link:
Link
Special notes for your reviewer:
Adds test coverage for 2 more endpoints (good for conformance)
Does this PR introduce a user-facing change?:
Release note:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/sig testing
/sig architecture
/sig apps
/area conformance