
Write ReplicaSet Replace and Patch Test +2 Endpoints #99380

Merged
merged 1 commit into kubernetes:master from the Riaankl_ReplicaSet branch on Feb 25, 2021

Conversation

@Riaankl Riaankl (Contributor) commented Feb 23, 2021

What type of PR is this?
/kind cleanup

What this PR does / why we need it:
This PR adds a test covering the following untested endpoints (a sketch of the corresponding client-go calls follows the list):

  • replaceAppsV1NamespacedReplicaSet
  • patchAppsV1NamespacedReplicaSet
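Roughly, these two endpoints map to the typed client's Update and Patch calls. The snippet below is a minimal illustrative sketch of that mapping inside the e2e framework; ns, rsName, and the patch payload are placeholders, not the PR's exact test code:

// Replace maps to PUT /apis/apps/v1/namespaces/{namespace}/replicasets/{name},
// i.e. Update on a previously fetched object.
rs, err := f.ClientSet.AppsV1().ReplicaSets(ns).Get(context.TODO(), rsName, metav1.GetOptions{})
framework.ExpectNoError(err, "failed to get ReplicaSet")
replicas := int32(2) // hypothetical change, just to make the replace observable
rs.Spec.Replicas = &replicas
_, err = f.ClientSet.AppsV1().ReplicaSets(ns).Update(context.TODO(), rs, metav1.UpdateOptions{})
framework.ExpectNoError(err, "failed to replace (update) ReplicaSet")

// Patch maps to PATCH /apis/apps/v1/namespaces/{namespace}/replicasets/{name}.
patch := []byte(`{"metadata":{"labels":{"e2e-patched":"true"}}}`)
_, err = f.ClientSet.AppsV1().ReplicaSets(ns).Patch(context.TODO(), rsName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
framework.ExpectNoError(err, "failed to patch ReplicaSet")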

Which issue(s) this PR fixes:
Fixes #99134

Testgrid Link:
Link

Special notes for your reviewer:
Adds +2 endpoint test coverage (good for conformance)

Does this PR introduce a user-facing change?:

NONE

Release note:

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

NONE

/sig testing
/sig architecture
/sig apps
/area conformance

@k8s-ci-robot k8s-ci-robot added the release-note-none, kind/cleanup, size/M, sig/testing, sig/architecture, sig/apps, cncf-cla: yes, area/conformance, needs-triage, and needs-priority labels on Feb 23, 2021 (label descriptions appear under Labels below).
@Riaankl Riaankl (Contributor, Author) commented Feb 23, 2021

/test pull-kubernetes-e2e-kind

Unrelated tests failed

  • Kubernetes e2e suite: [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 (5m37s)
  • Kubernetes e2e suite: [sig-cli] Kubectl client Simple pod should return command exit codes (4m18s)
  • Kubernetes e2e suite: [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on (5m36s)
  • Kubernetes e2e suite: [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] (5m5s)
  • Kubernetes e2e suite: [sig-apps] Deployment iterative rollouts should eventually progress (5m49s)
  • Kubernetes e2e suite: [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]

@Riaankl Riaankl added this to PRs Needing Review in conformance-definition Feb 23, 2021
@Riaankl Riaankl (Contributor, Author) commented Feb 23, 2021

/assign @soltysh @mattfarina

@Riaankl Riaankl (Contributor, Author) commented Feb 23, 2021

/triage accepted

@k8s-ci-robot k8s-ci-robot added the triage/accepted label (indicates an issue or PR is ready to be actively worked on) and removed the needs-triage label (indicates an issue or PR lacks a `triage/foo` label and requires one) on Feb 23, 2021.
test/e2e/apps/replica_set.go: 3 outdated review threads (resolved)
framework.ExpectNoError(err, "failed to Marshal Deployment JSON patch")
_, err = f.ClientSet.AppsV1().ReplicaSets(ns).Patch(context.TODO(), rsName, types.StrategicMergePatchType, []byte(rsPatch), metav1.PatchOptions{})
framework.ExpectNoError(err, "failed to patch ReplicaSet")

Member:

  • Do we want to verify that the patch applied correctly with a get?
  • Do we want to wait (e2ereplicaset.WaitForReadyReplicaSet)?
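(For reference, rough sketches of the two options; the wait helper's signature is assumed from the e2e framework of that era, so treat both as illustrative.)

// Option 1: read the object back and assert on the patched fields.
rs, err := f.ClientSet.AppsV1().ReplicaSets(ns).Get(context.TODO(), rsName, metav1.GetOptions{})
framework.ExpectNoError(err, "failed to get patched ReplicaSet")

// Option 2: wait for the ReplicaSet to become ready (signature assumed):
// err = e2ereplicaset.WaitForReadyReplicaSet(f.ClientSet, ns, rsName)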

Contributor Author:

I do not think SIG Apps wants e2ereplicaset.WaitForReadyReplicaSet in these tests; last time I had to remove it.
Your thoughts, @soltysh?

Member:

@Riaankl can we at least do the first one, then?

Contributor Author:

Yes, sorry, those are two different issues. A get makes sense.
I added and tested it.
Thank you.

@Riaankl Riaankl force-pushed the Riaankl_ReplicaSet branch 3 times, most recently from c1cc339 to 4cb87a0 on February 24, 2021 at 18:17

rs, err = c.AppsV1().ReplicaSets(ns).Get(context.TODO(), rsName, metav1.GetOptions{})
framework.ExpectNoError(err, "Failed to get replicaset resource: %v", err)
framework.ExpectEqual(*(rs.Spec.Replicas), int32(3), "replicaset should have 3 replicas")
Member:

Use rsPatchReplicas instead of int32(3)?

Also, do we want to cross-check whether rsPatchImage was applied too?

Contributor Author:

Updated to use rsPatchReplicas instead of int32(3).
I'm just a little stuck on how to check rsPatchImage.

Contributor Author:

Got the check for rsPatchImage in and working.
Thanks for the review @dims
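
For reference, such a check could look roughly like the sketch below (illustrative only; the exact code that landed may differ slightly, and it assumes a single container in the pod template):

rs, err = c.AppsV1().ReplicaSets(ns).Get(context.TODO(), rsName, metav1.GetOptions{})
framework.ExpectNoError(err, "failed to get ReplicaSet after patch")
// assert the pod template now carries the patched container image
framework.ExpectEqual(rs.Spec.Template.Spec.Containers[0].Image, rsPatchImage, "replicaset should have the patched container image")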

@dims dims (Member) commented Feb 24, 2021

LGTM, over to you @soltysh.

@Riaankl Riaankl (Contributor, Author) commented Feb 24, 2021

/test pull-kubernetes-e2e-gce-ubuntu-containerd

Unrelated Network and Storage test flakes

  • Kubernetes e2e suite: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly] (7m1s)
  • Kubernetes e2e suite: [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector (2m44s)
  • Kubernetes e2e suite: [sig-network] Networking Granular Checks: Services should update endpoints: http (3m2s)
  • Kubernetes e2e suite: [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols (2m49s)
  • Kubernetes e2e suite: [k8s.io] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] (4m24s)
  • Kubernetes e2e suite: [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory (5m34s)
  • Kubernetes e2e suite: [sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly] (2m57s)
  • e2e.go: Test (22m51s): error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]\|\[Serial\]\|\[Disruptive\]\|\[Flaky\]\|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1

@soltysh soltysh (Contributor) left a comment

This is good for now, but before we promote these endpoint/verb tests to conformance I'd like to see an effort that results in API tests similar to:

framework.ConformanceIt("should support creating Ingress API operations", func() {
	// Setup
	ns := f.Namespace.Name
	ingVersion := "v1"
	ingClient := f.ClientSet.NetworkingV1().Ingresses(ns)
	prefixPathType := networkingv1.PathTypePrefix
	serviceBackend := &networkingv1.IngressServiceBackend{
		Name: "default-backend",
		Port: networkingv1.ServiceBackendPort{
			Name:   "",
			Number: 8080,
		},
	}
	defaultBackend := networkingv1.IngressBackend{
		Service: serviceBackend,
	}
	ingTemplate := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "e2e-example-ing",
			Labels: map[string]string{
				"special-label": f.UniqueName,
			}},
		Spec: networkingv1.IngressSpec{
			DefaultBackend: &defaultBackend,
			Rules: []networkingv1.IngressRule{
				{
					Host: "foo.bar.com",
					IngressRuleValue: networkingv1.IngressRuleValue{
						HTTP: &networkingv1.HTTPIngressRuleValue{
							Paths: []networkingv1.HTTPIngressPath{{
								Path:     "/",
								PathType: &prefixPathType,
								Backend: networkingv1.IngressBackend{
									Service: &networkingv1.IngressServiceBackend{
										Name: "test-backend",
										Port: networkingv1.ServiceBackendPort{
											Number: 8080,
										},
									},
								},
							}},
						},
					},
				},
			},
		},
		Status: networkingv1.IngressStatus{LoadBalancer: v1.LoadBalancerStatus{}},
	}
	// Discovery
	ginkgo.By("getting /apis")
	{
		discoveryGroups, err := f.ClientSet.Discovery().ServerGroups()
		framework.ExpectNoError(err)
		found := false
		for _, group := range discoveryGroups.Groups {
			if group.Name == networkingv1beta1.GroupName {
				for _, version := range group.Versions {
					if version.Version == ingVersion {
						found = true
						break
					}
				}
			}
		}
		framework.ExpectEqual(found, true, fmt.Sprintf("expected networking API group/version, got %#v", discoveryGroups.Groups))
	}
	ginkgo.By("getting /apis/networking.k8s.io")
	{
		group := &metav1.APIGroup{}
		err := f.ClientSet.Discovery().RESTClient().Get().AbsPath("/apis/networking.k8s.io").Do(context.TODO()).Into(group)
		framework.ExpectNoError(err)
		found := false
		for _, version := range group.Versions {
			if version.Version == ingVersion {
				found = true
				break
			}
		}
		framework.ExpectEqual(found, true, fmt.Sprintf("expected networking API version, got %#v", group.Versions))
	}
	ginkgo.By("getting /apis/networking.k8s.io" + ingVersion)
	{
		resources, err := f.ClientSet.Discovery().ServerResourcesForGroupVersion(networkingv1.SchemeGroupVersion.String())
		framework.ExpectNoError(err)
		foundIngress := false
		for _, resource := range resources.APIResources {
			switch resource.Name {
			case "ingresses":
				foundIngress = true
			}
		}
		framework.ExpectEqual(foundIngress, true, fmt.Sprintf("expected ingresses, got %#v", resources.APIResources))
	}
	// Ingress resource create/read/update/watch verbs
	ginkgo.By("creating")
	_, err := ingClient.Create(context.TODO(), ingTemplate, metav1.CreateOptions{})
	framework.ExpectNoError(err)
	_, err = ingClient.Create(context.TODO(), ingTemplate, metav1.CreateOptions{})
	framework.ExpectNoError(err)
	createdIngress, err := ingClient.Create(context.TODO(), ingTemplate, metav1.CreateOptions{})
	framework.ExpectNoError(err)
	ginkgo.By("getting")
	gottenIngress, err := ingClient.Get(context.TODO(), createdIngress.Name, metav1.GetOptions{})
	framework.ExpectNoError(err)
	framework.ExpectEqual(gottenIngress.UID, createdIngress.UID)
	ginkgo.By("listing")
	ings, err := ingClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	framework.ExpectEqual(len(ings.Items), 3, "filtered list should have 3 items")
	ginkgo.By("watching")
	framework.Logf("starting watch")
	ingWatch, err := ingClient.Watch(context.TODO(), metav1.ListOptions{ResourceVersion: ings.ResourceVersion, LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	// Test cluster-wide list and watch
	clusterIngClient := f.ClientSet.NetworkingV1().Ingresses("")
	ginkgo.By("cluster-wide listing")
	clusterIngs, err := clusterIngClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	framework.ExpectEqual(len(clusterIngs.Items), 3, "filtered list should have 3 items")
	ginkgo.By("cluster-wide watching")
	framework.Logf("starting watch")
	_, err = clusterIngClient.Watch(context.TODO(), metav1.ListOptions{ResourceVersion: ings.ResourceVersion, LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	ginkgo.By("patching")
	patchedIngress, err := ingClient.Patch(context.TODO(), createdIngress.Name, types.MergePatchType, []byte(`{"metadata":{"annotations":{"patched":"true"}}}`), metav1.PatchOptions{})
	framework.ExpectNoError(err)
	framework.ExpectEqual(patchedIngress.Annotations["patched"], "true", "patched object should have the applied annotation")
	ginkgo.By("updating")
	var ingToUpdate, updatedIngress *networkingv1.Ingress
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ingToUpdate, err = ingClient.Get(context.TODO(), createdIngress.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ingToUpdate.Annotations["updated"] = "true"
		updatedIngress, err = ingClient.Update(context.TODO(), ingToUpdate, metav1.UpdateOptions{})
		return err
	})
	framework.ExpectNoError(err)
	framework.ExpectEqual(updatedIngress.Annotations["updated"], "true", "updated object should have the applied annotation")
	framework.Logf("waiting for watch events with expected annotations")
	for sawAnnotations := false; !sawAnnotations; {
		select {
		case evt, ok := <-ingWatch.ResultChan():
			framework.ExpectEqual(ok, true, "watch channel should not close")
			framework.ExpectEqual(evt.Type, watch.Modified)
			watchedIngress, isIngress := evt.Object.(*networkingv1.Ingress)
			framework.ExpectEqual(isIngress, true, fmt.Sprintf("expected Ingress, got %T", evt.Object))
			if watchedIngress.Annotations["patched"] == "true" {
				framework.Logf("saw patched and updated annotations")
				sawAnnotations = true
				ingWatch.Stop()
			} else {
				framework.Logf("missing expected annotations, waiting: %#v", watchedIngress.Annotations)
			}
		case <-time.After(wait.ForeverTestTimeout):
			framework.Fail("timed out waiting for watch event")
		}
	}
	// /status subresource operations
	ginkgo.By("patching /status")
	lbStatus := v1.LoadBalancerStatus{
		Ingress: []v1.LoadBalancerIngress{{IP: "169.1.1.1"}},
	}
	lbStatusJSON, err := json.Marshal(lbStatus)
	framework.ExpectNoError(err)
	patchedStatus, err := ingClient.Patch(context.TODO(), createdIngress.Name, types.MergePatchType,
		[]byte(`{"metadata":{"annotations":{"patchedstatus":"true"}},"status":{"loadBalancer":`+string(lbStatusJSON)+`}}`),
		metav1.PatchOptions{}, "status")
	framework.ExpectNoError(err)
	framework.ExpectEqual(patchedStatus.Status.LoadBalancer, lbStatus, "patched object should have the applied loadBalancer status")
	framework.ExpectEqual(patchedStatus.Annotations["patchedstatus"], "true", "patched object should have the applied annotation")
	ginkgo.By("updating /status")
	var statusToUpdate, updatedStatus *networkingv1.Ingress
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		statusToUpdate, err = ingClient.Get(context.TODO(), createdIngress.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		statusToUpdate.Status.LoadBalancer = v1.LoadBalancerStatus{
			Ingress: []v1.LoadBalancerIngress{{IP: "169.1.1.2"}},
		}
		updatedStatus, err = ingClient.UpdateStatus(context.TODO(), statusToUpdate, metav1.UpdateOptions{})
		return err
	})
	framework.ExpectNoError(err)
	framework.ExpectEqual(updatedStatus.Status.LoadBalancer, statusToUpdate.Status.LoadBalancer, fmt.Sprintf("updated object expected to have updated loadbalancer status %#v, got %#v", statusToUpdate.Status.LoadBalancer, updatedStatus.Status.LoadBalancer))
	ginkgo.By("get /status")
	ingResource := schema.GroupVersionResource{Group: "networking.k8s.io", Version: ingVersion, Resource: "ingresses"}
	gottenStatus, err := f.DynamicClient.Resource(ingResource).Namespace(ns).Get(context.TODO(), createdIngress.Name, metav1.GetOptions{}, "status")
	framework.ExpectNoError(err)
	statusUID, _, err := unstructured.NestedFieldCopy(gottenStatus.Object, "metadata", "uid")
	framework.ExpectNoError(err)
	framework.ExpectEqual(string(createdIngress.UID), statusUID, fmt.Sprintf("createdIngress.UID: %v expected to match statusUID: %v ", createdIngress.UID, statusUID))
	// Ingress resource delete operations
	ginkgo.By("deleting")
	expectFinalizer := func(ing *networkingv1.Ingress, msg string) {
		framework.ExpectNotEqual(ing.DeletionTimestamp, nil, fmt.Sprintf("expected deletionTimestamp, got nil on step: %q, ingress: %+v", msg, ing))
		framework.ExpectEqual(len(ing.Finalizers) > 0, true, fmt.Sprintf("expected finalizers on ingress, got none on step: %q, ingress: %+v", msg, ing))
	}
	err = ingClient.Delete(context.TODO(), createdIngress.Name, metav1.DeleteOptions{})
	framework.ExpectNoError(err)
	ing, err := ingClient.Get(context.TODO(), createdIngress.Name, metav1.GetOptions{})
	// If ingress controller does not support finalizers, we expect a 404. Otherwise we validate finalizer behavior.
	if err == nil {
		expectFinalizer(ing, "deleting createdIngress")
	} else {
		framework.ExpectEqual(apierrors.IsNotFound(err), true, fmt.Sprintf("expected 404, got %v", err))
	}
	ings, err = ingClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	// Should have <= 3 items since some ingresses might not have been deleted yet due to finalizers
	framework.ExpectEqual(len(ings.Items) <= 3, true, "filtered list should have <= 3 items")
	// Validate finalizer on the deleted ingress
	for _, ing := range ings.Items {
		if ing.Namespace == createdIngress.Namespace && ing.Name == createdIngress.Name {
			expectFinalizer(&ing, "listing after deleting createdIngress")
		}
	}
	ginkgo.By("deleting a collection")
	err = ingClient.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	ings, err = ingClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	// Should have <= 3 items since some ingresses might not have been deleted yet due to finalizers
	framework.ExpectEqual(len(ings.Items) <= 3, true, "filtered list should have <= 3 items")
	// Validate finalizers
	for _, ing := range ings.Items {
		expectFinalizer(&ing, "deleting ingress collection")
	}
})
I've started doing that for cronjobs, and I'd love to see this implemented for all other apps endpoints as a single test too.
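
Applied to this PR's resource, such a consolidated test might be skeletonized roughly as follows. This is only a hedged sketch mirroring the Ingress example above, not code from this PR; the step contents are indicated by comments:

framework.ConformanceIt("should support ReplicaSet API operations", func() {
	ns := f.Namespace.Name
	rsClient := f.ClientSet.AppsV1().ReplicaSets(ns)

	ginkgo.By("creating")
	// create a few labeled ReplicaSets via rsClient.Create, as the Ingress test does

	ginkgo.By("getting and listing")
	rsList, err := rsClient.List(context.TODO(), metav1.ListOptions{LabelSelector: "special-label=" + f.UniqueName})
	framework.ExpectNoError(err)
	framework.Logf("found %d ReplicaSets", len(rsList.Items))

	ginkgo.By("patching")
	// rsClient.Patch(context.TODO(), name, types.StrategicMergePatchType, patchBytes, metav1.PatchOptions{})

	ginkgo.By("updating")
	// wrap rsClient.Update in retry.RetryOnConflict(retry.DefaultRetry, ...)

	ginkgo.By("patching and updating /status")
	// rsClient.Patch(..., "status") and rsClient.UpdateStatus(...)

	ginkgo.By("deleting and deleting a collection")
	// rsClient.Delete(...) and rsClient.DeleteCollection(...)
})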

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Feb 24, 2021.
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Riaankl, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Feb 24, 2021.
@Riaankl Riaankl (Contributor, Author) commented Feb 24, 2021

/test pull-kubernetes-e2e-gce-ubuntu-containerd

Unrelated flake

• Failure [123.131 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:635
  should run through the lifecycle of Pods and PodStatus [Conformance] [It]
  test/e2e/framework/framework.go:640
  Feb 24 21:57:13.098: failed to see Pod pod-test in namespace pods-1725 running
      
  Unexpected error:
      <*errors.errorString | 0xc0001c4240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

@Riaankl Riaankl (Contributor, Author) commented Feb 24, 2021

/test pull-kubernetes-e2e-kind
Unrelated flake

• Failure [123.131 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:635
  should run through the lifecycle of Pods and PodStatus [Conformance] [It]
  test/e2e/framework/framework.go:640
  Feb 24 21:57:13.098: failed to see Pod pod-test in namespace pods-1725 running
      
  Unexpected error:
      <*errors.errorString | 0xc0001c4240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

@Riaankl Riaankl (Contributor, Author) commented Feb 24, 2021

/test pull-kubernetes-e2e-kind

Unrelated flake

• Failure [123.131 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:635
  should run through the lifecycle of Pods and PodStatus [Conformance] [It]
  test/e2e/framework/framework.go:640
  Feb 24 21:57:13.098: failed to see Pod pod-test in namespace pods-1725 running
      
  Unexpected error:
      <*errors.errorString | 0xc0001c4240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

@fejta-bot:

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

1 similar comment
@fejta-bot:

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@Riaankl Riaankl (Contributor, Author) commented Feb 25, 2021

/test pull-kubernetes-node-e2e

Unrelated flake

Detecting project
I0225 06:57:45.691] Project: k8s-infra-e2e-boskos-090
I0225 06:57:45.691] Network Project: k8s-infra-e2e-boskos-090
I0225 06:57:45.691] Zone: us-west1-b
I0225 06:57:45.691] Dumping logs from master locally to '/workspace/_artifacts'
W0225 06:57:46.326] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0225 06:57:46.326]  - The resource 'projects/k8s-infra-e2e-boskos-090/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found
W0225 06:57:46.326] 
W0225 06:57:46.433] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0225 06:57:46.533] Master not detected. Is the cluster up?
I0225 06:57:46.533] Dumping logs from nodes locally to '/workspace/_artifacts'

@Riaankl Riaankl (Contributor, Author) commented Feb 25, 2021

/test pull-kubernetes-node-e2e

Unrelated flake

I0225 08:16:35.853] Detecting project
I0225 08:16:35.853] Project: k8s-infra-e2e-boskos-035
I0225 08:16:35.853] Network Project: k8s-infra-e2e-boskos-035
I0225 08:16:35.854] Zone: us-west1-b
I0225 08:16:35.854] Dumping logs from master locally to '/workspace/_artifacts'
W0225 08:16:36.629] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0225 08:16:36.629]  - The resource 'projects/k8s-infra-e2e-boskos-035/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found

E0225 08:16:41.007] Command failed
I0225 08:16:41.007] process 552 exited with code 1 after 1.2m
E0225 08:16:41.007] FAIL: pull-kubernetes-node-e2e
I0225 08:16:41.008] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0225 08:16:41.567] Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
I0225 08:16:41.657] process 5730 exited with code 0 after 0.0m

@Riaankl Riaankl (Contributor, Author) commented Feb 25, 2021

Unrelated flake. Will give it some time and retest.

I0225 08:42:58.644] Detecting project
I0225 08:42:58.644] Project: k8s-infra-e2e-boskos-085
I0225 08:42:58.645] Network Project: k8s-infra-e2e-boskos-085
I0225 08:42:58.645] Zone: us-west1-b
I0225 08:42:58.645] Dumping logs from master locally to '/workspace/_artifacts'
W0225 08:42:59.341] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0225 08:42:59.342]  - The resource 'projects/k8s-infra-e2e-boskos-085/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found
W0225 08:42:59.342] 
W0225 08:42:59.483] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0225 08:42:59.584] Master not detected. Is the cluster up?
I0225 08:42:59.584] Dumping logs from nodes locally to '/workspace/_artifacts'

@fejta-bot:

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@Riaankl Riaankl (Contributor, Author) commented Feb 25, 2021

Unrelated flake

W0225 11:10:10.010] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0225 11:10:10.011]  - The resource 'projects/k8s-infra-e2e-boskos-009/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found
W0225 11:10:10.011] 
W0225 11:10:10.211] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0225 11:10:10.312] Master not detected. Is the cluster up?
I0225 11:10:10.312] Dumping logs from nodes locally to '/workspace/_artifacts'

@Riaankl Riaankl (Contributor, Author) commented Feb 25, 2021

/test pull-kubernetes-node-e2e

@fejta-bot:

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@Riaankl Riaankl (Contributor, Author) commented Feb 25, 2021

Unrelated flakes

  • Kubernetes e2e suite: [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] (1m1s)
  • Kubernetes e2e suite: [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged] (4m46s)
  • Kubernetes e2e suite: [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2

@Riaankl Riaankl (Contributor, Author) commented Feb 25, 2021

/test pull-kubernetes-e2e-kind

@k8s-ci-robot k8s-ci-robot merged commit 8d42920 into kubernetes:master Feb 25, 2021
conformance-definition automation moved this from PRs Needing Review to Done Feb 25, 2021
@k8s-ci-robot k8s-ci-robot added this to the v1.21 milestone Feb 25, 2021
@Riaankl Riaankl moved this from Done to Promotion PRs Needing Two Weeks (flake free) in conformance-definition Feb 25, 2021
@Riaankl Riaankl moved this from Promotion PRs Needing Two Weeks (flake free) to Done in conformance-definition Mar 16, 2021
Labels
  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • area/conformance: Issues or PRs related to kubernetes conformance tests.
  • area/test
  • cncf-cla: yes: Indicates the PR's author has signed the CNCF CLA.
  • kind/cleanup: Categorizes issue or PR as related to cleaning up code, process, or technical debt.
  • lgtm: "Looks good to me", indicates that a PR is ready to be merged.
  • needs-priority: Indicates a PR lacks a `priority/foo` label and requires one.
  • release-note-none: Denotes a PR that doesn't merit a release note.
  • sig/apps: Categorizes an issue or PR as relevant to SIG Apps.
  • sig/architecture: Categorizes an issue or PR as relevant to SIG Architecture.
  • sig/testing: Categorizes an issue or PR as relevant to SIG Testing.
  • size/M: Denotes a PR that changes 30-99 lines, ignoring generated files.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.
Projects
Development

Successfully merging this pull request may close these issues.

Write ReplicaSet Replace and Patch Test +2 Endpoints
6 participants