
Write AppsV1Deployment resource lifecycle test - +6 endpoint coverage #90916

Closed
7 tasks done

riaankleinhans opened this issue May 9, 2020 · 13 comments · Fixed by #92589, #93458 or #96487
Assignees
Labels
area/conformance Issues or PRs related to kubernetes conformance tests
sig/architecture Categorizes an issue or PR as relevant to SIG Architecture.
sig/testing Categorizes an issue or PR as relevant to SIG Testing.
triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@riaankleinhans
Contributor

riaankleinhans commented May 9, 2020

Progress [7/7]

This issue was created to allow edits by @Riaankl

/wip
/hold

Identifying an untested feature using APISnoop

According to this APISnoop query, several Deployment endpoints remain untested.

SELECT
  operation_id,
  -- k8s_action,
  -- path,
  -- description,
  kind
FROM untested_stable_endpoints
WHERE kind LIKE 'Deployment'
-- WHERE operation_id ILIKE '%Deployment%'
ORDER BY kind, operation_id DESC
LIMIT 25;
                operation_id                |    kind    
--------------------------------------------+------------
 replaceAppsV1NamespacedDeploymentStatus    | Deployment
 readAppsV1NamespacedDeploymentStatus       | Deployment
 patchAppsV1NamespacedDeploymentStatus      | Deployment
 patchAppsV1NamespacedDeployment            | Deployment
 listAppsV1DeploymentForAllNamespaces       | Deployment
 deleteAppsV1CollectionNamespacedDeployment | Deployment
(6 rows)

API Reference and feature documentation

The mock test

Test outline

  1. Create a Deployment with a static label
  2. Patch the Deployment with a new Label and updated data
  3. Get the Deployment to ensure it's patched
  4. List all Deployments in all Namespaces, find Deployment (1), and ensure that it is found and has been patched
  5. Delete Namespaced Deployment(1) via a Collection with a LabelSelector

Test the functionality in Go

package main

import (
  "encoding/json"
  "flag"
  "fmt"
  "os"

  appsv1 "k8s.io/api/apps/v1"
  v1 "k8s.io/api/core/v1"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  unstructuredv1 "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
  "k8s.io/apimachinery/pkg/runtime"
  "k8s.io/apimachinery/pkg/runtime/schema"
  "k8s.io/apimachinery/pkg/types"
  watch "k8s.io/apimachinery/pkg/watch"
  "k8s.io/client-go/dynamic"
  "k8s.io/client-go/kubernetes"
  "k8s.io/client-go/tools/clientcmd"
)

func main() {
  // uses the current context in kubeconfig
  kubeconfig := flag.String("kubeconfig", fmt.Sprintf("%v/%v/%v", os.Getenv("HOME"), ".kube", "config"), "(optional) absolute path to the kubeconfig file")
  flag.Parse()
  config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
  if err != nil {
      fmt.Println(err)
      return
  }
  // make our work easier to find in the audit_event queries
  config.UserAgent = "live-test-writing"
  // create the typed and dynamic clientsets
  ClientSet, err := kubernetes.NewForConfig(config)
  if err != nil {
      fmt.Println(err)
      return
  }
  DynamicClientSet, err := dynamic.NewForConfig(config)
  if err != nil {
      fmt.Println(err)
      return
  }
  deploymentResource := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

  testDeploymentName := "test-deployment"
  testDeploymentInitialImage := "nginx"
  testDeploymentPatchImage := "alpine"
  testDeploymentUpdateImage := "httpd"
  testDeploymentDefaultReplicas := int32(3)
  testDeploymentMinimumReplicas := int32(1)
  testDeploymentNoReplicas := int32(0)
  testDeploymentLabelSelectors := metav1.LabelSelector{
      MatchLabels: map[string]string{"app": "test-deployment"},
  }
  testNamespaceName := "default"

  fmt.Println("creating a Deployment")
  testDeployment := appsv1.Deployment{
      ObjectMeta: metav1.ObjectMeta{
          Name: testDeploymentName,
          Labels: map[string]string{"test-deployment-static": "true"},
      },
      Spec: appsv1.DeploymentSpec{
          Replicas: &testDeploymentDefaultReplicas,
          Selector: &testDeploymentLabelSelectors,
          Template: v1.PodTemplateSpec{
              ObjectMeta: metav1.ObjectMeta{
                  Labels: testDeploymentLabelSelectors.MatchLabels,
              },
              Spec: v1.PodSpec{
                  Containers: []v1.Container{{
                      Name: testDeploymentName,
                      Image: testDeploymentInitialImage,
                  }},
              },
          },
      },
  }
  _, err = ClientSet.AppsV1().Deployments(testNamespaceName).Create(&testDeployment)
  if err != nil {
      fmt.Println(err)
      return
  }

  fmt.Println("watching for the Deployment to be added")
  dplmtWatchTimeoutSeconds := int64(180)
  dplmtWatch, err := ClientSet.AppsV1().Deployments(testNamespaceName).Watch(metav1.ListOptions{LabelSelector: "test-deployment-static=true", TimeoutSeconds: &dplmtWatchTimeoutSeconds})
  if err != nil {
      fmt.Println(err, "Failed to setup watch on newly created Deployment")
      return
  }

  dplmtWatchChan := dplmtWatch.ResultChan()
  for event := range dplmtWatchChan {
      if event.Type == watch.Added {
          break
      }
  }
  defer func() {
    fmt.Println("deleting the Deployment")
    err = ClientSet.AppsV1().Deployments(testNamespaceName).DeleteCollection(&metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "test-deployment-static=true"})
    if err != nil {
      fmt.Println(err)
      return
    }
    for event := range dplmtWatchChan {
      deployment, ok := event.Object.(*appsv1.Deployment)
      if !ok {
        fmt.Println("unable to convert event.Object type")
        return
      }
      if event.Type == watch.Deleted && deployment.ObjectMeta.Name == testDeploymentName {
        break
      }
    }
  }()
  fmt.Println("waiting for all Replicas to be Ready")
  for event := range dplmtWatchChan {
      deployment, ok := event.Object.(*appsv1.Deployment)
      if !ok {
          fmt.Println("unable to convert event.Object type")
          return
      }
      if deployment.Status.AvailableReplicas == testDeploymentDefaultReplicas &&
         deployment.Status.ReadyReplicas == testDeploymentDefaultReplicas {
          break
      }
  }

  fmt.Println("patching the Deployment")
  deploymentPatch, err := json.Marshal(map[string]interface{}{
      "metadata": map[string]interface{}{
          "labels": map[string]string{"test-deployment": "patched"},
      },
      "spec": map[string]interface{}{
          "replicas": testDeploymentMinimumReplicas,
          "template": map[string]interface{}{
              "spec": map[string]interface{}{
                  "containers": []map[string]interface{}{{
                      "name": testDeploymentName,
                      "image": testDeploymentPatchImage,
                      "command": []string{"/bin/sleep", "100000"},
                  }},
              },
          },
      },
  })
  if err != nil {
      fmt.Println(err, "failed to Marshal Deployment JSON patch")
      return
  }
  _, err = ClientSet.AppsV1().Deployments(testNamespaceName).Patch(testDeploymentName, types.StrategicMergePatchType, []byte(deploymentPatch))
  if err != nil {
       fmt.Println(err, "failed to patch Deployment")
       return
  }

  for event := range dplmtWatchChan {
      if event.Type == watch.Modified {
          break
      }
  }
  fmt.Println("waiting for Replicas to scale")
  for event := range dplmtWatchChan {
      deployment, ok := event.Object.(*appsv1.Deployment)
      if !ok {
          fmt.Println("unable to convert event.Object type")
          return
      }
      if deployment.Status.AvailableReplicas == testDeploymentMinimumReplicas &&
         deployment.Status.ReadyReplicas == testDeploymentMinimumReplicas {
          break
      }
  }


  fmt.Println("listing Deployments")
  deploymentsList, err := ClientSet.AppsV1().Deployments("").List(metav1.ListOptions{LabelSelector: "test-deployment-static=true"})
  if err != nil {
      fmt.Println(err, "failed to list Deployments")
      return
  }
  foundDeployment := false
  for _, deploymentItem := range deploymentsList.Items {
      if deploymentItem.ObjectMeta.Name == testDeploymentName &&
         deploymentItem.ObjectMeta.Namespace == testNamespaceName &&
         deploymentItem.ObjectMeta.Labels["test-deployment-static"] == "true" &&
         *deploymentItem.Spec.Replicas == testDeploymentMinimumReplicas &&
         deploymentItem.Spec.Template.Spec.Containers[0].Image == testDeploymentPatchImage {
          foundDeployment = true
          break
      }
  }
  if !foundDeployment {
      fmt.Println("unable to find the Deployment in list")
      return
  }

  fmt.Println("updating the DeploymentStatus")
  testDeploymentUpdate := testDeployment
  testDeploymentUpdate.ObjectMeta.Labels["test-deployment"] = "updated"
  testDeploymentUpdate.Spec.Template.Spec.Containers[0].Image = testDeploymentUpdateImage
  testDeploymentDefaultReplicasPointer := &testDeploymentDefaultReplicas
  testDeploymentUpdate.Spec.Replicas = testDeploymentDefaultReplicasPointer
  testDeploymentUpdate.Status.ReadyReplicas = testDeploymentNoReplicas
  testDeploymentUpdateUnstructuredMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&testDeploymentUpdate)
  if err != nil {
      fmt.Println(err, "failed to convert to unstructured")
      return
  }
  testDeploymentUpdateUnstructured := unstructuredv1.Unstructured{
      Object: testDeploymentUpdateUnstructuredMap,
  }
  // currently this hasn't been able to hit the endpoint replaceAppsV1NamespacedDeploymentStatus
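  // Note (assumption, not part of the original): the dynamic client's Update accepts
  // trailing subresource arguments, so restoring the commented-out "status" argument
  // below should route the PUT to .../deployments/{name}/status
  // (replaceAppsV1NamespacedDeploymentStatus); the typed equivalent is
  // ClientSet.AppsV1().Deployments(...).UpdateStatus(...).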
  _, err = DynamicClientSet.Resource(deploymentResource).Namespace(testNamespaceName).Update(&testDeploymentUpdateUnstructured, metav1.UpdateOptions{}) //, "status")
  if err != nil {
      fmt.Println(err, "failed to update the DeploymentStatus")
      return
  }
  for event := range dplmtWatchChan {
      if event.Type == watch.Modified {
          break
      }
  }

  fmt.Println("fetching the DeploymentStatus")
  deploymentGetUnstructured, err := DynamicClientSet.Resource(deploymentResource).Namespace(testNamespaceName).Get(testDeploymentName, metav1.GetOptions{}, "status")
  if err != nil {
      fmt.Println(err, "failed to fetch the Deployment")
      return
  }
  deploymentGet := appsv1.Deployment{}
  err = runtime.DefaultUnstructuredConverter.FromUnstructured(deploymentGetUnstructured.Object, &deploymentGet)
  if err != nil {
      fmt.Println(err, "failed to convert the unstructured response to a Deployment")
      return
  }
  if !(deploymentGet.Spec.Template.Spec.Containers[0].Image == testDeploymentUpdateImage ||
      deploymentGet.Status.ReadyReplicas == testDeploymentNoReplicas ||
      deploymentGet.ObjectMeta.Labels["test-deployment"] == "updated") {
      fmt.Println("failed to update the Deployment (did not return correct values)")
      return
  }
  for event := range dplmtWatchChan {
      if event.Type == watch.Modified {
          break
      }
  }
  for event := range dplmtWatchChan {
      deployment, ok := event.Object.(*appsv1.Deployment)
      if !ok {
          fmt.Println("failed to convert event Object to a Deployment")
          return
      }
      if deployment.Status.ReadyReplicas == testDeploymentDefaultReplicas {
          break
      }
  }

  fmt.Println("patching the DeploymentStatus")
  deploymentStatusPatch, err := json.Marshal(map[string]interface{}{
      "metadata": map[string]interface{}{
          "labels": map[string]string{"test-deployment": "patched-status"},
      },
      "status": map[string]interface{}{
          "readyReplicas": testDeploymentNoReplicas,
      },
  })
  if err != nil {
      fmt.Println(err, "failed to Marshal Deployment JSON patch")
      return
  }
  _, err = DynamicClientSet.Resource(deploymentResource).Namespace(testNamespaceName).Patch(testDeploymentName, types.StrategicMergePatchType, deploymentStatusPatch, metav1.PatchOptions{}, "status")
  if err != nil {
      fmt.Println(err, "failed to patch the DeploymentStatus")
      return
  }

  fmt.Println("fetching the DeploymentStatus")
  deploymentGetUnstructured, err = DynamicClientSet.Resource(deploymentResource).Namespace(testNamespaceName).Get(testDeploymentName, metav1.GetOptions{}, "status")
  if err != nil {
      fmt.Println(err, "failed to fetch the DeploymentStatus")
      return
  }
  deploymentGet = appsv1.Deployment{}
  err = runtime.DefaultUnstructuredConverter.FromUnstructured(deploymentGetUnstructured.Object, &deploymentGet)
  if err != nil {
      fmt.Println(err, "failed to convert the unstructured response to a Deployment")
      return
  }
  if !(deploymentGet.Spec.Template.Spec.Containers[0].Image == testDeploymentUpdateImage ||
      deploymentGet.Status.ReadyReplicas == 0 ||
      deploymentGet.ObjectMeta.Labels["test-deployment"] == "patched-status") {
      fmt.Println("failed to update the Deployment (did not return correct values)")
      return
  }
  for event := range dplmtWatchChan {
      if event.Type == watch.Modified {
          break
      }
  }
  for event := range dplmtWatchChan {
      deployment, ok := event.Object.(*appsv1.Deployment)
      if !ok {
          fmt.Println("failed to convert event Object to a Deployment")
          return
      }
      if deployment.Status.ReadyReplicas == testDeploymentDefaultReplicas {
          break
      }
  }

  fmt.Println("[status] complete")
}
creating a Deployment
watching for the Deployment to be added
waiting for all Replicas to be Ready
patching the Deployment
waiting for Replicas to scale
listing Deployments
updating the DeploymentStatus
fetching the DeploymentStatus
patching the DeploymentStatus
fetching the DeploymentStatus
[status] complete
deleting the Deployment

Verifying the increase in coverage with APISnoop

Discover useragents:

select distinct useragent
from audit_event
where bucket = 'apisnoop'
  and useragent not like 'kube%'
  and useragent not like 'coredns%'
  and useragent not like 'kindnetd%'
  and useragent like 'live%';
     useragent     
-------------------
 live-test-writing
(1 row)

List endpoints hit by the test:

select * from endpoints_hit_by_new_test where useragent like 'live%'; 
     useragent     |                operation_id                | hit_by_ete | hit_by_new_test 
-------------------+--------------------------------------------+------------+-----------------
 live-test-writing | createAppsV1NamespacedDeployment           | t          |               1
 live-test-writing | deleteAppsV1CollectionNamespacedDeployment | f          |               1
 live-test-writing | listAppsV1DeploymentForAllNamespaces       | f          |               1
 live-test-writing | listAppsV1NamespacedDeployment             | t          |               1
 live-test-writing | patchAppsV1NamespacedDeployment            | f          |               1
 live-test-writing | patchAppsV1NamespacedDeploymentStatus      | f          |               1
 live-test-writing | readAppsV1NamespacedDeploymentStatus       | f          |               2
 live-test-writing | replaceAppsV1NamespacedDeployment          | t          |               1
(8 rows)

Display endpoint coverage change:

select * from projected_change_in_coverage;
   category    | total_endpoints | old_coverage | new_coverage | change_in_number 
---------------+-----------------+--------------+--------------+------------------
 test_coverage |             445 |          181 |          186 |                5
(1 row)

Final notes

If a test with these calls is merged, test coverage will go up by 5 endpoints.

This test is also created with the goal of conformance promotion.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 9, 2020
@riaankleinhans
Contributor Author

/sig testing
/sig architecture
/area conformance

@k8s-ci-robot k8s-ci-robot added sig/testing Categorizes an issue or PR as relevant to SIG Testing. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. area/conformance Issues or PRs related to kubernetes conformance tests and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 9, 2020
@riaankleinhans
Contributor Author

BobyMCbobs commented in the original issue: Unsure whether the endpoint replaceAppsV1NamespacedDeploymentStatus can be hit. When an update is attempted against this endpoint, the resource retrieved afterwards does not reflect the values from the update.
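
One way this endpoint might be reached is the typed client's UpdateStatus, which PUTs to the /status subresource instead of the main resource URL. A minimal sketch, assuming the same pre-context (2020-era) client-go signatures and variable names as the test above:

fmt.Println("updating the DeploymentStatus via the typed client")
// Fetch first so the update carries a current resourceVersion.
deployment, err := ClientSet.AppsV1().Deployments(testNamespaceName).Get(testDeploymentName, metav1.GetOptions{})
if err != nil {
    fmt.Println(err, "failed to fetch the Deployment before updating its status")
    return
}
// Mutate only status fields; spec changes are ignored by the /status subresource.
deployment.Status.ReadyReplicas = 0
// UpdateStatus PUTs to .../deployments/{name}/status,
// i.e. replaceAppsV1NamespacedDeploymentStatus.
_, err = ClientSet.AppsV1().Deployments(testNamespaceName).UpdateStatus(deployment)
if err != nil {
    fmt.Println(err, "failed to update the DeploymentStatus")
    return
}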

@riaankleinhans
Contributor Author

/assign @BobyMCbobs

@BobyMCbobs
Member

When using the appsv1.Deployment struct instead of maps of interfaces to build the Deployment patch, the following error comes up. Still in progress:

Deployment.apps "test-deployment" is invalid: [spec.selector: Required value, spec.template.metadata.labels: Invalid value: map[string]string{"test-deployment-static":"true"}: selector does not match template labels, spec.selector: Invalid value: "null": field is immutable]
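
For reference, this failure mode is consistent with how a whole appsv1.Deployment marshals: in apps/v1 the spec.selector field carries no omitempty tag, so a zero-valued struct serializes "selector": null into the patch body, which the API server rejects as a change to an immutable field. A hedged sketch of the difference, reusing names from the test above:

// Marshalling a whole Deployment struct emits zero-valued required fields,
// including "spec":{"selector":null, ...}, which trips the immutability check.
badPatch, _ := json.Marshal(appsv1.Deployment{
    Spec: appsv1.DeploymentSpec{
        Template: v1.PodTemplateSpec{
            Spec: v1.PodSpec{
                Containers: []v1.Container{{Name: testDeploymentName, Image: testDeploymentPatchImage}},
            },
        },
    },
})
_ = badPatch
// Marshalling only the fields being changed, as the maps-of-interfaces patch in
// the test above does, keeps untouched immutable fields out of the patch body.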

@riaankleinhans riaankleinhans changed the title Write AppsV1Deployment resource lifecycle test+promote - +5 endpoint coverage Write AppsV1Deployment resource lifecycle test - +5 endpoint coverage Jun 28, 2020
@riaankleinhans riaankleinhans moved this from Sorted Backlog / On hold Issues to In Progress /Active Issues in conformance-definition Jun 29, 2020
@riaankleinhans riaankleinhans changed the title Write AppsV1Deployment resource lifecycle test - +5 endpoint coverage Write AppsV1Deployment resource lifecycle test - +6 endpoint coverage Jun 29, 2020
@riaankleinhans
Contributor Author

replaceAppsV1NamespacedDeploymentStatus added to the test, increasing coverage by 1 endpoint.

conformance-definition automation moved this from In Progress /Active Issues to Done Jul 24, 2020
@riaankleinhans
Contributor Author

/reopen
PR #92589 was reverted by #93405 due to a flaky test

@k8s-ci-robot
Contributor

@Riaankl: Reopened this issue.

In response to this:

/reopen
PR #92589 was reverted by #93405 due to a flaky test

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Jul 27, 2020
conformance-definition automation moved this from Done to Issues To Triage Jul 27, 2020
@riaankleinhans riaankleinhans moved this from Issues To Triage to Issue with PR's soaking in conformance-definition Jul 27, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 25, 2020
conformance-definition automation moved this from Issue with PR's soaking to Done Nov 4, 2020
@riaankleinhans
Contributor Author

/reopen
Soaking, ready for promotion 19 November 2020

@k8s-ci-robot k8s-ci-robot reopened this Nov 4, 2020
@k8s-ci-robot
Contributor

@Riaankl: Reopened this issue.

In response to this:

/reopen
Soaking, ready for promotion 19 November 2020

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

conformance-definition automation moved this from Done to Issues To Triage Nov 4, 2020
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Nov 4, 2020
@riaankleinhans riaankleinhans moved this from Issues To Triage to Issue with PR's soaking in conformance-definition Nov 4, 2020
@riaankleinhans
Contributor Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 4, 2020
@riaankleinhans
Contributor Author

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 4, 2020
@riaankleinhans
Contributor Author

/close
#96487 merged

conformance-definition automation moved this from Issue with PR's soaking to Done Nov 19, 2020