
Dev: Writing and running tests

Armel Soro edited this page Jun 14, 2022 · 2 revisions


Setting up test environment

Requires Go 1.17 and Ginkgo 1.16.

Tests are run with the versions above. Developers are advised to use the same versions where possible, although matching the Go version exactly is not mandatory.

We use unit, integration and e2e (end-to-end) tests. Run the make goget-tools target to set up the integration test environment. Unit tests do not require any setup.

Test variables:

The following environment variables give more control over a test run and its results:

  • TEST_EXEC_NODES: Controls whether ginkgo specs run in parallel or sequentially. To run the specs sequentially, use TEST_EXEC_NODES=1; by default, the specs run in parallel on 4 ginkgo test nodes. Any value greater than one runs the specs in parallel on that number of ginkgo test nodes.

  • SLOW_SPEC_THRESHOLD: Used by ginkgo; after this time (in seconds), ginkgo marks a test as slow. The default value is 120s.

  • GINKGO_TEST_ARGS: Passes additional flags to ginkgo for each test target run. For example, to enable verbose output, export or set GINKGO_TEST_ARGS=-v.

  • UNIT_TEST_ARGS: Passes additional flags to go test for the unit tests. For example, to enable verbose output, export or set UNIT_TEST_ARGS=-v.
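These variables can be combined on a single make invocation. A minimal sketch (the flag values are illustrative):

```shell
# Run the integration suite sequentially (1 ginkgo node) with verbose ginkgo output.
TEST_EXEC_NODES=1 GINKGO_TEST_ARGS=-v make test-integration
```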

Setting up test environment for integration and e2e tests

  • OpenShift: To run the tests on a 4.x cluster, run make configure-installer-tests-cluster, which performs the login operation required to run the tests. By default, the tests run against the odo binary placed in $PATH, which is created by running make.

    Make sure that the odo and oc binaries are in $PATH. Use the cloned odo directory to launch tests on 4.x clusters.

  • Kubernetes: To run the tests on Kubernetes cluster, set the KUBERNETES environment variable:

     export KUBERNETES=true

    Use kubectl to communicate with the Kubernetes cluster.
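Before launching the tests, it can help to confirm that kubectl is pointed at the intended cluster (illustrative sanity checks, not required by the Makefile):

```shell
# show which context (and therefore which cluster) the tests will talk to
kubectl config current-context

# confirm the cluster is reachable
kubectl cluster-info
```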

Similarly, a 4.x cluster needs to be configured before launching the tests against it. The files kubeadmin-password and kubeconfig, which contain the cluster login details, should be present in the auth directory, which should reside in the same directory as the Makefile. If the auth directory is not present, create it, then run make configure-installer-tests-cluster to configure the 4.x cluster.

For ppc64le arch, run make configure-installer-tests-cluster-ppc64le to configure the test environment.

For s390x arch, run make configure-installer-tests-cluster-s390x to configure the test environment.

Guidelines for writing tests

See the Testing section in the Coding Conventions page: https://github.com/redhat-developer/odo/wiki/Dev:-Coding-Conventions#testing

Unit tests

Unit tests for odo functions are written using the fake package. This allows us to create a fake client and mock the API calls defined in OpenShift client-go and k8s client-go.

The tests are written in Go using the pkg/testing package. Run make test to run the unit tests.
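As with the other targets, UNIT_TEST_ARGS can be used to tweak the underlying go test run. For example (the flag is illustrative):

```shell
# run unit tests with verbose go test output
UNIT_TEST_ARGS=-v make test
```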

Integration tests

Integration tests use Ginkgo and its preferred matcher library Gomega, which define sets of test cases (specs). In ginkgo terms, a test file comprises specs, and these test files are controlled by a test suite.

Test and test suite files are located in the tests/integration directory and can be run using make test-integration.

Integration tests validate and focus on specific areas of odo functionality or individual commands, for example cmd_app_test.go or generic_test.go.

By default, the integration tests for the devfile feature run against a Kubernetes cluster.

Running integration tests

Integration tests can be run in two ways: parallel and sequential. By default, the tests run in parallel on 4 ginkgo test nodes.

  • Parallel run: To run the component command integration tests in parallel on a test cluster:

    make test-cmp-e2e

    To control the parallel run, use the environment variable TEST_EXEC_NODES.

  • Sequential run: To run the component command integration tests sequentially, or on a single ginkgo test node:

    TEST_EXEC_NODES=1 make test-cmd-cmp

    make test-cmd-login-logout does not honour the TEST_EXEC_NODES environment variable. By default, the login and logout command integration test suites run sequentially on a single ginkgo test node to avoid race conditions during a parallel run.

To see the available integration test targets, press tab just after typing make test-cmd-. There is also a test file, generic_test.go, which holds generic test specs and can be run in parallel by calling make test-generic. Calling make test-integration runs the whole suite, with all specs in parallel on 4 ginkgo test nodes, except service and link.

To run ONE individual test, you can either:

  • Supply the name via command-line:
    ginkgo -focus="When executing catalog list without component directory" tests/integration/
  • Modify the It statement to FIt and run:
    ginkgo tests/integration/

If you are running operatorhub tests, you need certain operators installed on the cluster; they can be installed by running setup-operator.sh.

E2e tests

E2e (end-to-end) tests use the same libraries as the integration tests. E2e test and test suite files are located in the tests/e2escenarios directory and can be run via the .PHONY targets in the Makefile. An end-to-end (e2e) test covers a user-specific scenario, combining several features/commands in a single test file.

Running E2e tests:

End-to-end (e2e) test runs behave in a similar way to integration test runs. To see the available e2e test targets, press tab just after typing make test-e2e-. To run all e2e test specs at the suite level, use make test-e2e-all.
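The environment variables described earlier apply to the e2e targets as well. For example (values are illustrative):

```shell
# run the whole e2e suite sequentially with verbose ginkgo output
TEST_EXEC_NODES=1 GINKGO_TEST_ARGS=-v make test-e2e-all
```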

Writing Tests

Refer to the odo clean test template.

Test guidelines:

Please follow these guidelines when contributing to odo tests. For a better understanding of writing tests, refer to Ginkgo and its preferred matcher library Gomega.

  • Before writing a test (integration/e2e), make sure the test scenario is identified properly.

    For example: for the storage feature, the storage command is tested thoroughly in integration tests, including positive, negative and corner cases, whereas an e2e scenario exercises only one or two storage commands as part of a larger flow, for example: create component -> link -> add storage -> certain operation -> delete storage -> unlink -> delete component.

  • Create a new test file for a new feature, and make sure the file name clearly identifies the feature. If a test file for the feature already exists, add the new scenario to that file.

    For example: for the storage feature, a new storage test file is created; if new functionality is added to the storage feature, the same file is updated with the new scenario. Test file names should follow a common format, cmd_<feature name>_test, so the storage feature test file is named cmd_storage_test.go. The same convention can be used for e2e tests: e2e_<release name>_test or e2e_<full scenario name>_test.

  • The test description should convey what the spec implements. Use a proper test description in the Describe block.

    For example: For storage feature, the appropriate test description would be odo storage command tests.

    var _ = Describe("odo storage command tests", func() {
        [...]
    })
  • For a better understanding of what a spec does, use proper descriptions in the Context and It blocks.

    For example:

    Context("when running help for storage command", func() {
      It("should display the help", func() {
        [...]
      })
    })
  • Do not create a new test spec for steps that can be covered by existing specs.

  • Spec-level conditions and pre/post requirements should be run in ginkgo's built-in setup and teardown steps JustBeforeEach and JustAfterEach.

  • Because of parallel test run support, make sure each spec runs in isolation; otherwise the test results will be subject to race conditions. To achieve this, ginkgo provides built-in functions such as BeforeEach and AfterEach.

    For example:

    var _ = Describe("odo generic", func() {
      var project string
      var context string
      var oc helper.OcRunner
        BeforeEach(func() {
          oc = helper.NewOcRunner("oc")
          SetDefaultEventuallyTimeout(10 * time.Minute)
          context = helper.CreateNewContext()
        })
        AfterEach(func() {
          os.RemoveAll(context)
        })
        Context("deploying a component with a specific image name", func() {
            JustBeforeEach(func() {
                os.Setenv("GLOBALODOCONFIG", filepath.Join(context, "config.yaml"))
                project = helper.CreateRandProject()
            })
      
            JustAfterEach(func() {
                helper.DeleteProject(project)
                os.Unsetenv("GLOBALODOCONFIG")
            })
            It("should deploy the component", func() {
                helper.CopyExample(filepath.Join("source", "nodejs"), context)
                helper.Cmd("odo", "create", "nodejs:latest", "testversioncmp", "--project", project, "--context", context).ShouldPass()
                helper.Cmd("odo", "push", "--context", context).ShouldPass()
                helper.Cmd("odo", "delete", "-f", "--context", context).ShouldPass()
            })
        })
    })
  • Don't create a new test file for issues (bugs); instead, try to add a scenario for each bug fix, if applicable.

  • Don't validate unnecessary text in the Expect of a command's output. Validating the key text specific to that scenario is enough.

    For example: while running multiple pushes on the same component without changing any source file:

    helper.Cmd("odo", "push", "--show-log", "--context", context+"/nodejs-ex")
    output := helper.Cmd("odo", "push", "--show-log", "--context", context+"/nodejs-ex").ShouldPass().Out()
    Expect(output).To(ContainSubstring("No file changes detected, skipping build"))
  • If the oc, odo or generic library function you are looking for is not present in the helper package, create a new library function as the scenario requires. Avoid unnecessary function implementations within test files; check first whether a helper function is already implemented.

  • If you need to wait on a specific feature under test, don't use a hard time.Sleep() on its own; use it only as a polling interval with a bounded maximum duration. Check the helper package for such references.

    For example:

    func RetryInterval(maxRetry, intervalSeconds int, program string, args ...string) string {
      for i := 0; i < maxRetry; i++ {
        session := CmdRunner(program, args...)
        session.Wait()
        if session.ExitCode() == 0 {
          time.Sleep(time.Duration(intervalSeconds) * time.Second)
        } else {
          Consistently(session).ShouldNot(gexec.Exit(0), runningCmd(session.Command))
          return string(session.Err.Contents())
        }
      }
      Fail(fmt.Sprintf("Failed after %d retries", maxRetry))
      return ""
    }

    There is also an in-built timeout feature available in Ginkgo.

  • A test spec should be able to run in parallel (the default) or sequentially, as chosen. Check the test template for reference.

  • Run the tests in a local environment before pushing PRs.


Running PR test job on PSI

PSI hosts an OpenShift cluster running behind a firewall. We use prow to create requests for PRs, and we run rabbitmq on a public cloud to provide the queue for creating jobs with the internal jenkins (behind the firewall). Prow uses ci-firewall within scripts/openshiftci-presubmit-all-tests.sh to create the request to rabbitmq. ci-firewall creates the following JSON message and passes it to the rabbitmq send queue as an env variable.

CI_MESSAGE='{"repourl": "repourl", "kind": "PR", "target": "target", "setupscript": "setupscript", "runscript": "runscript", "rcvident": "rcvident", "runscripturl": "http://url", "mainbranch": "master"}'

For every message in the send queue, a build is triggered using a jenkins robot account. Jenkins then executes the build script to start the tests on the node provided in SSHNodeFile (a JSON file containing node information; it can describe multiple nodes). ci-firewall then executes the tests and sends back the test logs using a receive queue.

Jenkins build script

rm -rf ./*
curl -kJLO https://github.com/mohammedzee1000/ci-firewall/releases/download/${CI_FIREWALL_VERSION}/ci-firewall-linux-amd64.tar.gz
tar -xzf ./ci-firewall-linux-amd64.tar.gz && rm -rf ./ci-firewall-linux-amd64.tar.gz && chmod +x ./ci-firewall
curl -kJLO  <SSHNodeFile>/jenkins-nodes.json
curl -kJLO <kube-password>
NDFILE="$(pwd)/jenkins-nodes.json"
KUBEADMIN_PASSWORD_FILE="$(pwd)/kube-password"
./ci-firewall work --sshnodesfile ${NDFILE} --env "OCP4X_API_URL=https://<url-to-ocp-cluster>" --env "OCP4X_KUBEADMIN_PASSWORD=$(cat ${KUBEADMIN_PASSWORD_FILE})" --env "CI=openshift"
rm -rf ./*

SSHNodeFile

{
    "nodes": [{
          "name": "common name of node. example -Fedora 31-",
          "user": "username to ssh into the node with",
          "address": "The address of the node, like an ip or domain name without port",
          "port": 22,
          "baseos": "linux|windows|mac",
          "arch": "arch of the system eg amd64",
          "password": "not recommended but you can provide password of target node",
          "privatekey": "Optional again but either this or password MUST be given.",
          "tags": ["optional sample tags to append to logs from the ssh node. The node name is already attached as `ssh:name`"]
  }]
}

Running integration tests on Prow

Prow is the Kubernetes/OpenShift way of managing workflows, including tests. The integration and periodic test targets for odo are run through the scripts scripts/openshiftci-presubmit-all-tests.sh and scripts/openshiftci-periodic-tests.sh respectively, available in the odo repository. Prow uses the script through the command attribute of the odo job configuration file in the openshift/release repository.

For running integration tests on a 4.x cluster, the job configuration file is as follows:

- as: integration-e2e
steps:
  cluster_profile: aws
  test:
  - as: integration-e2e-steps
    commands: scripts/openshiftci-presubmit-all-tests.sh
    credentials:
    - mount_path: /usr/local/ci-secrets/odo-rabbitmq
      name: odo-rabbitmq
      namespace: test-credentials
    env:
    - default: /usr/local/ci-secrets/odo-rabbitmq/amqpuri
      name: ODO_RABBITMQ_AMQP_URL
    from: oc-bin-image
    resources:
      requests:
        cpu: "2"
        memory: 6Gi
  workflow: ipi-aws

Similarly, for running periodic tests on a 4.x cluster, the job configuration file is as follows:

- as: integration-e2e-periodic
cron: 0 */6 * * *
steps:
  cluster_profile: aws
  test:
  - as: integration-e2e-periodic-steps
    commands: scripts/openshiftci-periodic-tests.sh
    from: oc-bin-image
    resources:
      requests:
        cpu: "2"
        memory: 6Gi
  workflow: ipi-aws

To generate the odo job files, run make jobs in openshift/release for the odo PR and periodic tests.

MISC: Odo test platform deploys

GitHub Action to deploy OCP 4.7 and Kubernetes clusters on IBM Cloud: https://github.com/feloy/odo/pull/2

Test GH Actions:

  • Unit tests / Linux: https://github.com/anandrkskd/odo/pull/1
  • Unit tests / Windows: https://github.com/anandrkskd/odo/pull/1
  • Unit tests / macOS: https://github.com/anandrkskd/odo/pull/1
  • Integration tests / Linux / OCP 4.7: https://github.com/feloy/odo/pull/1
  • Integration tests / Linux / k8s 1.20: https://github.com/feloy/odo/pull/1
  • Integration tests / Windows / OCP 4.7: https://github.com/feloy/odo/pull/1
  • Integration tests / macOS / OCP 4.7: no