
Development Guide

Setting up

  1. Fork the odo repository.

  2. Clone your fork:

    Note
    The following commands assume that you have the $GOPATH environment variable properly set. We highly recommend placing the odo code in your $GOPATH.
    $ git clone https://github.com/<YOUR_GITHUB_USERNAME>/odo.git $GOPATH/src/github.com/openshift/odo
    $ cd $GOPATH/src/github.com/openshift/odo
    $ git remote add upstream 'https://github.com/openshift/odo'

    When cloning odo, a Windows terminal such as PowerShell or CMD may throw a Filename too long error. To avoid this, set your Git configuration as follows:

    $ git config --system core.longpaths true
  3. Install tools used by the build and test system:

    $ make goget-tools

Submitting a pull request (PR)

  1. Create a branch, refer to the guidelines in the sections below, and create a PR with your changes. If your PR is still in progress, indicate this with a label or add WIP to your PR title.

    A PR must include:

    • Descriptive context that outlines what has been changed and why

    • A link to the active or open issue it fixes (if applicable)

Useful make targets

bin               (default) go build the executable in cmd/odo
install           build and install odo in your GOPATH
validate          run gofmt, go vet and other validity checks
goget-tools       download tools used to build & test
test              run all unit tests - same as go test pkg/...
test-integration  run all integration tests
test-coverage     generate test coverage report

Read the Makefile itself for more information.

Reviewing a PR

PR review process

  1. Once you submit a PR, the @openshift-ci-robot automatically requests two reviews from reviewers and suggests an approver based on OWNERS files.

  2. After a reviewer is satisfied with the changes, they add /lgtm (looks good to me) as a comment to the PR. This applies the lgtm label.

  3. The approver then reviews the PR and, if satisfied, adds /approve as a comment to the PR. This applies the approve label.

    • After the PR has lgtm and approve labels and the required tests pass, the bot automatically merges the PR.

      Note
      If you are a maintainer and have write access to the odo repository, modify your git configuration so that you do not accidentally push to upstream:
      $ git remote set-url --push upstream no_push

What to look out for when reviewing a pull request:

  • Have tests been added?

  • Does the feature or fix work locally?

  • Is the code understandable, have comments been added to the code?

  • A PR must pass all the pre-submit tests, all requested changes must be resolved, and it needs at least two approving reviews. If you apply the /lgtm label before it meets these criteria, put the PR on hold with the /hold label immediately. You can use /lgtm cancel to cancel your /lgtm and /hold cancel once you are ready to approve it. This especially applies to draft PRs.

  • Approvers can use /approve and /approve cancel to approve or hold their approval respectively.

About Prow

odo uses the Prow infrastructure for CI testing.

  • It uses OWNERS files to determine who can approve and lgtm a PR.

  • Prow has two levels of OWNERS: Approvers and Reviewers.

    • Approvers look for holistic acceptance criteria, including dependencies with other features, forward and backward compatibility, API and flag definitions, etc. In essence, the high levels of design.

    • Reviewers look for general code quality, correctness, sane software engineering, style, etc. In essence, the quality of the actual code itself.

  • Avoid merging the PR manually (unless it is an emergency and you have the required permissions). Prow’s tide component automatically merges the PR once all the conditions are met. It also ensures that pre-submit tests (tests that run before merge) validate the PR.

  • Use command-help to see the list of possible bot commands.

Test Driven Development

We follow the Test Driven Development (TDD) workflow in our development process. You can read more about it here.

Unit tests

Unit tests for odo functions are written using package fake. This allows us to create a fake client, and then mock the API calls defined under OpenShift client-go and k8s client-go.

The tests are written in golang using the pkg/testing package.

Writing unit tests

  1. Identify the APIs used by the function to be tested.

  2. Initialize the fake client along with the relevant client sets. The following example explains the initialization of fake clients and the creation of fake objects.

    The function GetImageStreams in pkg/occlient.go fetches imagestream objects through the API:

    func (c *Client) GetImageStreams(namespace string) ([]imagev1.ImageStream, error) {
        imageStreamList, err := c.imageClient.ImageStreams(namespace).List(metav1.ListOptions{})
        if err != nil {
            return nil, errors.Wrap(err, "unable to list imagestreams")
        }
        return imageStreamList.Items, nil
    }
    1. To write the tests, start by initializing the fake client using the function FakeNew(), which initializes the image clientset harnessed by the GetImageStreams function:

      client, fkclientset := FakeNew()
    2. In the GetImageStreams function, the list of imagestreams is fetched through the API. When using the fake client, this call can be emulated using a PrependReactor interface:

      fkclientset.ImageClientset.PrependReactor("list", "imagestreams", func(action ktesting.Action) (bool, runtime.Object, error) {
          return true, fakeImageStreams(tt.args.name, tt.args.namespace), nil
      })

      The PrependReactor expects the resource and verb to be passed in as arguments. Get this information by looking at the List function for the fake imagestream:

      func (c *FakeImageStreams) List(opts v1.ListOptions) (result *image_v1.ImageStreamList, err error) {
          obj, err := c.Fake.Invokes(testing.NewListAction(imagestreamsResource, imagestreamsKind, c.ns, opts), &image_v1.ImageStreamList{})
          ...
      }

      func NewListAction(resource schema.GroupVersionResource, kind schema.GroupVersionKind, namespace string, opts interface{}) ListActionImpl {
          action := ListActionImpl{}
          action.Verb = "list"
          action.Resource = resource
          action.Kind = kind
          action.Namespace = namespace
          labelSelector, fieldSelector, _ := ExtractFromListOptions(opts)
          action.ListRestrictions = ListRestrictions{labelSelector, fieldSelector}
          return action
      }

      The List function internally calls NewListAction, defined in k8s.io/client-go/testing/actions.go. From these functions, we see that the resource and verb to be passed to the PrependReactor interface are imagestreams and list, respectively.

      You can see the entire test function TestGetImageStream in pkg/occlient/occlient_test.go.

      Note
      You can use environment variable CUSTOM_HOMEDIR to specify a custom home directory. It can be used in environments where a user and home directory are not resolvable.
  3. In the case where functions fetch or create new objects through the APIs, add a reactor interface returning fake objects.

  4. Verify the objects returned.
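
Putting these steps together, a complete test might look like the following sketch. Here fakeImageStreamList is a hypothetical helper that builds an imagev1.ImageStreamList for the reactor to return; the real test, TestGetImageStream in pkg/occlient/occlient_test.go, is more thorough:

    import (
        "testing"

        "k8s.io/apimachinery/pkg/runtime"
        ktesting "k8s.io/client-go/testing"
    )

    func TestGetImageStreams(t *testing.T) {
        // Step 2: initialize the fake client and clientsets.
        client, fkclientset := FakeNew()

        // Step 3: register a reactor that returns fake objects for the
        // "list imagestreams" API call instead of contacting a cluster.
        // fakeImageStreamList is a hypothetical helper for this sketch.
        fkclientset.ImageClientset.PrependReactor("list", "imagestreams", func(action ktesting.Action) (bool, runtime.Object, error) {
            return true, fakeImageStreamList("nodejs", "myproject"), nil
        })

        // Call the function under test.
        streams, err := client.GetImageStreams("myproject")
        if err != nil {
            t.Fatalf("unexpected error: %v", err)
        }

        // Step 4: verify the returned objects.
        if len(streams) != 1 || streams[0].Name != "nodejs" {
            t.Errorf("unexpected imagestreams: %v", streams)
        }
    }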

Note
Refer to https://github.com/golang/go/wiki/LearnTesting for Go best practices on unit testing.

Integration and e2e tests

Prerequisites:

  • A minishift or OpenShift environment with Service Catalog enabled:

    $ MINISHIFT_ENABLE_EXPERIMENTAL=y minishift start --extra-clusterup-flags "--enable=*,service-catalog,automation-service-broker,template-service-broker"
  • odo and oc binaries in $PATH.

Integration tests:

Integration tests utilize Ginkgo and its preferred matcher library Gomega, which define sets of test cases (specs). In Ginkgo, a test file comprises specs, and these test files are controlled by a test suite. Test and test suite files are located in the tests/integration directory and are invoked through .PHONY targets in the Makefile. Integration tests validate and focus on specific areas of odo functionality or individual commands, for example cmd_app_test.go or generic_test.go.
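
For reference, a test suite file is little more than a hook that registers the Gomega fail handler and hands every spec in the package to Ginkgo. The following is a minimal sketch; the file name and suite name are illustrative:

    // tests/integration/integration_suite_test.go (illustrative)
    package integration

    import (
        "testing"

        . "github.com/onsi/ginkgo"
        . "github.com/onsi/gomega"
    )

    // TestIntegration wires the package's Ginkgo specs into `go test`.
    func TestIntegration(t *testing.T) {
        RegisterFailHandler(Fail)
        RunSpecs(t, "Integration Suite")
    }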

E2e tests:

E2e (end-to-end) tests use the same libraries as the integration tests. E2e test and test suite files are located in the tests/e2escenarios directory and are invoked through .PHONY targets in the Makefile. An e2e test covers a user-specific scenario, that is, a combination of several features or commands in a single test file.

How to write:

Refer to the odo clean test template.

Test guidelines:

Please follow these guidelines when contributing to odo tests.

  • Before writing a test, make sure the scenario is properly identified as an integration or an e2e scenario.

    For example:
    In the storage feature integration test, the storage command is tested thoroughly, including positive, negative, and corner cases, whereas an e2e scenario exercises only one or two storage commands as part of a larger flow, such as `create component -> link -> add storage -> certain operation -> delete storage -> unlink -> delete component`.
  • Create a new test file for each new feature and give the file a meaningful name. If a test file for the feature already exists, add the new scenario to that file.

    For example:
    For the storage feature, a new storage test file is created. If new functionality is added to the storage feature, the same file is updated with the new scenario. Test file names should follow a common format such as `cmd_<feature name>_test`; the storage feature test file is therefore `cmd_storage_test.go`. The same naming convention applies to e2e tests, such as `e2e_<release name>_test` or `e2e_<full scenario name>_test`.
  • The test description should convey what the spec implements. Use a proper description in the Describe block.

    For example:
    For the storage feature, the appropriate test description would be `odo storage command tests`.
    
    var _ = Describe("odo storage command tests", func() {
        [...]
    })
  • For a better understanding of what a spec does, use proper descriptions in the Context and It blocks.

    For example:
    Context("when running help for storage command", func() {
        It("should display the help", func() {
            [...]
        })
    })
  • Because tests may run in parallel, make sure that each test runs in isolation; otherwise the results will be subject to race conditions. To achieve this, ginkgo provides built-in functions such as BeforeEach and AfterEach.

    For example:
    var _ = Describe("odo generic", func() {
        var project string
        var context string
        var oc helper.OcRunner

        BeforeEach(func() {
            oc = helper.NewOcRunner("oc")
            SetDefaultEventuallyTimeout(10 * time.Minute)
            context = helper.CreateNewContext()
        })

        AfterEach(func() {
            os.RemoveAll(context)
        })

        Context("deploying a component with a specific image name", func() {
            JustBeforeEach(func() {
                os.Setenv("GLOBALODOCONFIG", filepath.Join(context, "config.yaml"))
                project = helper.CreateRandProject()
            })

            JustAfterEach(func() {
                helper.DeleteProject(project)
                os.Unsetenv("GLOBALODOCONFIG")
            })

            It("should deploy the component", func() {
                helper.CmdShouldPass("git", "clone", "https://github.com/openshift/nodejs-ex", context+"/nodejs-ex")
                helper.CmdShouldPass("odo", "create", "nodejs:latest", "testversioncmp", "--project", project, "--context", context+"/nodejs-ex")
                helper.CmdShouldPass("odo", "push", "--context", context+"/nodejs-ex")
                helper.CmdShouldPass("odo", "delete", "-f", "--context", context+"/nodejs-ex")
            })
        })
    })
  • Don’t create a new test file for issues (bugs); instead, add a scenario for each bug fix to an existing file, if applicable.

  • Don’t validate unnecessary text in the Expect of a command’s output. Validating the key text specific to that scenario is enough.

    For example:
    When running push multiple times on the same component without changing any source files:
    
    helper.CmdShouldPass("odo", "push", "--show-log", "--context", context+"/nodejs-ex")
    output := helper.CmdShouldPass("odo", "push", "--show-log", "--context", context+"/nodejs-ex")
    Expect(output).To(ContainSubstring("No file changes detected, skipping build"))
  • If the oc, odo, or generic helper you need is not present in the helper package, create a new library function as the scenario requires (see the sketch after this list).

  • Test specs should run in parallel (the default) or sequentially, as appropriate. Check the test template for reference.

  • Run tests in your local environment before pushing PRs.
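
As mentioned above, new helpers belong in the tests/helper package alongside the existing runners. The following is an illustrative sketch, not an existing helper; it builds on the existing CmdShouldPass function:

    package helper

    import "strings"

    // CmdShouldPassTrimmed runs a command that must succeed and returns its
    // output with surrounding whitespace removed. (Illustrative sketch of a
    // new helper; CmdShouldPass is the existing helper it builds on.)
    func CmdShouldPassTrimmed(program string, args ...string) string {
        return strings.TrimSpace(CmdShouldPass(program, args...))
    }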

Test variables:

A few test environment variables give more control over a test run and its results:

  • TEST_EXEC_NODES: Env variable TEST_EXEC_NODES sets the spec execution type (parallel or sequential) for ginkgo tests. To run the specs sequentially, use TEST_EXEC_NODES=1; otherwise, by default, the specs run in parallel on two ginkgo test nodes. Any value greater than one runs the specs in parallel on that number of ginkgo test nodes.

  • SLOW_SPEC_THRESHOLD: Env variable SLOW_SPEC_THRESHOLD is used by ginkgo tests. Any spec that takes longer than this threshold (in seconds) is marked as slow. The default value is 120 seconds.

  • GINKGO_VERBOSE_MODE: Env variable GINKGO_VERBOSE_MODE controls whether ginkgo verbose mode is enabled for each test target run. By default, ginkgo verbosity is not enabled. To enable it, export or set GINKGO_VERBOSE_MODE=-v.
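
For example, to run the generic tests sequentially with verbose output and a five-minute slow-spec threshold (the values here are illustrative):

    $ TEST_EXEC_NODES=1 SLOW_SPEC_THRESHOLD=300 GINKGO_VERBOSE_MODE=-v make test-generic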

Running integration tests:

By default, tests run against the odo binary in your PATH, which is created by running make. Integration tests can be run in two ways: parallel and sequential. Use the TEST_EXEC_NODES environment variable to control a parallel run. For example, the component command tests can be run as follows:

  • To run the tests in parallel on a test cluster (by default the tests run in parallel on two ginkgo test nodes):

    Run component command integration tests

    $ make test-cmd-cmp
  • To run the component command integration tests sequentially, on a single ginkgo test node:

    Run component command integration tests

    $ TEST_EXEC_NODES=1 make test-cmd-cmp
Note
To see the available integration test targets, press tab just after typing make test-cmd-. The test file generic_test.go handles certain generic test specs and runs them in parallel via make test-generic. Calling make test-integration runs the whole suite in parallel on two ginkgo test nodes, except the service and link tests, irrespective of the Service Catalog status in the cluster. make test-integration-service-catalog runs all the service and link test specs in parallel on a cluster that has the Service Catalog enabled. make test-odo-login-e2e doesn’t honour the TEST_EXEC_NODES environment variable; by default it runs the login and logout command integration test suite sequentially on a single ginkgo test node to avoid race conditions in a parallel run.

Running e2e tests:

E2e (end-to-end) test runs behave in a similar way to integration test runs. To see the available e2e test targets, press tab just after typing make test-e2e-. To execute all e2e test specs at the suite level, use make test-e2e-all. For example:

  • To run the java e2e tests in parallel on a test cluster (by default the tests run in parallel on two ginkgo test nodes):

    $ make test-e2e-java
  • To run the java e2e tests sequentially, on a single ginkgo test node:

    $ TEST_EXEC_NODES=1 make test-e2e-java

Race conditions

Test failures during the execution of the integration tests do occur. For example, the following error has been encountered multiple times:

Operation cannot be fulfilled on deploymentconfigs.apps.openshift.io "component-app": the object has been modified; please apply your changes to the latest version and try again

This happens because the sequence of reading a DeploymentConfig (DC), updating it in memory, and calling Update can fail if the DC is updated concurrently by some other component, usually Kubernetes or OpenShift itself.

It is therefore recommended to avoid the read, update-in-memory, push-update sequence as much as possible. One remedy is to use the Patch operation; for more information, see the Resource Operations section. Another remedy is to retry the operation when the optimistic concurrency error is encountered, as sketched below.
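
As an illustration of the retry remedy, client-go ships a RetryOnConflict helper that re-runs the whole read-modify-update cycle whenever the optimistic concurrency error occurs. The following is a minimal sketch, not odo’s actual code; the scaleDC function and the replica change are illustrative:

    import (
        appsv1client "github.com/openshift/client-go/apps/clientset/versioned/typed/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/util/retry"
    )

    // scaleDC re-reads the latest DeploymentConfig on every attempt, applies
    // the change in memory, and retries the Update if it hits a conflict.
    func scaleDC(client appsv1client.DeploymentConfigInterface, name string, replicas int32) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            dc, err := client.Get(name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            dc.Spec.Replicas = replicas
            _, err = client.Update(dc)
            return err
        })
    }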

Setting custom Init Container image for bootstrapping Supervisord

For quick deployment of components, odo uses the Supervisord process manager. Supervisord is deployed via an Init Container image.

ODO_BOOTSTRAPPER_IMAGE is an environment variable that specifies the Init Container image used for the Supervisord deployment. You can modify the value of the variable to use a custom Init Container image. The default Init Container image is quay.io/openshiftdo/init.

  1. To set a custom Init Container image, run:

    export ODO_BOOTSTRAPPER_IMAGE=quay.io/myrepo/myimage:test
  2. To revert back to the default Init Container image, unset the variable:

    unset ODO_BOOTSTRAPPER_IMAGE

Dependency management

odo uses glide to manage dependencies. glide is not strictly required for building odo, but it is required when managing dependencies under the vendor/ directory.

If you want to make changes to dependencies, please make sure that glide is installed and in your $PATH.

Installing glide

  1. Download glide:

    $ go get -u github.com/Masterminds/glide
  2. Check that glide is working:

    $ glide --version

Using glide to add a new dependency

Adding a new dependency

  1. Update the glide.yaml file. Add the new package or sub-packages to the glide.yaml file. You can add a whole new package as a dependency or just a few sub-packages (see the example after these steps).

  2. Run glide update --strip-vendor to get the new dependencies.

  3. Commit the updated glide.yaml, glide.lock and vendor files to git.
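
As referenced in step 1, a glide.yaml import entry pins a package to a version and can restrict the dependency to specific sub-packages. The following is an illustrative sketch; the package names and versions are examples only:

    package: github.com/openshift/odo
    import:
    - package: github.com/pkg/errors
      version: v0.8.1
    - package: k8s.io/client-go
      version: kubernetes-1.11.0
      subpackages:
      - kubernetes
      - util/retry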

Updating dependencies

  1. Set new package version in glide.yaml file.

  2. Run glide update --strip-vendor to update the dependencies.

Release guide

Releasing a new version

Making artifacts for a new release is automated. When a new git tag is created, the Travis-CI deploy job automatically builds binaries and uploads them to the GitHub release page.

To release a new version:

  1. Create a PR that:

    • Bumps the version number. The helper script scripts/bump-version.sh changes the version number in all the relevant files (except odo.rb).

    • Updates the CLI reference documentation in the docs/cli-reference.md file:

      $ make generate-cli-reference

  2. Merge the above PR.

  3. Once the PR is merged, create and push the new git tag for the version.

      $ git tag v0.0.1
      $ git push upstream v0.0.1

      Or create the new release using the GitHub site (this must be a proper release and not a draft).

      Note
      Do not upload any binaries for the release. When the new tag is created, Travis-CI starts a special deploy job. This job builds the binaries automatically (using make prepare-release) and then uploads them to the GitHub release page. When the job finishes, you should see the binaries on the GitHub release page. The release is now marked as a draft.
  4. Update the descriptions and publish the release.

  5. Verify that packages have been uploaded to the rpm and deb repositories.

  6. Update the Homebrew package:

    1. Check the commit id for the released tag: git show-ref v0.0.1

      2. Create a PR to update :tag and :revision in the odo.rb file in kadel/homebrew-odo.

  7. Confirm the binaries are available on the GitHub release page.

  8. Create a PR and update the file build/VERSION with the latest version number.

Writing machine readable output code

Here are some tips to consider when writing machine-readable output code.

  • Match similar Kubernetes / OpenShift API structures

  • Put as much information as possible within Spec

  • Use json:"foobar" within structs to rename the variables

Within odo, we marshal all information from a struct to JSON. Within this struct, we use TypeMeta and ObjectMeta in order to supply metadata information coming from Kubernetes / OpenShift.

Below is a working example of how we would implement a "HelloWorld" struct.

  package main

  import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // Create the struct. Here we use TypeMeta and ObjectMeta
  // as required to create a "Kubernetes-like" API.
  type GenericSuccess struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`
    Message           string `json:"message"`
  }

  func main() {

    // Create the actual struct that we will use
    // you will see that we supply a "Kind" and
    // APIVersion. Name your "Kind" to what you are implementing
    machineOutput := GenericSuccess{
      TypeMeta: metav1.TypeMeta{
        Kind:       "HelloWorldExample",
        APIVersion: "odo.openshift.io/v1alpha1",
      },
      ObjectMeta: metav1.ObjectMeta{
        Name: "MyProject",
      },
      Message: "Hello API!",
    }

    // We then marshal the output and print it out. Note the string
    // conversion: printing the raw []byte would output numbers.
    printableOutput, err := json.Marshal(machineOutput)
    if err != nil {
      panic(err)
    }
    fmt.Println(string(printableOutput))
  }
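
When run, this program should print a single JSON document along these lines (creationTimestamp appears as null because ObjectMeta is embedded as a struct and cannot be omitted):

    {"kind":"HelloWorldExample","apiVersion":"odo.openshift.io/v1alpha1","metadata":{"name":"MyProject","creationTimestamp":null},"message":"Hello API!"}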

odo-bot

odo-bot is the GitHub user that provides automation for certain tasks in odo.

It uses the .travis.yml script to upload binaries to the GitHub release page using the deploy-github-release personal access token.

Licenses

odo uses wwhrd to check license compatibility of vendor packages. The configuration for wwhrd is stored in .wwhrd.yml.

The whitelist section is for licenses that are always allowed. The blacklist section is for licenses that are never allowed and will always fail a build. Any licenses that are not explicitly mentioned come under the exceptions section and need to be explicitly allowed by adding the import path to the exceptions.

More details about the license compatibility check tool can be found here.
