This repository holds the non-Kubernetes, end-to-end tests that must pass against a running cluster before PRs merge and/or before we ship a release. These tests are based on Ginkgo and the Kubernetes e2e test framework.
- Git installed. See Installing Git.
- Golang installed. See Installing Golang; the newer the better.
  - Ensure you install Golang from a binary release found here, not with a package manager such as `dnf`.
- Have the environment variable `KUBECONFIG` set, pointing to your cluster.
If you want to run the public openshift-tests test cases as well, you can include the corresponding package here. For example, you can include the public `extended/operators` test cases and then run `make update-public` to pull them in, or run `make all`.

If you create a new folder for your test cases, please add its path to `include.go`.
If your code uses new YAML files, you have to generate the bindata first: run `make update` to refresh it. For example, you can see that the bindata has been updated after running `make update`:
```
$ git status
modified: test/extended/testdata/bindata.go
new file: test/extended/testdata/olm/etcd-subscription-manual.yaml
```
Note that we use Go modules for package management; the previous `GOPATH`-based workflow is deprecated.
```
$ git clone git@github.com:openshift/openshift-tests-private.git
$ cd openshift-tests-private/
$ make build
mkdir -p "bin"
export GO111MODULE="on" && export GOFLAGS="" && go build -o "bin" "./cmd/extended-platform-tests"
$ ls -hl ./bin/extended-platform-tests
-rwxrwxr-x. 1 cloud-user cloud-user 165M Jun 24 22:17 ./bin/extended-platform-tests
```
Below are the general steps for submitting a PR. First, fork this repo to your own GitHub account.
```
$ git remote add <Your Name> git@github.com:<Your Github Account>/openshift-tests-private.git
$ git pull origin master
$ git checkout -b <Branch Name>
$ git add xxx
$ make build
$ ./bin/extended-platform-tests run all --dry-run | grep <Test Case ID> | ./bin/extended-platform-tests run -f -
$ git commit -m "xxx"
$ git push <Your Name> <Branch Name>:<Branch Name>
```
A prompt will then appear in your GitHub repo console to open a PR; click it to do so.
The binary finds a test case by searching for its title, matching titles with regular expressions (REs). So you can filter test cases with `grep`. For example, to run all OLM test cases, all of which contain the string `OLM` in their titles, filter them with `grep OLM` as follows:
```
$ ./bin/extended-platform-tests run all --dry-run | grep "OLM" | ./bin/extended-platform-tests run -f -
I0624 22:48:36.599578 2404223 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
"[sig-operators] OLM for an end user handle common object Author:kuiwang-Medium-22259-marketplace operator CR status on a running cluster [Exclusive] [Serial]"
...
```
You can save the above output to a file and run it:

```
$ ./bin/extended-platform-tests run -f <your file path/name>
```
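To illustrate the file-based flow offline, here is a sketch with a hypothetical saved dry-run listing (the third title below is invented for the example):

```shell
# A fake dry-run listing: one test-case title per line.
cat > /tmp/dryrun.txt <<'EOF'
"[sig-operators] OLM for an end user Author:kuiwang-Medium-22259-marketplace operator CR status [Serial]"
"[sig-operators] OLM for an end user Author:jiazha-Critical-23440-can subscribe to the etcd operator [Serial]"
"[sig-storage] storage Author:someone-High-10001-an unrelated case"
EOF

# The same grep filter used in the pipelines above, saved to a file that
# could then be fed to `./bin/extended-platform-tests run -f /tmp/olm-cases.txt`.
grep "OLM" /tmp/dryrun.txt > /tmp/olm-cases.txt
wc -l < /tmp/olm-cases.txt    # 2 matching cases
```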
If you want to run a single test case, such as `g.It("Author:jiazha-Critical-23440-can subscribe to the etcd operator [Serial]")`, you can filter on the `TestCaseID`, since it is unique:

```
$ ./bin/extended-platform-tests run all --dry-run | grep "23440" | ./bin/extended-platform-tests run --junit-dir=./ -f -
```
Sometimes we want to keep the generated namespaces for debugging. Just set the environment variable `export DELETE_NAMESPACE=false`. The random namespaces will then be kept, like below:
```
...
Dec 18 09:39:33.448: INFO: Running AfterSuite actions on all nodes
Dec 18 09:39:33.448: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
Dec 18 09:39:33.511: INFO: Found DeleteNamespace=false, skipping namespace deletion!
Dec 18 09:39:33.511: INFO: Running AfterSuite actions on node 1
...
1 pass, 0 skip (2m50s)
[root@preserve-olm-env openshift-tests-private]# oc get ns
NAME                            STATUS   AGE
default                         Active   4h46m
e2e-test-olm-a-a92jyymd-lmgj6   Active   4m28s
e2e-test-olm-a-a92jyymd-pr8hx   Active   4m29s
...
```
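When you are done debugging, the preserved namespaces need manual cleanup. Here is a sketch of selecting them; a sample listing stands in for `oc get ns -o name` so the filter can run offline:

```shell
# Filter namespaces matching the e2e-test- prefix seen in the listing above.
printf '%s\n' \
  namespace/default \
  namespace/e2e-test-olm-a-a92jyymd-lmgj6 \
  namespace/e2e-test-olm-a-a92jyymd-pr8hx |
  grep '^namespace/e2e-test-' > /tmp/preserved-ns.txt
cat /tmp/preserved-ns.txt
# Against a live cluster: oc get ns -o name | grep '^namespace/e2e-test-' | xargs oc delete
```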
You will get the error below when running test cases on the GCP platform:

```
E0628 22:11:41.236497 25735 test_context.go:447] Failed to setup provider config for "gce": Error building GCE/GKE provider: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
```

You need to export the environment variable below before running tests on GCP:

```
$ export GOOGLE_APPLICATION_CREDENTIALS=<path to your gce credential>
```
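A small sketch of verifying the credential file before running (the placeholder file and path here are examples only, not a real key):

```shell
# Create a placeholder key file just to demonstrate the check; point the
# variable at your real GCE service-account JSON instead.
echo '{"type": "service_account"}' > /tmp/gce-credentials.json
export GOOGLE_APPLICATION_CREDENTIALS=/tmp/gce-credentials.json

# Fail fast if the file is missing or unreadable.
if [ -r "$GOOGLE_APPLICATION_CREDENTIALS" ]; then
  echo "credentials readable: $GOOGLE_APPLICATION_CREDENTIALS"
else
  echo "ERROR: cannot read $GOOGLE_APPLICATION_CREDENTIALS" >&2
fi
```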
Alternatively, you can use the ginkgo-test job to execute your case.
You may get a `400 Bad Request` error even after exporting the variable above. This error means it is time to update the service account (SA):

```
E0628 22:18:22.290137 26212 gce.go:876] error fetching initial token: oauth2: cannot fetch token: 400 Bad Request
Response: {"error":"invalid_grant","error_description":"Invalid JWT Signature."}
```
You can update the SA by following this authentication guide, or you can raise an issue here.

- Click the apis link.
- From the Service account list, select New service account.
- In the Service account name field, enter a name.
- Click Create. A JSON file that contains your key downloads to your computer.
In order to execute cases on a cluster built on the Azure platform, you have to configure the `AZURE_AUTH_LOCATION` environment variable, which points to a file containing the Azure subscriptionId, clientId, clientSecret, etc. You can get `config/credentials/azure.json` from the private repo `cucushift-internal`.
Note that if you cannot get the Azure secret successfully, you can still debug/run your test cases via the Jenkins job.

```
export AZURE_AUTH_LOCATION=<path to azure.json>
```
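A minimal sketch of the expected shape of `azure.json` (the field names follow the text above, plus a `tenantId`, which Azure auth files typically include; all values are placeholders):

```shell
# Write a placeholder azure.json; in practice, use the real file from
# the cucushift-internal repo instead.
cat > /tmp/azure.json <<'EOF'
{
  "subscriptionId": "00000000-0000-0000-0000-000000000000",
  "clientId": "00000000-0000-0000-0000-000000000000",
  "clientSecret": "placeholder-secret",
  "tenantId": "00000000-0000-0000-0000-000000000000"
}
EOF
export AZURE_AUTH_LOCATION=/tmp/azure.json

# Sanity-check that the required keys are present before running tests.
for key in subscriptionId clientId clientSecret; do
  grep -q "\"$key\"" "$AZURE_AUTH_LOCATION" && echo "found $key"
done
```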
- Add your ssh key to https://code.engineering.redhat.com/gerrit/#/settings/ssh-keys
- Clone the repo: `ssh://<your user>@code.engineering.redhat.com:22/cucushift-internal`, for example:
```
[root@preserve-olm-env data]# git clone ssh://jiazha@code.engineering.redhat.com:22/cucushift-internal
Cloning into 'cucushift-internal'...
remote: Total 1367 (delta 0), reused 1367 (delta 0)
Receiving objects: 100% (1367/1367), 263.87 KiB | 0 bytes/s, done.
Resolving deltas: 100% (516/516), done.
[root@preserve-olm-env data]# cd cucushift-internal/
[root@preserve-olm-env cucushift-internal]# ls config/credentials/
azure.json  crw  gce.json  micro_eng  openshift-qe-regional_v4.json  ssp
ccx-qe  deprecated.openshift-qe-gce_v4.json  gce-ocf.json  msg-client-aos-automation.pem  openshift-qe-shared-vpc_v4.json  vmc.json
cfme  dockerhub  gce_v4.json  openshift-qe-gce_v4.json  perf-eng
```
You can use the ginkgo-test job to run your test case from your own repo. Here are the parameters:

- SCENARIO: your case ID
- FLEXY_BUILD: the Launch Environment Flexy build ID for the cluster you use
- TIERN_REPO_OWNER: your GitHub account
- TIERN_REPO_BRANCH: your branch holding the debug case code
- JENKINS_SLAVE: gocxx, where xx is your cluster release version; for example, goc47 for a 4.7 cluster
- For other parameters, take the default values.
Here are the procedures:

- Push the case code into your repo on your branch, for example, example-branch.
- Launch a build with parameters. For example, suppose you push the code for case ID 12345 into example-branch, and your Flexy job is 6789, using the 4.7 release. After you push the code to your repo, you can launch a ginkgo-test job as follows:
  - SCENARIO: 12345
  - FLEXY_BUILD: 6789
  - TIERN_REPO_OWNER: exampleaccount
  - TIERN_REPO_BRANCH: examplebranch
  - JENKINS_SLAVE: goc47
For more details on writing tests for the extended test suite, see the extended test suite README.
For more details on writing tests for the Console, see the Console tests README.