Integration tests provide end-to-end testing of ICP.
While unit tests verify that the code works as expected by relying on mocks and artificially created fixtures, integration tests exercise a real ICP environment through the kubectl CLI.
Note that integration tests do NOT replace unit tests.
As a rule of thumb, code should be tested thoroughly with unit tests; integration tests, on the other hand, are meant to test a specific feature end to end.
Integration tests are written in bash using the bats framework.
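For orientation, a minimal bats test file looks like this (the echo command is just an illustrative placeholder; `run`, `$status` and `$output` are bats built-ins):

```shell
#!/usr/bin/env bats

# `run` executes a command, capturing its exit code in $status and its
# combined stdout/stderr in $output; plain shell assertions then check both.
@test "example: run captures status and output" {
  run echo "hello"
  [ "$status" -eq 0 ]
  [ "$output" = "hello" ]
}
```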
$ git clone https://github.com/ibm-cloud-architecture/icp-validation-tests.git
$ cd icp-validation-tests/
$ ./run.sh
The following tools are used and required by these scripts:
- bash
- jq
- grep
- awk
- git
- bats-core

On macOS, these can be installed with Homebrew:
- brew install jq
- brew install bash
- brew install coreutils
- brew install bats
- brew install git
There are several variables you can set to modify the test suite:
- NAMESPACE sets the namespace to create test pods in. This namespace will be created and deleted by the test framework. Defaults to ivt if not set.
- SERVER sets the IP or hostname used for cluster access (Master IP, VIP or LoadBalancer). If SERVER is not set, an attempt will be made to extract it from kubectl if the user is already authenticated.
- KUBECTL sets the path of the kubectl command. Defaults to any available kubectl in the path if not set. If no kubectl is available, kubectl will be downloaded from the ICP cluster and placed in /usr/local/bin.
- ROUTER_HTTPS_PORT sets the port for the management ingress, where the dashboard and ICP services can be found. Defaults to 8443 if not set.
- USERNAME sets the admin username to use when connecting to the cluster. Defaults to admin if not set.
- PASSWORD sets the admin password to use when connecting to the cluster. Defaults to admin if not set.
- KUBE_APISERVER_PORT sets the Kubernetes API Server port. Defaults to 8001 if not set.
- IPSEC_ENABLED set to "true" to enable IPSEC tests on the environment. IPSEC cannot be detected automatically, so IPSEC tests will not run unless explicitly enabled.
export SERVER="10.0.0.10"
export PASSWORD="MyVerySafePassword"
./run.sh
There are a number of helper functions available.
sequential helpers: This is a framework specifically designed to help with tests where there is a given relationship between the different test cases in a single test file.
You can easily indicate the applicability of all the tests in a single bats file by defining a function called applicable() which returns 0 (true) if the environment is applicable for the tests and 1 (false) if not.
If the applicable() function returns 1 in a given environment, all tests defined in that file will automatically be skipped with the message Not applicable in this environment.
Example use
#!/usr/bin/env bats
load ${APP_ROOT}/libs/sequential-helpers.bash
function applicable() {
if [[ "${API_VERSIONS[@]}" =~ "metrics.k8s.io" ]]; then
return 0
else
return 1
fi
}
@test "usual test definition" {
  # your command as usual; it only executes if applicable() returned 0
  run kubectl top nodes
  [ "$status" -eq 0 ]
}
#!/usr/bin/env bats
load ${APP_ROOT}/libs/sequential-helpers.bash
function applicable() {
n=$(kubectl -n kube-system get deployments | grep myfeaturedeployment )
if [[ "$n" =~ "myfeaturedeployment" ]]; then
return 0
else
return 1
fi
}
@test "usual test definition" {
  # your command as usual; it only executes if applicable() returned 0
  run kubectl -n kube-system get deployment myfeaturedeployment
  [ "$status" -eq 0 ]
}
A fairly common scenario is to create some environment that tests will be run against. This could be creating a deployment that is scaled up, down, connected to, etc. However, if the initial creation of the deployment is not successful, there is no need to attempt to run all the test cases in the bats file.
To address this, you can simply define a create_environment() function to define how the environment should be created before tests start running. A destroy_environment() function can be defined to clean up the environment after all test cases have run. Optionally, you can also define an environment_ready() function to determine when the environment is ready to start accepting test cases.
If create_environment() returns a status of 1, all tests in the file will be skipped or failed depending on configuration. This is configured via the global ON_SETUP_FAIL variable, which can be set to skip (all tests skip with the message Environment setup failed), fail (all tests automatically fail with the message Environment setup failed) or failfirst (the default, which fails the first test case and skips the remainder).
The environment_ready function will be retried at regular intervals. If it does not return a status of 0 within ENV_READY_TIMEOUT seconds, all tests will be skipped or failed as with create_environment, based on the ON_SETUP_FAIL setting, but with the message Timed out waiting for environment to become ready.
Example
#!/usr/bin/env bats
# Override some defaults
ON_SETUP_FAIL="fail" # We want all test cases to fail if environment setup fails
ENV_READY_SLEEP="2" # Seconds between each attempt to run environment_ready function
ENV_READY_TIMEOUT="60" # Seconds before timing out waiting for environment to become ready
load ${APP_ROOT}/libs/sequential-helpers.bash
create_environment() {
kubectl create -f ${TEST_SUITE_ROOT}/mytest/templates/myapp.yaml
return $?
}
environment_ready() {
status=$(kubectl get pods -l run=nginx,test=deployment --no-headers | awk '{print $3}')
if [[ "$status" == "Running" ]]; then
return 0
else
return 1
fi
}
destroy_environment() {
kubectl delete -f ${TEST_SUITE_ROOT}/mytest/templates/myapp.yaml
}
@test "usual test definition" {
  # your command as usual, running against the environment created above
  run kubectl get pods -l run=nginx,test=deployment
  [ "$status" -eq 0 ]
}
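The environment_ready retry behaviour described above can be sketched in plain bash. This is only an illustration of the polling logic, not the framework's actual implementation; the stand-in readiness check and variable defaults are made up for the example:

```shell
#!/usr/bin/env bash
# Illustrative polling loop: retry environment_ready every ENV_READY_SLEEP
# seconds until it succeeds or ENV_READY_TIMEOUT seconds have elapsed.
ENV_READY_SLEEP=1
ENV_READY_TIMEOUT=5

attempts=0
environment_ready() {
  # Stand-in for a real readiness check; reports ready on the third attempt.
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

elapsed=0
ready=1
while [ "$elapsed" -lt "$ENV_READY_TIMEOUT" ]; do
  if environment_ready; then
    ready=0
    break
  fi
  sleep "$ENV_READY_SLEEP"
  elapsed=$((elapsed + ENV_READY_SLEEP))
done
echo "ready=$ready after $attempts attempts"
```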
Sometimes you may want failed tests to be left intact so the environment can be used for debugging at a later point. Where the create_environment and destroy_environment functions are used, you can set this behaviour via the ROTATE_NAMESPACE variable. This variable can be set globally or optionally overridden per test file.
When ROTATE_NAMESPACE is enabled, a failure will cause destroy_environment not to be called, and subsequent tests will run in a new namespace named NAMESPACEN, where N is a number series starting at 1 for the first failed test.
ROTATE_NAMESPACE can be set to on_setup_fail which triggers this behaviour if create_environment or environment_ready fails, on_test_fail which triggers this behaviour if fail_subsequent or skip_subsequent are called, and on_any_fail which is a combination of these.
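As a sketch, enabling rotation in a test file might look like this (the placement at the top of the file mirrors the earlier ON_SETUP_FAIL example; the chosen mode is just one of the values listed above):

```shell
#!/usr/bin/env bats
# Keep the namespace around for debugging if create_environment or
# environment_ready fails; subsequent work moves to a fresh namespace.
ROTATE_NAMESPACE="on_setup_fail"

load ${APP_ROOT}/libs/sequential-helpers.bash
```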
Some times we need to run a series of test cases that are dependent on each other. For example,
- add user X,
- then perform action Y with the new user X,
- then delete user X
- validate that something that was expected to happen happened
These form a connected sequence of events, and if we hit a problem in any of these steps, we do not want to attempt the subsequent steps.
For these scenarios we have a function assert_or_bail, which will "bail" subsequent tests with skip or fail if a given assertion fails.
So for example
@test "create user x" {
run "command to create user x"
assert_or_bail "[[ '$output' =~ 'User x created' ]]"
}
@test "User X should not be allowed to access resource y" {
run "command for x to access y"
assert_or_bail "[[ $status -eq 1 ]]"
assert_or_bail "[[ '$output' =~ 'Access denied' ]]"
}
@test "User x can be deleted" {
run "command to delete user x"
assert_or_bail "[[ '$output' =~ 'User deleted' ]]"
}
@test "Delete user should cleanup some group" {
run "command to query group"
assert_or_bail "[[ ! '$output' =~ 'user x' ]]"
}
NOTE: When using assert_or_bail any complex assertion using compound commands (such as [[) must be passed within single or double quotes. Since assertions typically use variables, you will want to double quote the outer statement and single quote the variable. For example assert_or_bail "[[ '$foo' == 'bar' ]]"
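The effect of this quoting rule can be demonstrated with plain bash. Evaluating the assertion string with eval is an assumption about how assert_or_bail works internally, but the quoting behaviour itself is standard bash: the outer double quotes expand $output at the moment the string is built, and the inner single quotes keep the expanded value as a single word:

```shell
#!/usr/bin/env bash
# Demonstration of the quoting rule for compound assertions.
# NOTE: using eval here is an assumption about assert_or_bail's internals;
# the expansion behaviour shown is plain bash.
output="User x created"

# Outer double quotes: $output expands NOW; inner single quotes keep the
# expanded value intact when the string is later evaluated.
assertion="[[ '$output' =~ 'User x created' ]]"

if eval "$assertion"; then
  result="pass"
else
  result="fail"
fi
echo "$result"
```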
The global variable ON_ASSERT_FAIL can be set to skip_subsequent or fail_subsequent, which will lead subsequent tests to be skipped or failed after an assertion has failed.
When the global variable ROTATE_NAMESPACE is set to on_test_fail or on_any_fail, an assert_or_bail failure will also cause a namespace rotation and skip destroy_environment as described above.
It is also possible to call skip_subsequent or fail_subsequent directly at any stage to skip or fail the subsequent tests.
For example
@test "test something" {
run some random command that does something
if [[ "$output" =~ something ]]; then
# There may be some conditions where we determine that
# the rest of the tests in this file can be skipped
skip_subsequent
fi
[[ $status -eq 0 ]]
}
There are several global variables you can use to introspect on Bats tests:
- $BATS_TEST_FILENAME is the fully expanded path to the Bats test file.
- $BATS_TEST_DIRNAME is the directory in which the Bats test file is located.
- $BATS_TEST_NAMES is an array of function names for each test case.
- $BATS_TEST_NAME is the name of the function containing the current test case.
- $BATS_TEST_DESCRIPTION is the description of the current test case.
- $BATS_TEST_NUMBER is the (1-based) index of the current test case in the test file.
- $BATS_TMPDIR is the location to a directory that may be used to store temporary files.
- $NAMESPACE is the namespace to run tests against / in
- $ARCH is the processor architecture targeted for the tests
- $K8S_SERVERVERSION_MAJOR is the Kubernetes Server Major version
- $K8S_SERVERVERSION_MINOR is the Kubernetes Server Minor version
- $K8S_SERVERVERSION_STR is the Kubernetes Server gitVersion string
- $K8S_CLIENTVERSION_MAJOR is the kubectl Major version
- $K8S_CLIENTVERSION_MINOR is the kubectl Minor version
- $K8S_CLIENTVERSION_STR is the kubectl gitVersion string
- $ICPVERSION_MAJOR is the ICP platform Major version -- i.e. 2 in 2.1.0.3
- $ICPVERSION_MINOR is the ICP platform Minor version -- i.e. 1 in 2.1.0.3
- $ICPVERSION_PATCH is the ICP platform Patch version -- i.e. 0 in 2.1.0.3
- $ICPVERSION_REV is the ICP platform Revision version (if available) -- i.e. 3 in 2.1.0.3
- $ICPVERSION_STR is the ICP platform full version string
- $API_VERSIONS is an array of Kubernetes API versions
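For example, a test file could gate on the server version variables. The sketch below uses made-up default values in place of what the framework would normally provide, and assumes the minor version is purely numeric:

```shell
#!/usr/bin/env bash
# Hypothetical gate: only treat metrics tests as applicable on
# Kubernetes 1.8 or later. The :- defaults stand in for values the
# framework would normally set; a real minor version may need sanitizing.
K8S_SERVERVERSION_MAJOR="${K8S_SERVERVERSION_MAJOR:-1}"
K8S_SERVERVERSION_MINOR="${K8S_SERVERVERSION_MINOR:-9}"

if [ "$K8S_SERVERVERSION_MAJOR" -gt 1 ] ||
   { [ "$K8S_SERVERVERSION_MAJOR" -eq 1 ] && [ "$K8S_SERVERVERSION_MINOR" -ge 8 ]; }; then
  applicable="yes"
else
  applicable="no"
fi
echo "metrics tests applicable: $applicable"
```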
Since the test suite is expected to run in both airgapped and non-airgapped environments, it is desirable to limit the number of images used, so as to limit the number of images that must be added to airgapped environments as a prerequisite to running the tests.
Suggested images to use:
- nginx -- where network functionality is required, i.e. requiring something to listen on a port
- busybox -- a small, lightweight image with the most useful tools available
One “gotcha” that might come up after using Bats for a little while is testing the result of piped commands.
For example, here’s a command which echoes a string, then slices it up on spaces and picks the second value:
$ echo 'foo bar baz' | cut -d' ' -f2
bar
So you might expect a test like this to pass:
#!/usr/bin/env bats
@test "Test that we get the word 'bar'" {
run echo 'foo bar baz' | cut -d' ' -f2
[ $output = "bar" ]
}
But in reality it fails:
$ bats gotcha1.bats
✗ Test that we get the word 'bar'
(in test file /Users/ross/bats/gotcha1.bats, line 5)
1 test, 1 failure
This can be a bit of a head-scratcher at first, but it makes sense when you realize that run is just like any other Unix command: the pipe applies to the run invocation itself, so the output of run (which prints nothing) is piped to cut, and $output is never set in the test.
The solution is to encapsulate the entire command being tested as a bash -c inline string:
#!/usr/bin/env bats
@test "Test that we get the word 'bar'" {
run bash -c "echo 'foo bar baz' | cut -d' ' -f2"
[ "$output" = "bar" ]
}
Which produces the desired result:
$ bats gotcha1.bats
✓ Test that we get the word 'bar'
1 test, 0 failures