AJ Foster edited this page Jul 29, 2015 · 3 revisions

Transit has an embedded test framework (located in transit/test/). This framework has access to individual functions for unit tests as well as the transit main() for integration tests.

Running the Test Suite

To test the Transit/RT module, cd to the transit directory and run make test. The module will be recompiled and tested according to the main function in test.c. Be aware that existing executable files will be removed prior to compilation.

Test Structure

There are several layers within the framework. Here they are described from general to specific:

  • transit/test/test.c is the test driver. When the Transit/RT module is compiled using the TEST_TRANSIT compiler directive, main() is located here.
    • main() has a setup function, tr_setup_tests(), which sets up the test suite.
    • Tests and test batches are called here.
    • main() also has a teardown function, tr_finish_tests(), which cleans up the test suite.
  • transit/test/test_*.c are other test files. It is customary to name them based on the .c files they test, e.g. test_opacity.c contains batches and tests for opacity.c.
  • Test batches are functions that act as collections of tests.
    • Each test batch begins with a setup function, tr_setup_batch().
    • Tests and other batches can be called here.
    • Each test batch ends with a teardown function, tr_finish_batch(), which acts as the return statement.
  • Tests are functions that act as collections of assertions.
    • Each test is expected to return NULL at the end (unless an assertion causes otherwise).
    • Tests will run every assertion unless a skip (tr_skip()) or assertion failure occurs.
    • If an assertion fails, other assertions within that test will not be run, but future tests will.
  • Assertions are specific checks to test the module.
    • There are many types of assertions (see below).
    • Each assertion has a message attached to describe what it means if the assertion fails.

Writing Tests

Below is an example main(), test batch, passing test, and failing test. Example output is included.

TR_TEST test_that_true_is_true () {
  tr_assert(1, "One is not true");
  return NULL;
}

TR_TEST test_that_false_is_true () {
  tr_assert(0, "Zero is not true");
  tr_assert(NULL, "NULL is not true");
  return NULL;
}

TR_BATCH test_truth_values () {
  tr_setup_batch();
  tr_run_test(test_that_true_is_true);
  tr_run_test(test_that_false_is_true);
  tr_finish_batch();
}

// This would be located in transit/test/test.c
int main (int argc, char **argv) {
  tr_setup_tests();
  tr_run_batch(test_truth_values);
  tr_finish_tests();
}

The above creates the following output:

Removing non-source files.
Compiling src/argum.o.
(...)
Compiling test/test.o.
Building executable "test_transit".
Starting test suite.
test_that_true_is_true: Pass
test_that_false_is_true: Fail
  Zero is not true

Summary | Tests: 2, Failures: 1

There are a few important things to note here:

  • When make test is run, all compiled files are removed. Compilation is the first test!
  • You can run the test_transit executable yourself, if you desire.
  • When a test passes (i.e. none of its assertions fail), "Pass" is displayed beside the test name.
  • Test names are flushed to the output before the test itself runs, so any error messages appear beside the correct test name (even if the test crashes mid-run).
  • If a test encounters a failed assertion (tr_assert(0, "Zero is not true")), the test terminates immediately. Note that the assertion labeled "NULL is not true" was not tested.
  • Similarly, assertions are skipped if you use tr_skip().

In an ideal world, tests would continue even in the event of a runtime error. Unfortunately, C offers no reliable way to recover from errors such as segmentation faults, so a crash in one test terminates the entire suite.

Writing Assertions

Every assertion has a logical component and a message. Assertion messages should describe what went wrong when the assertion fails. This makes the problem easy to understand in the test output.

Below is a comprehensive list of assertions you can use:

  • tr_assert(assertion, message): Tests the truth value of the given assertion, which is expected to be a boolean statement or value. If the test returns false, then the given message is returned as a failed assertion.
    • tr_assert(some_var, "Some_var is not true")
    • tr_assert(double_var < 2.0, "Double_var is not less than two")
    • tr_assert(some_func(), "Some_func returned zero")
  • tr_assert_not(assertion, message): Tests the truth value of the given assertion, which is expected to be a boolean statement or value. If the test returns true, then the given message is returned as a failed assertion.
    • tr_assert_not(some_var, "Some_var is not false")
    • tr_assert_not(double_var < 2.0, "Double_var is less than two")
    • tr_assert_not(some_func(), "Some_func returned a non-zero value")
  • tr_assert_equal(value1, value2, message): Tests for integer equality between the given values. This is not intended to be used for floating point values (see tr_assert_close) or strings (see tr_assert_equal_str). Note that this test can be relevant for other boolean-like values such as 0, 1, NULL, etc.
    • tr_assert_equal(some_var, 10, "Some_var was not 10")
    • tr_assert_equal(some_func(), -1, "Some_func did not return -1")
  • tr_assert_not_equal(value1, value2, message): Tests for integer inequality between the given values. This is not intended to be used for floating point values (see tr_assert_close) or strings (see tr_assert_not_equal_str). Note that this test can be relevant for other boolean-like values such as 0, 1, NULL, etc.
    • tr_assert_not_equal(some_var, 1, "Some_var was 1")
    • tr_assert_not_equal(some_func(), NULL, "Some_func returned NULL")
  • tr_assert_close(value1, value2, margin, message): Tests that the two values, assumed to be floating point or promotable, are within the given margin of each other (in absolute value).
    • tr_assert_close(some_var, 1.0, 0.001, "Some_var was not within 0.001 of 1")
    • tr_assert_close(some_func(), 10.0, 1.0, "Some_func returned a value not near 10")
  • tr_assert_equal_str(string1, string2, message): Tests that the given strings (expected to be given as (char *)) are equal, using strcmp(). If unequal, the given message is returned as a failed assertion.
    • tr_assert_equal_str(some_func(), "Example String", "Some_func did not return 'Example String'")
  • tr_assert_not_equal_str(string1, string2, message): Tests that the given strings (expected to be given as (char *)) are not equal, using strcmp(). If equal, the given message is returned as a failed assertion.
    • tr_assert_not_equal_str(some_func(), "Example String", "Some_func returned 'Example String'")

Other Functions

Skipping assertions is possible using tr_skip() within a test. This immediately returns from the test, skipping any remaining assertions, and is useful for debugging purposes.

Calling main() is possible using tr_run_main(). You can call tr_run_main(...) with argc- and argv-like parameters, and transit.c's main function will be called accordingly. For this, you will likely want to use fixtures (described below).

Fixtures

Fixtures are external files upon which tests might depend. For example, you can make any number of input files and use them inside your tests. For cleanliness, fixture files are kept in transit/test/fixtures.