
Testing

Vanskarner edited this page Sep 7, 2023 · 7 revisions

This section is based on the book XUnit Test Patterns: Refactoring Test Code (2007) by Gerard Meszaros, presenting the content in a simplified way and focusing on the essential fragments to understand what is important in testing. For more detailed information on test-related patterns and other concepts, it is recommended to consult the book.


Every good architecture has tests that help ensure software quality, validate changes, maintain the integrity of the architecture, and facilitate refactoring. In other words, tests, like production code, are designed to form structures with high cohesion and low coupling.

Aspects

  • Tests, like production code, use SOLID principles and component principles.
  • They follow the Clean Architecture dependency rule.
  • Their function is to support development, not operation.
  • A component that is difficult to test is generally poorly designed.
  • All tests are the same from an architectural point of view.

Nomenclature conventions

Naming conventions ultimately depend on your development team; however, in general terms they can be summarized as follows:

  • When you need to add context: [Feature to test] + [Test scenario] + [Expected behavior]
  • When context is not needed: [Feature to test] + [Expected behavior]

Examples for Use Cases (when context is needed):

execute_withValidID_itemExists
testExecuteWithValidIDShouldBeTheItem

Examples for Use Cases (when context is not needed):

execute_numberItemsDeleted
testExecuteShouldGetNumberItemsDeleted

Basic concepts

  1. Fixtures: Also known as "test fixtures", this refers to everything needed in order to run the system under test (SUT).

  2. System Under Test (SUT): The system under test is whatever is being tested. This can include a class, a method, a component, or even the entire application, depending on the type of testing being performed.

Example

public class CheckItemUseCaseTest {

    @Test
    public void execute_withValidID_itemExists() throws Exception {
        //Start: Fixtures
        Item item = createSampleItem();
        ExecutorService executorService = TestExecutorServiceFactory.create();
        Repository fakeRepository = FakeRepositoryFactory.createRepository();
        fakeRepository.saveItem(item).await();        
        CheckItemUseCase useCase = new CheckItemUseCase(executorService,fakeRepository);
        //End: Fixtures

        //Start: SUT
        boolean exists = useCase.execute(item.id).get();
        //End: SUT

        assertTrue(exists);
        useCase.clear();
    }

    //...

}

Phases

Any test may go through the following phases:

  1. Setup: Preparation of the resources and initial conditions necessary for the test, such as object creation or environment configuration. Here the test fixtures are created and configured.
  2. Exercise: Execution of the system under test (SUT).
  3. Verify: Verification of the behavior or result of the execution.
  4. Teardown: Release of resources or restoration of the environment to its original state, ensuring that the test has no side effects that may affect other tests.
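As a minimal sketch of these four phases in plain Java (no test framework; the Counter class is a hypothetical SUT invented only for illustration):

```java
// Hypothetical SUT used only to illustrate the four phases of a test.
public class Main {
    public static void main(String[] args) {
        // 1. Setup: create and configure the fixture.
        Counter counter = new Counter();

        // 2. Exercise: run the system under test.
        counter.increment();
        counter.increment();

        // 3. Verify: check the observable result.
        if (counter.value() != 2) throw new AssertionError("expected 2");
        System.out.println("verified: " + counter.value());

        // 4. Teardown: restore the environment to its original state.
        counter.reset();
    }
}

class Counter {
    private int value;
    void increment() { value++; }
    int value() { return value; }
    void reset() { value = 0; }
}
```

In a JUnit test the Verify step would use an assertion such as assertEquals instead of a manual check.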

Using test methods only

Each test method creates or delegates the creation of its own test fixtures and is independent of the other tests. These test methods should include, as far as possible, only the fixtures necessary for their purpose.

Advantages:
  • Independent tests: Each test is self-contained, so tests cannot interfere with one another.

Disadvantages:
  • Repetitive code: Creating and configuring the fixtures in every test produces a noticeable amount of duplicated code.
  • Slow tests: Because each test creates and configures its own fixtures, running the whole test class is slower.

In the following example, the execution of the tests presents no problems, but there is room for improvement due to the obvious amount of repetitive code.

public class CheckItemUseCaseTest {

    @Test
    public void execute_withValidID_itemExists() throws Exception {
        //Setup        
        Item item = createSampleItem();
        ExecutorService executorService = TestExecutorServiceFactory.create();
        Repository fakeRepository = FakeRepositoryFactory.createRepository();
        fakeRepository.saveItem(item).await();
        CheckItemUseCase useCase = new CheckItemUseCase(executorService,fakeRepository);
        //Exercise
        boolean exists = useCase.execute(item.id).get();
        //Verify
        assertTrue(exists);
        //Teardown
        useCase.clear();
    }

    @Test
    public void execute_withInvalidID_itemNotExists() throws Exception {
        //Setup 
        ExecutorService executorService = TestExecutorServiceFactory.create();
        Repository fakeRepository = FakeRepositoryFactory.createRepository();
        CheckItemUseCase useCase = new CheckItemUseCase(executorService,fakeRepository);
        //Exercise
        boolean exists = useCase.execute(666).get();
        //Verify
        assertFalse(exists);
        //Teardown
        useCase.clear();
    }

    //...

}

Using implicit setup/teardown

It consists of grouping the setup and teardown phases of fixtures common to several tests, so that each test invokes these phases implicitly during its execution, allowing each individual test to focus on what is important.

Advantages:
  • Increased readability: With the necessary but irrelevant fixtures moved elsewhere, each test becomes easier to understand.
  • Elimination of code duplication: Much of the duplication produced by the setup and teardown phases disappears.

Disadvantages:
  • Possibility of obscure tests: Tests that are difficult to understand at first glance, which occurs when the specific (uncommon) fixtures of a test are not visible where that test is read.
  • Possibility of fragile tests: Occurs when tests do not really require identical fixtures; modifying a common fixture to fit a new test then makes several other tests fail.

In the following example, implicit setup and teardown are used correctly. This version uses JUnit 4, with the @Before annotation for the setup phase and the @After annotation for the teardown phase. Therefore, the execute_withValidID_itemExists method will first invoke setUp, then execute its contents, and finally call tearDown, a process repeated for each of the other tests.

public class CheckItemUseCaseTest {
    Repository fakeRepository;
    CheckItemUseCase useCase;

    @Before
    public void setUp() {
        //Setup
        ExecutorService executorService = TestExecutorServiceFactory.create();
        fakeRepository = FakeRepositoryFactory.createRepository();
        useCase = new CheckItemUseCase(executorService,fakeRepository);
    }

    @After
    public void tearDown() {
        //Teardown
        useCase.clear();
    }

    @Test
    public void execute_withValidID_itemExists() throws Exception {
        //Setup
        Item item = createSampleItem();
        fakeRepository.saveItem(item).await();
        //Exercise
        boolean exists = useCase.execute(item.id).get();
        //Verify
        assertTrue(exists);
    }

    @Test
    public void execute_withInvalidID_itemNotExists() throws Exception {
        //Exercise
        boolean exists = useCase.execute(666).get();
        //Verify
        assertFalse(exists);
    }

    //...

}

Using setup/teardown with shared suite fixtures

It consists of grouping the setup and teardown phases of essential shared fixtures that need to run only once. In this way, the setup phase is executed once before all tests, and the teardown phase runs only after the last test.

Advantages:
  • Faster testing: Configuring shared fixtures once for all tests, instead of repeatedly for each test, saves time and resources and speeds up test execution.
  • Maintains consistency: Ensures that all tests sharing fixtures run against a consistent state, and reduces configuration errors.

Disadvantages:
  • Increased initial complexity: Setting up shared fixtures may require more effort and planning, especially when tests have different fixture requirements.
  • Potential for unwanted side effects: If not handled properly, shared fixtures can cause unwanted side effects between tests, making errors difficult to detect and correct.
  • Possibility of fragile tests: One test may accidentally change the state of another test's fixtures, breaking test isolation and leading to failures.

In the following example, setup and teardown of shared fixtures are used correctly. This version uses JUnit 4, with the @BeforeClass annotation for the setup phase and the @AfterClass annotation for the teardown phase. Therefore, when running CustomRepositoryTest, the setupClass method is executed once, then the test methods are executed, and finally the tearDownClass method is called once.

public class CustomRepositoryTest {
    static TestSimulatedServer simulatedServer;
    static TestJsonParser jsonService;
    static CustomRepository repository;

    @BeforeClass
    public static void setupClass() throws IOException {
        //Setup
        simulatedServer = TestSimulatedServerFactory.create(CustomRepositoryTest.class);
        simulatedServer.start(1010);
        jsonService = TestJsonParserFactory.create(CustomRepositoryTest.class);
        ExecutorService executorService = TestExecutorServiceFactory.create();
        repository = createRepository(executorService,simulatedServer.url());
    }

    @AfterClass
    public static void tearDownClass() throws IOException {
        //Teardown
        simulatedServer.shutdown();
    }

    @Test
    public void getItems_whenHttpIsOK_returnList() throws Exception {
        //Setup
        int anyPage = 1;
        String fileName = "upcoming_list.json";
        simulatedServer.enqueueFrom(fileName, HttpURLConnection.HTTP_OK);
        Items expectedList = jsonService.from(fileName, Items.class);
        //Exercise
        List<Item> actualList = repository.getItems(anyPage).get().list;        
        //Verify
        assertEquals(expectedList.results.size(), actualList.size());
    }

    //Here the Verify Phase is performed in: "(expected = RemoteError.ServiceUnavailable.class)"
    @Test(expected = RemoteError.ServiceUnavailable.class)
    public void getItems_whenHttpUnavailable_throwServiceUnavailable() throws Exception {
        //Setup
        int anyPage = 1;
        simulatedServer.enqueueEmpty(HttpURLConnection.HTTP_UNAVAILABLE);
        //Exercise
        repository.getItems(anyPage).get();
    }

    //Here the Verify Phase is performed in: "(expected = RemoteError.Unauthorised.class)"
    @Test(expected = RemoteError.Unauthorised.class)
    public void getItems_whenHttpUnauthorized_throwUnauthorised() throws Exception {
        //Setup
        int anyPage = 1;
        simulatedServer.enqueueEmpty(HttpURLConnection.HTTP_UNAUTHORIZED);
        //Exercise
        repository.getItems(anyPage).get();
    }

    //...

}

Types of software testing

Although all tests are the same from an architectural point of view, different types can be distinguished depending on the context:

  • Unit: Individual verification of the methods or functions of the classes, components or modules used by the software.
  • Integration: Joint verification of the different modules or services used by the software.
  • Functional: Verification of business requirements, checking only the result of an action and ignoring the intermediate states needed to complete that action.
  • End-to-end: Verification that the different user flows work as intended, done by replicating the user's behavior with the system.
  • Acceptance: Verification that business requirements are satisfied by running the entire application during testing; these tests simulate user behavior and may even measure performance.
  • Performance: Verification of system behavior under a given workload to measure its responsiveness.
  • Smoke: Verification of the main functions of the system, through simple and quick tests, to ensure it works as intended.

Types of test doubles

A test double is a version of a class designed specifically for testing. It is intended to replace the real version of a class in testing.

  • Fake: Has a working implementation that satisfies only the needs of the test.
  • Mock: Emulates behavior and may pass or fail depending on whether its methods were called correctly.
  • Stub: Contains no logic and only returns what it has been programmed to return.
  • Dummy: Only needs to be passed, not used, usually as a parameter.
  • Spy: Records additional information, such as the number of times a specific method was called.
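A minimal sketch of three of these doubles (Fake, Stub, and Spy) against a hypothetical ItemRepository port; the interface and all class names here are invented purely for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        // Fake: behaves like the real thing, within test limits.
        ItemRepository fake = new FakeItemRepository();
        fake.save(1, "book");
        System.out.println(fake.find(1));

        // Stub: ignores input, returns the canned answer.
        ItemRepository stub = new StubItemRepository();
        System.out.println(stub.find(999));

        // Spy: additionally records how it was used.
        SpyItemRepository spy = new SpyItemRepository();
        spy.save(2, "pen");
        spy.find(2);
        spy.find(2);
        System.out.println(spy.findCalls);
    }
}

// Hypothetical port that the SUT would depend on.
interface ItemRepository {
    void save(int id, String name);
    String find(int id);
}

// Fake: a working in-memory implementation, good enough for tests.
class FakeItemRepository implements ItemRepository {
    private final Map<Integer, String> store = new HashMap<>();
    public void save(int id, String name) { store.put(id, name); }
    public String find(int id) { return store.get(id); }
}

// Stub: no logic, always returns the pre-programmed answer.
class StubItemRepository implements ItemRepository {
    public void save(int id, String name) { /* ignored */ }
    public String find(int id) { return "canned-item"; }
}

// Spy: records the number of times find() was called.
class SpyItemRepository extends FakeItemRepository {
    int findCalls = 0;
    @Override public String find(int id) { findCalls++; return super.find(id); }
}
```

A Mock would go one step further than the Spy: it would fail the test itself if find() were not called the expected number of times.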

Test Driven Development (TDD)

When talking about software testing, the concept of TDD inevitably comes to mind. TDD is a school of programming thought based on writing the tests first and then the code that satisfies them, so it is common for a test to fail the first time it runs, because the code needed to pass it has not yet been written.

Representation

[Image: representation of the TDD cycle]
This continuous and repetitive development ensures that the code is always tested and maintains quality over time.
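The cycle can be sketched without a test framework; the Calculator class and the test name below are hypothetical, showing the state of the code after the first red-green iteration:

```java
public class Main {
    // Step 1 (red): this test is written first; with no Calculator.add
    // yet, it does not even compile, which counts as a failing test.
    static void execute_add_returnsSum() {
        Calculator calculator = new Calculator();
        if (calculator.add(2, 3) != 5) throw new AssertionError("red: add is wrong");
        System.out.println("green: execute_add_returnsSum passed");
    }

    public static void main(String[] args) {
        execute_add_returnsSum();
    }
}

class Calculator {
    // Step 2 (green): the minimal implementation, written only after
    // the test above demanded it. Step 3 would be to refactor while
    // keeping the test green.
    int add(int a, int b) { return a + b; }
}
```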