
Why and how to write unit tests

lmarceau edited this page Nov 30, 2023 · 11 revisions

Table of contents

  1. Why unit test?
  2. What is a unit test?
  3. How to write unit tests
    1. Best practices
    2. Lifecycle
    3. Managing tests dependencies
    4. Mocking
    5. Subclass and overrides
  4. Codebase red flags

Why unit test?

We can use all kinds of feedback loops to detect problems in production code. For example, we can keep an eye on crash reports, bug issues from QA, and customer complaints. But that's the longest loop: after making an incorrect change, it takes weeks or months to get that feedback. The easiest and fastest feedback comes from automated tests that run on every change in our CI pipeline. This is what unit tests are for. In short, unit tests:

1. Catch bugs early.

By running unit tests frequently during the development process, you can catch bugs and issues early on in the development cycle, when they are easier and cheaper to fix.

2. Make changes with confidence.

Unit tests provide a safety net when refactoring or modifying existing code. If you make a change that causes a test to fail, you know immediately that something went wrong, and you can fix it before it gets further down the line (into QA hands for example, or worst case scenario, in Beta or Release).

3. Improve code quality

Writing unit tests forces you to think more carefully about the design and functionality of your code, often reducing complexity and leading to better code quality overall. Additionally, a comprehensive suite of tests helps ensure that your code is robust and reliable.

4. Develop faster

While it may seem like writing tests takes extra time, in the long run it can actually speed up development. By catching bugs early and preventing regressions, you avoid wasting time and resources on fixing problems that could have been prevented.

5. Collaborate better

Unit tests provide a shared understanding of how a piece of code is intended to work, while also documenting it. A suite of tests that everyone can run and rely on facilitates collaboration and makes it easier for team members to work together.

What is a unit test?

Unit tests are a subset of automated tests where the feedback is quick, consistent, and unambiguous.

  • Quick: A single unit test should complete in milliseconds, enabling us to have thousands of such tests.
  • Consistent: Given the same code, a unit test should report the same results. The order of execution shouldn't matter. Global state shouldn't matter.
  • Unambiguous: A failing unit test should clearly report the problem detected.

How to write unit tests

The goal of a test suite is to test the functionality of a class, ensuring that modifications to that part of the code will not introduce a regression. The class under test can be called our subject, also known as the system under test; we prefer subject in Firefox iOS. In our project, we use the provided XCTestCase to create our test suites. If you're unfamiliar with unit testing in Swift, here's a good article on how to get started, as the rest of this page assumes you know the basics.

Best practices

Writing effective unit tests is crucial for ensuring the reliability and maintainability of our codebase. Follow these guidelines to create robust and valuable unit tests:

1. Test All Code Paths

It’s important to test both expected behavior and potential edge cases or error scenarios. Ensure thorough coverage to catch any issues that might arise in different scenarios.

2. Given, When, Then Principle

Organize each test case using the Given, When, Then (or Arrange, Act, Assert) principle. Add blank lines between each phase to clarify the function of each line of code in your tests.
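
As a minimal sketch of this structure (Counter and CounterTests are hypothetical names used only for illustration, not code from our project), each phase gets its own block separated by a blank line:

```swift
import XCTest

// Hypothetical type used only to illustrate the test structure.
final class Counter {
    private(set) var value = 0
    func increment() { value += 1 }
}

final class CounterTests: XCTestCase {
    func testIncrementWhenCalledOnceThenValueIsOne() {
        // Given: a freshly created subject
        let subject = Counter()

        // When: the behavior under test is exercised
        subject.increment()

        // Then: the outcome is asserted
        XCTAssertEqual(subject.value, 1)
    }
}
```

Note how the blank lines make each phase visible at a glance, and how the test name itself describes the scenario and the expected outcome.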

3. Using mocks

Use mocking to create controlled environments for your tests and to test only the specific behavior you're interested in. More on how to do so in the Mocking section below.

4. Use Descriptive Naming

Clearly describe what is being tested in the name of your test functions. Apply the Given/When/Then principle in function names to enhance readability and communicate to others what this test is actually about.

5. One Use Case Per Test

Keep tests small and focused by addressing one use case per unit test. This approach makes tests easier to debug and maintain in the long term.

6. Aim for Maximum Code Coverage

Strive for comprehensive test coverage within the constraints of your project. While 100% coverage may not always be feasible, aim for the highest coverage given your specific constraints.

7. Avoid Conditional Branches in Test Code

Keep test code simple by avoiding conditional branches. Choose assertions that express the conditions you need to test.
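
As a sketch of what this looks like in practice (findUser and LookupTests are hypothetical examples, not real project code), assertions such as XCTUnwrap can replace an `if let` branch in test code:

```swift
import XCTest

// Hypothetical lookup function used only for illustration.
func findUser(id: Int) -> String? {
    return id == 1 ? "Alice" : nil
}

final class LookupTests: XCTestCase {
    func testFindUserWhenKnownIdThenNameReturned() throws {
        // Avoid: if let name = findUser(id: 1) { XCTAssertEqual(name, "Alice") }
        // An `if let` branch silently passes when the value is nil.

        // Prefer: XCTUnwrap fails the test immediately when the value is nil.
        let name = try XCTUnwrap(findUser(id: 1))
        XCTAssertEqual(name, "Alice")
    }

    func testFindUserWhenUnknownIdThenNilReturned() {
        XCTAssertNil(findUser(id: 42))
    }
}
```

With no branches, every line of the test always runs, so a passing test means every assertion was actually evaluated.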

8. Use the Right Assertions

Choose appropriate assertions to check if the test is passing or failing. Examples include XCTAssertFalse, XCTAssertTrue, XCTAssertNil, XCTAssertNotNil, XCTAssertEqual, etc.

9. Lifecycle Management

Understand XCTest's test lifecycle. Utilize setUp and tearDown methods to create and nullify any stored properties in XCTestCase subclasses. More in the Lifecycle section below.

10. Quick, Consistent, Unambiguous Feedback

Ensure that each unit test provides quick, consistent, and unambiguous feedback: quick feedback enables running thousands of tests, consistent tests report the same results given the same code, and a clear failure message points directly at the problem detected.

11. Continuous Learning

Stay updated on unit testing best practices. Embrace a mindset of continuous improvement, adapting your testing strategies as needed.

Lifecycle

There's more to tests than assertions. When does XCTest create and run the tests? To avoid flaky tests, we want to run each test in a virtual clean room. There should be no leftovers from previous tests or from manual runs.

Each test method runs in a separate instance of its XCTestCase subclass. These instances live inside a collection of all tests, which the test runner iterates over. So all test cases exist before test execution and are never deallocated until the test runner terminates. This means it's important to use the setUp and tearDown methods to respectively create and nullify any stored properties in our XCTestCase subclasses. The preferred approach is an implicitly unwrapped optional:

    private var subject: SubjectClass!

    override func setUp() {
        super.setUp()
        subject = SubjectClass()
    }

    override func tearDown() {
        // Clean up our own state first, then let XCTest finish tearing down.
        subject = nil
        super.tearDown()
    }

Managing tests dependencies

When testing is difficult, this reveals flaws in the architectural design of the code. By making changes to enable testing, you'll be shaping the code into a cleaner design. Design decisions that were once hidden and implicit will become visible and explicit.

But what are difficult cases to test? It includes cases that are:

  • Slow: Code that executes in response to external triggers, i.e. there's no way to trigger the code execution immediately.
  • Non-isolated: Dependencies that break the rule of isolation, such as global variables and persistent storage. We can think of singletons or static properties as well.
  • Non-repeatable: Dependencies that yield a different result when called. Like current time or date or random numbers.
  • Side effects: Dependencies that cause side effects outside the invoked type, such as playing audio or video.

Other cases exist, but those are good to keep in mind when planning how you'll write code and how it can be tested.

Once we've identified dependencies that make testing difficult, what should we do with them? We need to find ways to isolate them behind boundaries. In other words, the subject under test shouldn't care about the implementation details of the dependencies it uses. Having them isolated enables us to replace them with substitutes during testing, so we keep the tests quick, consistent, and unambiguous. We can implement boundaries using protocols instead of relying on concrete objects in our production code, which then enables us to use a technique called mocking in our test cases.
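
To sketch this idea for a non-repeatable dependency (the current date), we can hide the call to Date() behind a protocol boundary. DateProvider, GreetingService, and the other names below are hypothetical, chosen only for illustration:

```swift
import Foundation

// Boundary: production code depends on this protocol, not on Date() directly.
protocol DateProvider {
    func now() -> Date
}

// Real implementation used in production.
struct SystemDateProvider: DateProvider {
    func now() -> Date { Date() }
}

// The subject receives its dependency through injection.
struct GreetingService {
    let dateProvider: DateProvider

    func greeting() -> String {
        let hour = Calendar.current.component(.hour, from: dateProvider.now())
        return hour < 12 ? "Good morning" : "Good afternoon"
    }
}

// Test substitute: a fixed date keeps results repeatable.
struct FixedDateProvider: DateProvider {
    let fixed: Date
    func now() -> Date { fixed }
}
```

In a test, injecting FixedDateProvider with a known date lets us assert on both the morning and afternoon branches, which would be impossible if GreetingService called Date() itself.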

Mocking

To isolate the behavior of the object you want to test, you replace the other objects with mocks that simulate the behavior of the real ones. This is useful if the real objects are impractical to incorporate into the unit test (see the previous section on difficult cases to test). In other words, mocking is creating objects that simulate the behavior of real objects.

Here's an example. Let's say our subject to test is called SnowMachine, and it depends on Weather to create snow.

class SnowMachine {
    // Create snow from a certain water quantity; depends on the weather to crystallize snow
    func createSnow(with waterQuantity: Int) -> Int {
        guard Weather().getTemperature() <= 0 else { return 0 }

        return waterQuantity / 2
    }
}

class Weather {
    private var temperature: Int = 0 // in Celsius

    func getTemperature() -> Int {
        // Imagine here a complicated algorithm we don't control to determine the temperature
        return temperature
    }
}

Now our SnowMachine depends on the concrete Weather type. If we want to unit test this class, we cannot control the temperature input and therefore wouldn't be able to test the different edge cases of snow making. A better approach is using dependency injection and a protocol such as:

protocol WeatherProtocol {
    func getTemperature() -> Int
}

class SnowMachine {
    private var weather: WeatherProtocol

    init(weather: WeatherProtocol) {
        self.weather = weather
    }

    // Create snow from a certain water quantity; depends on the weather to crystallize snow
    func createSnow(with waterQuantity: Int) -> Int {
        guard weather.getTemperature() <= 0 else { return 0 }

        return waterQuantity / 2
    }
}

This way we can create a mock of the Weather class, which we'll be able to inject into our subject to test the different cases and control the temperature. Example:

final class SnowMachineTests: XCTestCase {
    private var weather: MockWeather!

    override func setUp() {
        super.setUp()
        weather = MockWeather()
    }

    override func tearDown() {
        weather = nil
        super.tearDown()
    }

    func testSnowCreationWhenColdTemperatureThenHalfSnowCreated() {
        weather.mockedTemperature = -10
        let subject = SnowMachine(weather: weather)

        let result = subject.createSnow(with: 10)

        let expectedResult = 5
        XCTAssertEqual(result, expectedResult, "Quantity of snow created is half of the water input")
    }

    func testSnowCreationWhenWarmTemperatureThenNoSnowCreated() {
        weather.mockedTemperature = 10
        let subject = SnowMachine(weather: weather)

        let result = subject.createSnow(with: 10)

        let expectedResult = 0
        XCTAssertEqual(result, expectedResult, "No snow was created as the temperature is too warm")
    }

    func testSnowCreatedWhenZeroTemperatureThenHalfSnowCreated() {
        weather.mockedTemperature = 0
        let subject = SnowMachine(weather: weather)

        let result = subject.createSnow(with: 10)

        let expectedResult = 5
        XCTAssertEqual(result, expectedResult, "Quantity of snow created is half of the water input")
    }
}

final class MockWeather: WeatherProtocol {
    var mockedTemperature = 0
    func getTemperature() -> Int {
        return mockedTemperature
    }
}

Subclass and overrides

Note that with legacy code, it can be necessary to use techniques such as subclassing and overriding to be able to write unit tests. The idea is to create a subclass of production code that lives only in test code, giving us a way to override methods that are problematic for testing. Subclassing and overriding shouldn't be used as testing strategies in new code; if you feel forced to use such a strategy to test code, please discuss it with the team so we can find a solution together.
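
As a sketch of the technique (LegacyDownloader is a hypothetical legacy class, not real project code):

```swift
// Legacy production class: fetchData() would hit the network,
// and there is no injection point without a larger refactor.
class LegacyDownloader {
    func fetchData() -> String {
        // Imagine a real network call here.
        return "live data"
    }

    func process() -> String {
        return "processed: \(fetchData())"
    }
}

// Test-only subclass that overrides the problematic method.
class TestableDownloader: LegacyDownloader {
    override func fetchData() -> String {
        return "stubbed data"
    }
}
```

The test-only TestableDownloader lets us exercise process() without hitting the network, at the cost of testing a subclass rather than the real object, which is why this technique is reserved for legacy code.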

Codebase red flags

You should always be on the lookout for the following red flags to help maintain a modular and scalable codebase. Unit tests can be a great way to identify those:

1. Isolation and Independence

Complexity in mocking dependencies may indicate tight coupling.

2. Dependency Injection

Difficulty testing due to many injected dependencies suggests high coupling.

3. Refactoring Challenges

Extensive test modifications for a single code change imply tight coupling.

4. Test Failure Patterns

Consistent failures in unrelated tests indicate hidden coupling.

5. Code Coverage Analysis

Difficulty increasing coverage may signal poorly-contained dependencies.
