In white-box testing, the internal details of the system under test (SUT) are known. One needs access to the source code (or bytecode, for JVM languages) of the SUT. This is usually not a problem when testing is done by the developers of the SUT themselves.
A white-box test approach can aim at maximizing the code coverage of the SUT. This is helpful in at least two ways:
- Fault Detection: the higher the code coverage, the more likely it is to find a bug in the SUT. A bug can only manifest itself if the faulty statements are executed at least once.
- Regression Testing: even if no fault is found, the generated tests can still be useful to check later on for regression faults. And, as with fault detection, the higher the code coverage, the better.
To measure code coverage, the SUT needs to be instrumented by inserting probes into it. In JVM languages, this can be done automatically by intercepting the class loaders, and then using libraries like ASM to manipulate the bytecode of the SUT classes at runtime.
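Conceptually, a probe is just a call, inserted at each code location of interest, that records the location as having been reached. The following is a minimal, stdlib-only sketch of the idea (the names are illustrative, not EvoMaster's actual API; EvoMaster inserts the equivalent calls at the bytecode level via ASM rather than in source code):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of coverage probes. Each probe records that a
// given code location was executed at least once.
public class CoverageProbes {

    private static final Set<String> covered = ConcurrentHashMap.newKeySet();

    public static void probe(String location) {
        covered.add(location);
    }

    public static int coveredCount() {
        return covered.size();
    }

    // What an instrumented method looks like after probes have been
    // inserted at the entry point and inside each branch.
    static int abs(int x) {
        probe("abs:entry");
        if (x < 0) {
            probe("abs:then");
            return -x;
        }
        probe("abs:else");
        return x;
    }
}
```

Running the instrumented method with different inputs makes the set of covered locations grow, which is exactly the signal a coverage-driven test generator optimizes for.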
But measuring code coverage alone is not enough to generate high-coverage test cases. Consider this trivial code snippet:
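As an illustrative stand-in (the constant 42 here is chosen arbitrarily), consider a branch guarded by an exact equality check on a 32-bit integer input:

```java
public class Trivial {

    // Hypothetical example: the "then" branch is taken for exactly one
    // value out of all 2^32 possible 32-bit integers, so random inputs
    // almost never cover it.
    public static boolean foo(int x) {
        if (x == 42) {
            return true;  // the hard-to-reach "then" branch
        }
        return false;
    }
}
```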
Using a black-box approach in which inputs are randomly generated, a test input would have only 1 chance out of 2 to the power of 32 (i.e., around 4 billion possibilities in a 32-bit number) to cover the then branch of that if statement. But a static/dynamic analysis of the code would simply point out the exact value to use as input.
This is a trivial example, but predicates in the source code can be arbitrarily complex, for example involving regular expressions and the results of accesses to SQL databases. EvoMaster uses several different heuristics and code-analysis techniques to maximize code coverage using an evolutionary algorithm. In the academic literature, this is referred to as Search-Based Software Testing. The interested reader is encouraged to look at our academic papers to learn more about these technical details.
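One classic heuristic in search-based testing is the branch distance: instead of a flat covered/not-covered signal, each predicate yields a numeric score telling the search how close an input is to satisfying it. A sketch for an equality predicate (a conceptual illustration of the general technique, not EvoMaster's internal code):

```java
public class BranchDistanceExample {

    // Branch distance for the predicate (x == k): 0 when the branch is
    // covered, otherwise a value that shrinks as x gets closer to k.
    // This gradient is what lets an evolutionary search converge on
    // inputs that random sampling would almost never find.
    static double distanceEq(int x, int k) {
        return Math.abs((double) x - (double) k);
    }
}
```

An input with a smaller distance is "fitter", so the search keeps and mutates it, step by step closing in on the value that covers the branch.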
These static and dynamic code analyses do require accessing the source code, and instrumenting it before the SUT is started. But this can be done at the same time as the SUT is instrumented to measure its code coverage. All the instrumentations and code analyses are automatically performed by EvoMaster with a library we provide (e.g., on Maven Central for JVM languages).
A user needs to provide a script/class (called a driver) in which the SUT is started, with the instrumentation provided by our library. This must be done manually, as each framework (e.g., Spring and DropWizard) has its own way to start and package an application. Since a user already has to provide a driver to start the SUT, adding the options to stop and reset the SUT should not be much extra work. Once this is done, the test cases automatically generated by EvoMaster become self-contained, as they can use such a driver. For example, they can start the SUT before the tests, reset its state at each test execution to make the tests independent, and finally stop the SUT after all tests are completed.
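The driver contract can be sketched as follows. This is a stdlib-only illustration with hypothetical names, not the actual interface of EvoMaster's client library; it only shows the lifecycle that generated tests rely on:

```java
// Hypothetical driver contract: generated tests start the SUT once,
// reset its state before each test to keep tests independent, and
// stop it after all tests are done.
interface SutDriver {
    void startSut();
    void resetStateOfSut();
    void stopSut();
    boolean isSutRunning();
}

public class InMemoryDriver implements SutDriver {

    private boolean running = false;
    private int stateChanges = 0;  // stand-in for SUT state (e.g., DB rows)

    @Override public void startSut() { running = true; }
    @Override public void resetStateOfSut() { stateChanges = 0; }
    @Override public void stopSut() { running = false; }
    @Override public boolean isSutRunning() { return running; }

    // Simulates a test execution leaving some state behind in the SUT.
    public void mutateState() { stateChanges++; }
    public int getStateChanges() { return stateChanges; }
}
```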
We explain how to write such a script/class in this other document.
To check one out before spending time writing your own, you can look at the EMB repository and search for the driver classes defined there for each SUT. Start one of those directly from your IDE. This will start the controller server (binding by default on port 40100) for one of the SUTs in EMB. The controller server is responsible for handling the start/reset/stop of the SUT.
Once it is up and running, you can generate test cases for it by running EvoMaster from the command line with:
```
java -jar evomaster.jar
```
By default, EvoMaster will try to connect to a controller server listening on port 40100. Its first step will be to tell it to start the SUT with all the required instrumentation. Then, it will start an evolutionary algorithm to evolve test cases, measuring their fitness when executed against the SUT. To see which options to use when running EvoMaster (e.g., for how long to run the evolution), see the main options.
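For example, to cap the search at a fixed time budget, something like the following can be used (the `--maxTime` option is assumed here; check the options documentation for the exact flags of your EvoMaster version):

```shell
# Run the search for 60 seconds against the controller on the default port.
# --maxTime is assumed; verify against the current options documentation.
java -jar evomaster.jar --maxTime 60s
```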