Multi-objective Search


Modbat now supports a search-based approach to test case generation that relies on reinforcement learning. The approach treats test case generation as an exploration-versus-exploitation dilemma, which we address with a particular multi-armed bandit strategy that uses multiple rewards.
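The selection rule itself is described in the reference below; as one plausible instantiation, a UCB1-style score uses a tradeoff constant to balance exploiting high-reward transitions against exploring rarely taken ones. The following Scala sketch uses hypothetical names and illustrates the idea only; it is not Modbat's actual implementation:

    // Illustrative UCB1-style bandit: each enabled transition is an "arm"
    // with a pull count and a running mean reward. `tradeoff` plays the
    // role of the --bandit-tradeoff flag. Not Modbat's actual internals.
    object BanditSketch {
      case class Arm(pulls: Int, meanReward: Double)

      def ucbScore(arm: Arm, totalPulls: Int, tradeoff: Double): Double =
        if (arm.pulls == 0) Double.PositiveInfinity // try untried transitions first
        else arm.meanReward +
          tradeoff * math.sqrt(math.log(totalPulls.toDouble) / arm.pulls)

      // Assumes at least one enabled transition (arm).
      def chooseArm(arms: Seq[Arm], tradeoff: Double): Int = {
        val total = math.max(arms.map(_.pulls).sum, 1)
        arms.indices.maxBy(i => ucbScore(arms(i), total, tradeoff))
      }
    }

In this formulation, a larger tradeoff value biases the choice toward less-explored transitions, which is why --bandit-tradeoff is the first parameter listed below.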

To use this search-based approach, the user needs to provide values for the following nine input parameters via flag options:

  1. --bandit-tradeoff sets the tradeoff constant of the bandit strategy (2 can be used as a default value).
  2. --backtrack-t-reward sets the reward for a backtracked transition.
  3. --self-t-reward sets the reward for a self-transition.
  4. --good-t-reward sets the reward for a good (successful) transition.
  5. --fail-t-reward sets the reward for a failed transition.
  6. --precond-pass-reward sets the reward for a passed precondition.
  7. --precond-fail-reward sets the reward for a failed precondition.
  8. --assert-pass-reward sets the reward for a passed assertion.
  9. --assert-fail-reward sets the reward for a failed assertion.

Please note that all reward values must be in [0, 1]; the sketch after this list illustrates how they might feed into the bandit's estimates.

To activate the search-based approach, the search mode must also be set with the flag option --search=heur. To use random test case generation instead (the default setting for Modbat), set --search=random; the reward flag options are then not needed.
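To make the role of these rewards concrete, the following Scala sketch (hypothetical types and names, not Modbat's internals) shows how a transition's outcome could be mapped to its configured reward and folded into a running mean; preconditions and assertions would be handled analogously with their own reward flags:

    // Hypothetical mapping from a transition's outcome to its configured
    // reward (all values in [0, 1]), plus an incremental mean update.
    object RewardSketch {
      sealed trait Outcome
      case object Backtracked extends Outcome // --backtrack-t-reward
      case object SelfLoop    extends Outcome // --self-t-reward
      case object Good        extends Outcome // --good-t-reward
      case object Failed      extends Outcome // --fail-t-reward

      case class Rewards(backtrack: Double, self: Double,
                         good: Double, fail: Double)

      def reward(outcome: Outcome, r: Rewards): Double = outcome match {
        case Backtracked => r.backtrack
        case SelfLoop    => r.self
        case Good        => r.good
        case Failed      => r.fail
      }

      // Running mean after observing one more reward, where n is the
      // number of rewards observed so far for this transition.
      def updateMean(oldMean: Double, n: Int, observed: Double): Double =
        oldMean + (observed - oldMean) / (n + 1)
    }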

Example:

    modbat/build$ scala modbat.jar --classpath=modbat-examples.jar \
                                   -n=1 -s=2455cfedeadbeef \
                                   --no-redirect-out \
                                   --dotify-path-coverage \
                                   --search=heur \
                                   --bandit-tradeoff=2 \
                                   --backtrack-t-reward=0.8 \
                                   --self-t-reward=0.4 \
                                   --good-t-reward=0.6 \
                                   --fail-t-reward=0.5 \
                                   --precond-pass-reward=0.7 \
                                   --precond-fail-reward=0.7 \
                                   --assert-pass-reward=0.7 \
                                   --assert-fail-reward=0.7 \
                                   modbat.examples.SimpleModel

These reward parameters can also be tuned to find optimized settings by using a multi-objective optimization approach. For the details, please check the reference below.
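As a rough illustration of what multi-objective optimization means here: each candidate reward setting is scored on several objectives at once (for example, coverage achieved and faults found), and only the non-dominated (Pareto-optimal) settings are kept. A minimal Scala sketch under these assumptions:

    // Hypothetical sketch: keep the Pareto-optimal candidates among reward
    // settings scored on several objectives, all to be maximized.
    object ParetoSketch {
      // A candidate pairs a reward setting (e.g. Map("good-t-reward" -> 0.6))
      // with its objective vector (e.g. Seq(coverage, faultsFound)).
      type Scored = (Map[String, Double], Seq[Double])

      // a dominates b: at least as good on every objective, better on one.
      def dominates(a: Seq[Double], b: Seq[Double]): Boolean =
        a.zip(b).forall { case (x, y) => x >= y } &&
          a.zip(b).exists { case (x, y) => x > y }

      def paretoFront(candidates: Seq[Scored]): Seq[Scored] =
        candidates.filterNot { case (_, score) =>
          candidates.exists { case (_, other) => dominates(other, score) }
        }
    }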

Reference

  1. Multi-objective Search for Model-based Testing
