Test charter

jawi edited this page Mar 13, 2011 · 5 revisions

The purpose of this test charter is to provide a guideline for testing new and upcoming releases of the OLS client. The intended audience of this document is (beta) testers of the OLS client.

This document describes some basic use cases the OLS client needs to fulfil and shows the areas of interest that might need additional testing effort. Of course, if any other test cases are needed, they can be performed at will.


In the following sections, some of the basic testing areas are covered. Additional test cases that might be of interest can be performed at will and will always lead to better software. Any additions to this charter are welcome and can be discussed on the forum.

Platform tests

These tests are intended to verify whether the OLS client works on the target platform. Current supported target platforms are:

  • Linux 32/64-bit (any flavour);
  • Microsoft Windows 32/64-bit (XP, Vista, W7);
  • Mac OSX 32/64-bit (10.5+);
  • Sun Solaris 32-bit (no 64-bit!).

The precondition for these tests is that a valid Java Runtime Environment (JRE) is installed; currently, version 1.6.x is supported.

The following points of interest are to be considered:

  • Does the installation package extract without errors on the target platform?
  • When possible, are the correct icons shown for the program (e.g., on Mac OS)?
  • Does the OLS client start with the provided scripts (if needed) directly without the need to change any (file and/or directory) attributes?
  • Does the console (if available) show any exceptions and/or errors?
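The JRE precondition above can also be checked programmatically. The sketch below is a minimal, hypothetical helper (not part of the OLS client) that recognises the Sun-era version-string format (e.g., "1.6.0_26") this charter targets:

```java
// Hypothetical helper for checking the JRE-version precondition (1.6.x).
// Not part of the OLS client; shown only to illustrate the check.
public class JreCheck {
    /** Returns true when the given "java.version" string denotes a 1.6.x JRE. */
    static boolean isSupported(String version) {
        return version != null && version.startsWith("1.6.");
    }

    public static void main(String[] args) {
        // The standard "java.version" system property holds the JRE version.
        String version = System.getProperty("java.version");
        System.out.println("Detected JRE: " + version
                + (isSupported(version) ? " (supported)" : " (unsupported)"));
    }
}
```

On the machine under test, running this class prints the detected version; any version string not starting with "1.6." fails the precondition.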

Acquisition tests

These tests are to verify whether the "core" of the OLS client, the acquisition of signals, works correctly. These tests should preferably be performed with actual hardware devices! The following points of interest are to be considered:

  • Does the capture dialog show the correct port(s), or can the desired port be entered manually?
  • Does a basic capture with the desired port work? Are the results of this capture as expected?
  • Are other capturing settings taken into consideration? Are the results of these captures as expected?
  • Do the device profiles work for the desired device (provided the device hardware is present)?
  • Does the acquisition complete within the expected time bounds? Given the capture speed and sample count, the acquisition should complete within (sample count / capture speed). Does this hold both with and without signal sources attached to the channels?
  • Are the various channel groups taken into consideration during the acquisition? For example, when capturing only channel group 1, does only that group appear in the capture results?
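The timing rule in the list above amounts to simple arithmetic. As a sketch with hypothetical numbers (24576 samples at 1 MHz, both made up for illustration), the capture itself should take roughly 24.6 ms, plus some fixed overhead for triggering and data transfer:

```java
// Sketch of the expected-duration rule: capture time = sample count / sample rate.
// The sample values below are hypothetical, not OLS hardware defaults.
public class AcquisitionBound {
    /** Expected capture duration in milliseconds. */
    static double expectedMillis(long sampleCount, long sampleRateHz) {
        return 1000.0 * sampleCount / sampleRateHz;
    }

    public static void main(String[] args) {
        long samples = 24576;    // hypothetical capture depth
        long rateHz = 1000000;   // hypothetical sample rate: 1 MHz
        System.out.printf("Expected capture time: %.2f ms%n",
                expectedMillis(samples, rateHz));
    }
}
```

An acquisition that takes substantially longer than this bound (allowing for trigger wait time) would be a test failure worth reporting.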

Tooling tests

There are several plugins that provide additional tooling, such as protocol decoders, exporting functionality and so on. Most of these plugins are also covered by automatic tests (JUnit), providing some insight into their expected behaviour. The following points of interest are to be considered:

  • Does the protocol decoder/exporter/... start properly, showing its configuration dialog?
  • Are its settings retained after closing any of its dialogs?
  • Are the results of the protocol decoder/exporter/... as expected? Also with "real" sample data?
  • If supported, does the decoder tool's export functionality work as expected?
  • If supported, do the decoder tool's annotations appear in the signal dialog?

User interface/usability tests

These tests are to verify whether the UI and (basic) usability aspects work for the OLS client. These tests are very platform specific and involve the following areas of interest:

  • Do the basic shortcut keys work for the target platform (CMD+W/CTRL+W/Escape to cancel/close a dialog, CMD+Q/CTRL+Q to quit the client, and so on)?
  • Do the shortcut keys/accelerator keys as shown in the menu bars work?
  • Do other platform specific usability features that are available work as expected?
  • Are user settings retained after closing the client?

Other tests

Most likely, a new release of the OLS client also provides bug fixes for issues reported on GitHub. The convention is to mark all bugs/features/improvements that are fixed and/or delivered in a new release as "done". When possible, all items tagged "done" need to be tested.


Reporting test results

For each new release, a subtopic will be created on the Dangerous Prototypes forums. This subtopic will be open only during the testing period and should be used only to communicate successful or unsuccessful tests/test cases. The topic starter post will be used to inform testers where the upcoming release is to be found and, when possible, the progress of testing (including a potential release date).

Note: a test is considered successful if it succeeds without any errors/unexpected behaviour.

When a tester has performed several tests (either successfully or unsuccessfully), the results should be reported in the subtopic. At least the following information should be present:

  • The tested target platform(s);
  • The outcome of the test(s) (and if unsuccessful, the expected outcome);
  • The results of any tested GitHub issues, including the issue-number.