
Quick Start

We only focus on "single machine tests" in the Quick Start guide. That is, tests which can run entirely on your local device. Some more advanced test cases require multiple machines networked together, e.g. a mobile device, a router device, a port-spanning switch and a desktop device for collecting spanned traffic.

See TODO for more details on multi-machine tests.

The test suite must always be run from a desktop device. We refer to this device as the "test orchestrator". For single machine tests, the test orchestrator is actually the device under test.

Decide whether you want to work with a VM

VMs provide more flexibility with regards to network configurations. It's much easier to configure multiple adapters, capture traffic from the guest and firewall the guest's network. For the purpose of a "quick start" you should only need a VM if:

  • you can't provide a local DNS server to your device:
    • some test cases require a local DNS server,
    • you might not have one if, for example, your network specifies public DNS servers via DHCP,
    • using a NAT'd network adapter for the VM guest will give you a local DNS server.
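If you're unsure whether your current resolver counts as local, a quick check is whether its address is private or loopback. This is a minimal sketch (the function name `is_local_dns` is illustrative, not part of the suite):

```python
import ipaddress

def is_local_dns(server_ip):
    """Return True if the DNS server address is private (RFC 1918) or loopback.

    Public resolvers handed out via DHCP (e.g. 8.8.8.8) fail this check.
    """
    addr = ipaddress.ip_address(server_ip)
    return addr.is_private or addr.is_loopback

# A NAT'd VM adapter typically points the guest at a private gateway resolver:
print(is_local_dns("10.0.2.3"))  # True
print(is_local_dns("8.8.8.8"))   # False
```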

Setup Your Machine

First follow Setting Up Test Machines for your machine of choice. We recommend using macOS for a quick start.

Run Some Tests

To run a bunch of tests, do the following:

  • pick a config file ($CONFIG_FILE),
    • we recommend starting with configs/auto/,
  • create an output directory somewhere ($OUTPUT),
  • open a shell and execute:
    • ./ -o $OUTPUT -c $CONFIG_FILE.

All tests currently require root (or admin) to run. The suite is designed to facilitate running non-root tests, however currently most tests require root in some way or another. The suite will ask you for root permission when it needs it.

You should ensure that none of the test suite files are owned by root. You should never need to be in a root shell at any time; just rely on the tests asking you for root when they need it.

The test framework will output lots of information to the console. The default log level is INFO and should be sufficient for quick start. However, if you wish to change the log level then use the -l parameter.

The types of logging you will see are:

  • INFO: Useful information about the progress of the test.
  • WARNING: Non-fatal issues. Shouldn't require action for quick start.
  • ERROR: Fatal test failures. These will either be due to explicit failure of a test assertion or due to the test framework throwing an exception.
  • DESCRIBE: Each describe output specifies a repro step for the test. When put together they should read like a bullet point list of how to manually reproduce the steps. These steps are all collected together in a file in the output folder for the test for convenience.
  • INTERACTIVE: This indicates steps which require manual interaction. The test will display some information about what steps to take and pause whilst those steps are taken, e.g.
2017-11-17 08:57:41,425 INTERACTIVE:
Connect to the VPN
Press ENTER to continue...
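DESCRIBE and INTERACTIVE are not standard Python log levels. As a rough sketch of how such levels can be layered on top of the logging module (the numeric values and helper names here are assumptions, not the suite's actual implementation):

```python
import logging

# Hypothetical numeric values; the real suite may use different ones.
DESCRIBE = 25
INTERACTIVE = 26

logging.addLevelName(DESCRIBE, "DESCRIBE")
logging.addLevelName(INTERACTIVE, "INTERACTIVE")

logger = logging.getLogger("leak_tests")

def describe(msg):
    # Log a manual-repro step; these can later be collected into a file.
    logger.log(DESCRIBE, msg)

def interactive(msg):
    # Log the instruction, then block until the user has performed it.
    logger.log(INTERACTIVE, msg)
    input("Press ENTER to continue...")
```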

See TODO on how to fully automate tests.

You should see the test framework execute a set of tests and report whether each one succeeded or failed.

We discuss test execution in detail in TODO.

Structure of the Test Suite

Everything here is covered in full detail in Test Suite Overview.

Test Cases

This repo contains many test cases. They can mostly be found in the desktop_local_tests folder. A test case is any Python class which:

  • derives from TestCase,
  • whose name begins with Test.
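A minimal sketch of such a class is below. The `TestCase` stand-in, constructor signature and test name are all assumptions made so the sketch runs standalone; real tests live under desktop_local_tests and use the suite's own base class and helpers:

```python
class TestCase:
    """Stand-in for the suite's base class, so this sketch is self-contained."""

    def __init__(self, devices, parameters):
        self.devices = devices
        self.parameters = parameters


class TestDNSDuringConnect(TestCase):
    """Derives from TestCase and its name begins with 'Test', so the
    runner would discover it automatically."""

    def test(self):
        # A real test would assert on captured DNS traffic here.
        pass
```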

Test classes must have unique names - you will get an error if they don't.

A test case requires a configuration to run. The configuration specifies:

  • what devices the test will use to run,
  • the configuration of the device,
  • parameters for the test,
    • some tests can execute in different ways when different parameters are passed.

Configurations are passed to the test suite via the -c argument. The value of this argument should be a Python file which exposes an attribute TESTS - which should be a list of dictionaries.

Each dictionary is a configuration for a specific test. It tells the test suite to run that test once with the particular settings.
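A hypothetical config file might look like the following. Only the TESTS attribute is stated by this guide; the dictionary keys shown (`name`, `devices`, `parameters`) are illustrative assumptions:

```python
# Hypothetical config file, passed to the suite via -c.
TESTS = [
    {
        "name": "TestDNSDuringConnect",   # which test class to run
        "devices": ["localhost"],          # device IDs from the inventory
        "parameters": {"iterations": 3},   # test-specific parameters
    },
    {
        # The same test can be listed again with different parameters,
        # producing a second, separately configured run.
        "name": "TestDNSDuringConnect",
        "devices": ["localhost"],
        "parameters": {"iterations": 10},
    },
]
```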

Tests can live in any folder. Extra folders can be specified via the TestRunContext. See TODO for more information.

Test Configs

We discuss test configurations in detail in TODO.

Test Devices

Devices are identified using inventory files. Inventory files can live anywhere. There is an example inventory in the devices directory. Inventory files are Python files which expose an attribute DEVICES - which should be a list of dictionaries.

Each dictionary specifies a known device in your inventory. This may be a physical device or a VM.

If no device inventory is specified when tests are run, then the only device available will be your local device on which you run the test suite. This is made available via the localhost device ID.

For the purpose of "quick start" localhost will be adequate and no additional configuration should be necessary, i.e. no device inventory should be needed.
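For illustration, a hypothetical inventory file could look like this. The DEVICES attribute and the localhost device ID come from this guide; the individual keys (`device_id`, `os`, `ip_address`) are assumptions - see the example in the devices directory for real ones:

```python
# Hypothetical device inventory file.
DEVICES = [
    {
        "device_id": "localhost",    # the implicit local device
        "os": "macos",
    },
    {
        "device_id": "test_router",  # a hypothetical physical router
        "os": "linux",
        "ip_address": "192.168.1.1",
    },
]
```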

The framework has been designed to be very generic. It caters for test cases which need multiple devices networked together. Device inventories are used to list all currently available devices to the test orchestrator. Some tests may not use the local device at all except for orchestrating the test runs themselves.

We discuss devices in detail in TODO.

The Test Suite

All test suite code is under the xv_leak_tools folder.

Test execution requires a TestRunContext object which is used to parameterize the test framework itself. The wrapper script /tools/ will process command line arguments and ensure that the test suite is passed:

  • a test output directory,
  • a test run context,
  • a list of test configs,
  • a device inventory.

The real entry point for the test suite is in xv_leak_tools/test_execution/, which receives the above objects from the wrapper script.

Note that there's an additional, higher-level wrapper shell script which should be used to run /tools/. This is just a helper script to ensure the suite can be run in a platform-agnostic way.

When the test suite runs, it roughly does the following:

  • discovers all available test cases (classes deriving from TestCase),
  • iterates through each test configuration,
  • creates an instance of the test case class for the test,
  • finds the devices specified in the test config by looking through the device inventory,
  • creates "connections" - roughly speaking, SSH connections - to all devices,
  • runs the tests, including:
    • setup and teardown,
    • handling success/failure,
    • handling exceptions.
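The steps above can be sketched as a simple loop. This is only an outline under the assumptions used earlier in this page (illustrative config and inventory keys, a stand-in TestCase); the real implementation lives in xv_leak_tools/test_execution/ and also manages connections and logging:

```python
class TestCase:
    """Stand-in base class so the sketch is self-contained."""

    def __init__(self, devices, parameters):
        self.devices = devices
        self.parameters = parameters

    def setup(self): pass
    def run(self): pass
    def teardown(self): pass


def run_tests(test_configs, inventory, test_case_classes):
    results = []
    for config in test_configs:
        case_class = test_case_classes[config["name"]]
        # Resolve device IDs in the config against the inventory.
        devices = [d for d in inventory if d["device_id"] in config["devices"]]
        test = case_class(devices, config.get("parameters", {}))
        try:
            test.setup()
            test.run()
            results.append((config["name"], "PASS"))
        except AssertionError as exc:
            # Explicit failure of a test assertion.
            results.append((config["name"], f"FAIL: {exc}"))
        except Exception as exc:
            # The framework itself threw - also fatal for this test.
            results.append((config["name"], f"ERROR: {exc}"))
        finally:
            test.teardown()
    return results
```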

The test runner will tell you what went wrong and summarise failures. It's similar to most unit testing frameworks, but tailored to leak testing.

Where to go next?

  • Learn about the current test cases: TODO.
  • Learn about building your own configurations: TODO.
  • Learn about creating device inventories: TODO.