
Test configurations

Guillaume Tucker edited this page Mar 12, 2019 · 14 revisions

KernelCI job configuration is defined in YAML files for builds and tests. This page covers the tests; see also the related page about builds.

All the top-level test configurations are contained in a YAML file: test-configs.yaml. This defines everything that can be run on test platforms. It can be parsed and turned into Python objects using lib.test_configs. The primary use-case for this data is to generate and submit LAVA job definitions as done by lava-v2-jobs-from-api.py.

There are several sections in this file to describe the following main things:

  • file systems are user-space archives with varying sets of test utilities installed
  • device types are a category of test platforms
  • test plans are a series of tests to be run in one job
  • test configurations are combinations of all of the above

The test configurations are the main entries as they define what actually gets run. They refer to entries in other sections of the file in order to provide full combinations, for example to define that an igt test plan using a debian file system should be run on an odroid-xu3 device.

In addition to those sections, there are some filters (whitelists, blacklists) to fine-tune which tests get run. For example, some boards are not supported in old stable kernel branches, so they will typically blacklist those branches to only run tests on newer kernels.

How to add a device type

Each device type has an entry in the device_types dictionary. Here's an example:

  beagle_xm:
    name: 'beagle-xm'
    mach: omap2
    class: arm-dtb
    boot_method: uboot
    dtb: 'omap3-beagle-xm.dtb'
    filters:
      - blacklist: *allmodconfig_filter
      - blacklist: {kernel: ['v3.14']}

The attributes are:

  • name needs to match what LAVA labs use to identify the device type
  • mach is to define a family of SoCs, originally from the arch/arm/mach-* board file names
  • class is used here to define a particular class of devices such as arm-dtb or arm64-dtb
  • arch is to define the CPU architecture following the Linux kernel names ('arm', 'arm64', 'x86_64'...)
  • boot_method is to define how to boot the device (uboot, grub...)
  • dtb is an optional attribute to specify the name of the device tree. By default, device types of the arm-dtb and arm64-dtb class will use name if there is no explicit dtb attribute.
  • filters is an arbitrary list of filters to only run tests with certain configuration combinations. See the Filters section below for more details.
  • flags is an arbitrary list of strings with properties of the device type, to also filter out some job configurations. See the Flags section below for more details.

Note: In this example, the class is architecture-specific so it also defines the arch value, which is why arch does not appear explicitly here.
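For comparison, here's what a device type without a device tree could look like. This is a hypothetical sketch for illustration (the entry name, device name and attribute values are assumptions, not taken from test-configs.yaml): since there is no architecture-specific class, arch has to be set explicitly, and no dtb attribute is needed.

```yaml
  qemu_x86:
    name: 'qemu-x86'      # hypothetical device type name
    arch: x86_64          # explicit, as no arm-dtb/arm64-dtb class provides it
    boot_method: grub     # x86 devices typically boot with GRUB
```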

Flags

Device types can also have a list of flags, for example:

  meson_gxbb_p200:
    name: 'meson-gxbb-p200'
    mach: amlogic
    class: arm64-dtb
    boot_method: uboot
    flags: ['lpae', 'big_endian']
    filters:
      - blacklist: {defconfig: ['allnoconfig', 'allmodconfig']}

This can then be used to filter out some jobs, for example lava-v2-jobs-from-api.py will pass the big_endian flag when the kernel build was for big-endian.

Flags currently in use are:

  • big_endian to tell whether the device can boot big-endian kernels
  • lpae to tell whether the device can boot kernels built with LPAE enabled (Large Physical Address Extension for ARMv7)
  • fastboot to tell if the device can boot with fastboot (stored in job meta-data but not actively used)

How to add a test plan

Each test plan has a set of template files in the templates directory and an entry in the test_plans dictionary. For example:

  v4l2:
    rootfs: debian_stretchtests_ramdisk

It's required to specify a rootfs attribute which points to an entry in the file_systems dictionary. The root file system should contain all the tools and test suites required to run the test plan.
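As a minimal sketch, a new test plan entry could therefore look like this (the plan name my_testsuite is hypothetical; buildroot_ramdisk is one of the file systems defined later on this page):

```yaml
  my_testsuite:                  # hypothetical test plan name
    rootfs: buildroot_ramdisk    # must match an entry in file_systems
```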

When generating LAVA test job definitions, the path to the test template file is created using this default pattern:

'{plan}/{category}-{method}-{protocol}-{rootfs}-{plan}-template.jinja2'

The plan and rootfs values come from the test plan definition. The other values come from the device type and file system definitions (method is the boot method, protocol is how to download the kernel etc...).

So when adding a test plan, typically there will be one template with only the test steps and other templates inheriting it to add configuration specific steps. For example, still with the v4l2 test plan:

generic-depthcharge-tftp-ramdisk-v4l2-template.jinja2
generic-uboot-tftp-ramdisk-v4l2-template.jinja2
v4l2.jinja2

There are 2 templates to run this test plan on devices that can boot with either U-Boot or Depthcharge. They both include the test steps defined in v4l2.jinja2.

How to add a test configuration

Defining device types, file systems and test plans is necessary but not sufficient to run tests. There also needs to be an entry in the test_configs list of dictionaries, which essentially binds a device to a list of test plans. For example, this device type is configured to run several test plans:

  - device_type: rk3288_veyron_jaq
    test_plans: [boot, boot_nfs, sleep, usb, v4l2, igt, cros_ec]

It's possible to have several entries with the same device_type in order to define special filters. For example, some tests may need to only run on that device in a specific lab, or with a specific defconfig or tree etc...
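For instance, the same device type could appear twice: once unrestricted, and once with a plan restricted to a particular lab. This is a hypothetical sketch (the plan split and the lab name are assumptions for illustration):

```yaml
  - device_type: rk3288_veyron_jaq
    test_plans: [boot, sleep]

  - device_type: rk3288_veyron_jaq
    test_plans: [igt]
    filters:
      - whitelist: {lab: ['lab-collabora']}   # hypothetical lab name
```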

Filters

Filters are implemented in lib.test_configs and all have a match() method to determine whether a set of configuration options is compatible with the filter definition. It returns True if the test should be run, or False otherwise.

Configuration filters use the following parameters as provided by lava-v2-jobs-from-api.py:

  • arch is the CPU architecture name
  • defconfig is the full defconfig name
  • kernel is the full kernel version name
  • lab is the lab name

There are several types of filters:

  • whitelist to only run a test if all the filter conditions are met

    - whitelist: {defconfig: ['bcm2835_defconfig']}

The test will only be run if bcm2835_defconfig is in the defconfig name.

  • blacklist to not run a test if any of the filter conditions is met

    - blacklist: {lab: ['lab-baylibre']}

The test will not be run if lab-baylibre is in the lab name.

  • combination to only run a test if a given set of values is present in the filter conditions

    - combination: &arch_defconfig_filter
        keys: ['arch', 'defconfig']
        values:
          - ['arm', 'multi_v7_defconfig']
          - ['arm64', 'defconfig']
          - ['x86', 'x86_64_defconfig']

The test will only be run if the provided arch and defconfig pair exactly matches one of the 3 defined value combinations.
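Since the combination filter above carries a YAML anchor (&arch_defconfig_filter), other entries can reuse it by alias, in the same way the device type example earlier references *allmodconfig_filter. A sketch with a hypothetical device type entry:

```yaml
  some_device:                                # hypothetical device type name
    name: 'some-device'
    filters:
      - combination: *arch_defconfig_filter   # reuses the anchored filter
```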

Default filters

In order to avoid duplicating or referencing the same filters everywhere, there are some default filters which apply to all test configurations. Any filter explicitly defined will take precedence over the default ones.

  • test_plan_default_filters

This filter definition acts as the default for all test plans. It's essentially there to only test the relevant defconfig for each arch (i.e. multi_v7_defconfig on arm, defconfig on arm64 etc...).

  • device_default_filters

Similarly, this defines the default filters for all the devices. Typically it disables things that can't be run on most devices, such as the allmodconfig kernels as they are too large to boot from a ramdisk with all the modules.
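Based on the descriptions above, the two default filters could be sketched roughly like this (the exact contents in test-configs.yaml may differ; this only illustrates their shape):

```yaml
test_plan_default_filters:
  - combination:
      keys: ['arch', 'defconfig']
      values:
        - ['arm', 'multi_v7_defconfig']
        - ['arm64', 'defconfig']

device_default_filters:
  - blacklist: {defconfig: ['allmodconfig']}  # too large to boot from a ramdisk
```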

How to add a file system

File systems provide all the user-space files. They are required to contain the necessary dependencies to run the test plans associated with them.

Each file system has a type to define the base URL and architecture names, for example:

  buildroot:
    url: 'http://storage.kernelci.org/images/rootfs/buildroot/kci-2018.05'
    arch_map:
      arm64be: {arch: arm64, endian: big}
      armeb:   {arch: arm,   endian: big}
      armel:   {arch: arm}

  • url is the base URL where the file systems can be downloaded
  • arch_map is a dictionary to translate kernel architecture names into file system specific names

Then file systems can be defined for each type, with some additional information to work out the full URL of each variant. For example:

  buildroot_ramdisk:
    type: buildroot
    ramdisk: '{arch}/base/rootfs.cpio.gz'

  buildroot_nfs:
    type: buildroot
    nfs: '{arch}/base/rootfs.tar.xz'

These file systems both use the same type. They provide different URLs for different variants (ramdisk and NFS in this case). The URL names such as ramdisk or nfs are arbitrary and used by lava-v2-jobs-from-api.py to populate LAVA job definitions. The {arch} template value will be replaced by the matching entry in the arch_map for the file system type, or by the regular kernel architecture name if there is no match.
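Putting it together, the full URL can be worked out by hand. For a big-endian arm64 kernel using buildroot_ramdisk, the arm64be entry of the arch_map matches, so the resolution goes roughly like this:

```yaml
# kernel arch: arm64, endian: big   ->  arch_map entry: arm64be
# ramdisk: '{arch}/base/rootfs.cpio.gz'  ->  'arm64be/base/rootfs.cpio.gz'
# full URL (base url + path):
#   http://storage.kernelci.org/images/rootfs/buildroot/kci-2018.05/arm64be/base/rootfs.cpio.gz
```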
