CPS-UAV: Unmanned Aerial Vehicle Testing Competition

Important Updates (21.11.2023)

  • Submission requirements and guidelines are now available. Check Here.
  • Make sure to review and integrate the recent updates to the code samples into your code where needed.
  • Make sure to pull/install the latest version of Aerialist.

Unmanned Aerial Vehicles (UAVs) equipped with onboard cameras and various sensors have already demonstrated the possibility of autonomous flights in real environments, leading to great interest in various application scenarios such as crop monitoring, surveillance, and medical and food delivery.

Over the years, support for UAV developers has increased with open-access projects for software and hardware such as the autopilot support provided by PX4 and Ardupilot. However, despite the necessity of systematically testing such complex and automated systems to ensure their safe operation in real-world environments, there has been relatively limited investment in this direction so far.

The UAV Testing Competition organized by the Search-Based and Fuzz Testing (SBFT) workshop is an initiative designed to inspire and encourage the Software Testing Community to direct their attention toward UAVs as a rapidly emerging and crucial domain.

Table of Contents

  • Overview
  • Goal
  • Competition Platform
  • Test Generation
  • Competition Guideline
  • References
  • License
  • Contacts

Overview

Multiple studies have proven that many UAV bugs can be potentially detected before field tests if proper simulation-based testing is in place. This suggests the need for further research on setting up simulation environments that test UAVs' behavior in diverse, complex, and realistic scenarios.

However, the engineering complexity of UAVs and their test environments, together with the difficulty of setting up simulation environments realistic enough to capture the same bugs as physical tests, remain significant obstacles.

In the first edition of the UAV Testing Competition, we aim to provide software testing researchers with a simple platform to facilitate their onboarding in the UAV domain. Using the provided platform and case study, the goal is to use search-based techniques for generating challenging test cases for autonomous vision-based UAV navigation systems.

  • The Software Under Test is PX4-Avoidance, a vision-based autonomous obstacle avoidance system developed on top of PX4-Autopilot.

  • We create challenging scenarios for PX4-Avoidance by placing static obstacles on the UAV's path.

  • The ultimate goal is to find some specific obstacle configurations (size, position) that could lead to a crash or unsafe flight by the autopilot, as seen in the image below.

[Image: sample test plot]

Goal

In the tool competition, each participant presents a robust test generator capable of generating a diverse set of tests. The primary objective is to find potential vulnerabilities within the PX4 obstacle avoidance system. This involves manipulating obstacle sizes and placements within the test environment, with the ultimate goal of either causing the UAV to crash or significantly diverting it from its intended path.

The goals of the tool competition are as follows:

  • The objective is to develop a test generator capable of creating diverse and effective tests to uncover vulnerabilities within the PX4 avoidance system.
  • The generated test will be for a predefined UAV firmware, model, and mission.
  • The generated test will create a challenging environment by manipulating object sizes and placements to cause either UAV crashes or significant deviations in its flight path.

The effectiveness of these generated tests will be measured based on the number of failed cases and the diversity of test scenarios. The goal is to identify potential system weaknesses comprehensively.

Competition Platform

Software Under Test

  • PX4 : PX4 is an open-source autopilot software stack primarily used for controlling unmanned aerial vehicles (UAVs). It provides a flexible and customizable platform for designing and controlling drones, including capabilities for navigation, stabilization, and mission planning. PX4 is compatible with various hardware platforms and is widely used in both academic and commercial drone applications. It supports a range of UAV types, from small quadcopters to fixed-wing aircraft and even VTOL (Vertical Take-Off and Landing) vehicles. Developers and researchers often use PX4 as a foundation for creating and testing new drone capabilities and applications.

  • PX4 Avoidance : PX4 Avoidance is a software module in the PX4 Autopilot ecosystem that provides obstacle detection and avoidance capabilities. PX4 Avoidance uses various sensors and algorithms to help UAVs navigate and avoid obstacles in their environment. It allows UAVs to detect obstacles such as buildings, trees, and other objects in their flight path and make adjustments to their flight path to avoid collisions or navigate around these obstacles safely. Overall, PX4 Avoidance is a critical component for ensuring the safe and reliable operation of UAVs in complex and dynamic environments.

  • PX4 Flight Logs: PX4 flight logs are comprehensive records of a drone's operational data and telemetry during its flights. These logs include detailed information such as GPS coordinates, altitude, motor RPM, sensor data, and flight modes. They are invaluable for troubleshooting, performance analysis, and debugging, as they allow developers and operators to examine precisely what happened during a flight, identify potential issues, and fine-tune the drone's behavior and systems for optimal performance and safety. These logs are stored in a standardized format (.ulg), making them compatible with various analysis and visualization tools for in-depth technical examination. Here is a sample flight log. (A minimal log-reading sketch in Python follows this list.)

  • Gazebo : Gazebo is an open-source 3D robot simulator that provides a realistic and physics-based simulation environment for testing and validating UAVs and robotic systems. PX4 often utilizes Gazebo as a simulation platform to create virtual environments where developers and researchers can test UAVs without the need for physical hardware. This allows for various scenarios, including flight testing, obstacle avoidance, and mission planning, to be tested in a safe and controlled virtual environment. Gazebo simulates the physical properties and dynamics of the UAV and its surroundings, including sensors, wind, and terrain. It is a valuable tool for both software and hardware development, as it enables testing and debugging of UAV control algorithms and systems before deploying them to actual UAV hardware.
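
Since .ulg flight logs are the primary artifact for analyzing a test run, it can be convenient to inspect them programmatically. The following is a minimal sketch assuming the pyulog package (pip install pyulog); the file name flight.ulg and the inspected topic are illustrative only.

# read_log.py (illustrative sketch using pyulog)
from pyulog import ULog

log = ULog("flight.ulg")                         # parse the .ulg flight log
print([d.name for d in log.data_list])           # list the recorded topics

pos = log.get_dataset("vehicle_local_position")  # local position estimates
xs, ys = pos.data["x"], pos.data["y"]            # planar trajectory samples
print(f"{len(xs)} position samples recorded")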

Aerialist: UAV Test Bench

Aerialist (unmanned AERIAL vehIcle teST bench) is a novel test bench for UAV software that automates all the necessary UAV testing steps: setting up the test environment, building and running the UAV firmware code, configuring the simulator with the simulated world properties, connecting the simulated UAV to the firmware and applying proper UAV configurations at startup, scheduling and executing runtime commands, monitoring the UAV at runtime for any issues, and extracting the flight log file after the test completion.

With Aerialist, we aim to provide the competition participants with an easy platform to automate tests on simulated UAVs, allowing them to run the experiments required to overcome the UAV simulation-based testing challenges. The Test Generators submitted to the competition are required to build on top of Aerialist to simplify the evaluation process. Check Aerialist's Documentation for more details on usage.
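
As a rough illustration of how a generated YAML test description might be executed, the snippet below shells out to Aerialist from Python. The command-line entry point and flags shown here are assumptions made for illustration; refer to Aerialist's documentation for the actual interface.

# run_test.py (illustrative; the CLI name and flags below are assumptions, see Aerialist's docs)
import subprocess

# Hypothetical invocation: hand a YAML test description to Aerialist and wait for the run.
result = subprocess.run(
    ["aerialist", "exec", "--test", "case_studies/mission2.yaml"],
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout)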

Test Generation

Competition participants are expected to submit a Test Generator that generates challenging test cases for a given case study.

UAV Test Cases

Aerialist models a UAV test case with the following set of test properties and uses a YAML structure to describe the test.

  • Drone: Software configurations of the UAV model, including all Autopilot parameters and configuration files (e.g., mission plan) required to set up the drone for the test.

  • Environment: Simulation settings such as the used simulator, physics of the simulated UAV, simulation world (e.g., surface material, UAV’s initial position), surrounding objects (e.g., obstacles size, position), weather conditions (e.g., wind, lighting), etc.

  • Commands: Timestamped external commands from the ground control station (GCS) or the remote controller (RC) to the UAV during the flight (e.g., change flight mode, go in a specific direction, enter mission mode).

  • Expectation (optional): A time series of certain sensor readings that the test flights are expected to follow closely.

Using a predefined test-description yaml file is the easiest way to define the test case.

# mission2.yaml
drone:
  port: ros #{sitl, ros, cf}
  params_file: case_studies/mission-params.csv 
  mission_file: case_studies/mission2.plan

simulation:
  simulator: ros #{gazebo, jmavsim, ros} 
  speed: 1
  headless: true
  # obstacles:
  # - size:
  #     l: 10
  #     w: 5
  #     h: 20
  #   position:
  #     x: 10
  #     y: 20
  #     z: 0
  #     r: 0
  # - size:
  #     l: 10
  #     w: 5
  #     h: 20
  #   position:
  #     x: -10
  #     y: 20
  #     z: 0
  #     r: 0

test:
  commands_file: case_studies/mission-commands.csv

[Image: case study 2]

The competition Test Generators are only allowed to manipulate the obstacles in the environment. For simplicity, we only consider box-shaped obstacles. An obstacle is defined by its size (length, width, height), its position in the simulation environment (x, y, z) in meters, and its rotation angle (r) in degrees.

# mission2.yaml
simulation:
  simulator: ros #{gazebo, jmavsim, ros} 
  speed: 1
  headless: true
  obstacles:
  - size:
      l: 10
      w: 5
      h: 20
    position:
      x: 10
      y: 20
      z: 0
      r: 0
  - size:
      l: 10
      w: 5
      h: 20
    position:
      x: -10
      y: 20
      z: 0
      r: 0

The below image shows the drone flight trajectory during the execution of the above test case:

[Image: case study 2 flight trajectory]
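
A test generator can emit such a description programmatically rather than editing YAML by hand. The following is a minimal sketch using PyYAML; the helper function and the output file name are illustrative, and the obstacle values are simply the ones from the example above.

# generate_test.py (illustrative sketch using PyYAML)
import yaml

def make_obstacle(l, w, h, x, y, r=0.0):
    """Build one box-shaped obstacle entry; obstacles sit on the ground (z=0)."""
    return {"size": {"l": l, "w": w, "h": h},
            "position": {"x": x, "y": y, "z": 0, "r": r}}

test = {
    "drone": {
        "port": "ros",
        "params_file": "case_studies/mission-params.csv",
        "mission_file": "case_studies/mission2.plan",
    },
    "simulation": {
        "simulator": "ros",
        "speed": 1,
        "headless": True,
        "obstacles": [
            make_obstacle(10, 5, 20, x=10, y=20),
            make_obstacle(10, 5, 20, x=-10, y=20),
        ],
    },
    "test": {"commands_file": "case_studies/mission-commands.csv"},
}

with open("generated-mission2.yaml", "w") as f:
    yaml.safe_dump(test, f, sort_keys=False)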

Case Studies

The inputs to the test generators are simple test cases without any obstacles in the simulation environment. These case studies include a predefined flight mission, the relevant drone configurations, the simulation configurations, and the commands needed to start the autonomous mission.

The test generators are then expected to place obstacles in the simulation environment, inside a predefined area.

A few sample case studies (similar to the above scenarios) are provided to help you develop your test generators. Other, similar case studies will be used for evaluation.

UAV Test Generators

Given a simulated test case configuration for an autonomous flight (the above-mentioned case studies), the goal is to generate a more challenging simulated test case by introducing obstacles to the environment that force the UAV to get too close to them (i.e., fly at a distance below a predefined safety threshold) while still completing the mission. This creates a risky environment in which the UAV must carry out its mission.

Participants are expected to use search-based methods to find challenging obstacle configurations. The generated test cases (following the Aerialist test case model described above) should respect the following considerations (a validity-checking sketch is provided after the list):

  • The drone is expected to safely avoid all the obstacles on its path. This includes maintaining a safe distance from the surrounding obstacles and not crashing into them.

    • A test execution is considered a Hard Fail if there is a collision with any of the obstacles in the environment.
    • A test execution is considered a Soft Fail if the drone does not maintain a minimum safe distance of 1.5 m to the surrounding obstacles.
  • The obstacle configurations are expected to keep the flight mission physically feasible.

    • Test cases that make it impossible for the UAV to find its path (e.g., by creating a long wall across the drone's path), without causing a hard or soft fail, are considered Invalid.
  • All the obstacles are expected to fit in a given rectangular area as stated in the case study.

  • There can be up to 4 obstacles in each test case.

  • Obstacles must not collide with each other, and they must be placed directly on the ground (z=0).
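
To avoid spending simulation budget on invalid configurations, a generator can check these constraints statically and classify outcomes after execution. The sketch below uses the shapely package; the generation-area bounds, the Obstacle layout, and the assumption that (x, y) is the footprint center are illustrative choices, not competition-defined values.

# validate_obstacles.py (illustrative; area bounds and placement convention are assumptions)
from dataclasses import dataclass
from shapely.geometry import box
from shapely.affinity import rotate

AREA = box(-20, 10, 20, 50)  # hypothetical generation area; the real bounds come from the case study
MAX_OBSTACLES = 4

@dataclass
class Obstacle:
    l: float   # length (m)
    w: float   # width (m)
    h: float   # height (m)
    x: float   # position (m); z is always 0 (on the ground)
    y: float
    r: float   # rotation (degrees)

def footprint(o: Obstacle):
    """2D footprint of a box obstacle, rotated around its (assumed) center (x, y)."""
    base = box(o.x - o.l / 2, o.y - o.w / 2, o.x + o.l / 2, o.y + o.w / 2)
    return rotate(base, o.r, origin="center")

def is_valid(obstacles):
    """Static checks: obstacle count, containment in the area, and no mutual collisions."""
    if len(obstacles) > MAX_OBSTACLES:
        return False
    shapes = [footprint(o) for o in obstacles]
    if not all(s.within(AREA) for s in shapes):
        return False
    return all(not a.intersects(b)
               for i, a in enumerate(shapes) for b in shapes[i + 1:])

def classify(min_distance_to_obstacles):
    """Outcome of an executed flight, given its minimum distance (m) to any obstacle."""
    if min_distance_to_obstacles <= 0.0:
        return "HARD_FAIL"   # collision with an obstacle
    if min_distance_to_obstacles < 1.5:
        return "SOFT_FAIL"   # 1.5 m safety margin violated
    return "PASS"            # mission feasibility must still be checked for validity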

A sample test generator using a random approach is documented and made available here.

Competition Guideline

Submission

Follow the Submission Guideline, prepare your code as explained, and send it to the organizing committee.

  • Submission Deadline: By the end of November 2023

Evaluation

The efficacy of the test generators will be assessed based on two crucial metrics: the number of failed cases and the diversity of the test scenarios. The first metric, the number of failed cases, serves as a straightforward indicator of the test's ability to uncover system weaknesses. A higher number of failures signifies a more effective test generator in this context.

However, it is equally essential to consider the diversity of test cases. Diversifying the test scenarios is critical as it helps ensure that a wide spectrum of potential vulnerabilities is explored. The more varied the test cases, the greater the likelihood of identifying hidden flaws and edge cases that might otherwise go undetected.

The following metrics will be used to evaluate the tests generated by the tools developed:

  • Fault Detection (Test Failure): The test cases will be evaluated for fault detection.
  • Testing Budget: A testing budget will be allocated for generating the test cases.
  • Test Diversity: Greater diversity among the generated tests will be valued.
  • Simplicity: Faults found in less complicated environments (fewer obstacles) will be valued more.
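
As an illustration only (the official diversity measure is defined by the organizers), diversity is often approximated by pairwise distances between test parameter vectors, for example:

# diversity.py (illustrative only; not the competition's official metric)
import math

def obstacle_vector(o):
    """Flatten one obstacle dict (as in the YAML) into a numeric vector."""
    return [o["size"]["l"], o["size"]["w"], o["size"]["h"],
            o["position"]["x"], o["position"]["y"], o["position"]["r"]]

def test_vector(obstacles, max_obstacles=4):
    """Concatenate the obstacle vectors of one test, zero-padding up to max_obstacles."""
    v = [value for o in obstacles for value in obstacle_vector(o)]
    return v + [0.0] * (max_obstacles * 6 - len(v))

def mean_pairwise_distance(tests):
    """Average Euclidean distance between all pairs of generated tests."""
    vectors = [test_vector(t) for t in tests]
    pairs = [(a, b) for i, a in enumerate(vectors) for b in vectors[i + 1:]]
    return sum(math.dist(a, b) for a, b in pairs) / max(len(pairs), 1)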

References

If you use this tool in your research, please cite the following paper:

  • Sajad Khatiri, Sebastiano Panichella, and Paolo Tonella, "Simulation-based Test Case Generation for Unmanned Aerial Vehicles in the Neighborhood of Real Flights," In 2023 16th IEEE International Conference on Software Testing, Verification and Validation (ICST)
@inproceedings{khatiri2023simulation,
  title={Simulation-based test case generation for unmanned aerial vehicles in the neighborhood of real flights},
  author={Khatiri, Sajad and Panichella, Sebastiano and Tonella, Paolo},
  booktitle={2023 16th IEEE International Conference on Software Testing, Verification and Validation (ICST)},
  year={2023},
}

License

The software we developed is distributed under the MIT license. See the license file.

Contacts

Feel free to use the Discussions section to ask your questions and look for answers.

You can also contact us directly via email.
