Design of experiments for BusODD High Fidelity Simulation #45

WJaworskiRobotec opened this issue Aug 2, 2022 · 9 comments
@WJaworskiRobotec

WJaworskiRobotec commented Aug 2, 2022

Description

To prepare an initial set of scenarios to be tested in the High Fidelity Simulators (AWSIM, MORAI SIM, CARLA, etc.).

Purpose

To validate autoware.universe in a more realistic environment.

Definition of Done

  • List of scenarios created and shared with the ODD WG
@YoshinoriTsutake

YoshinoriTsutake commented Aug 3, 2022

In my opinion, at first it would be better to know something like

  • where Autoware is supposed to run in the map
  • what the map / route is like
    • how many lanes
    • how long
    • how large
    • what road objects there are (traffic lights, pedestrian crossings)

It could also be a test item to make Autoware drive on the routes without NPCs in a simulator, mainly to check whether localization works well or not.

Then, additionally, to find other test items, it would be better to know:

  • what kind of objects or NPCs are supposed to be in / around the route where Autoware moves
  • how NPCs are supposed to behave there

And then, I think we would be able to start discussing what scenarios could be tried out with AWSIM or MORAI SIM.

@WJaworskiRobotec

ODD WG scenarios

@WJaworskiRobotec

WJaworskiRobotec commented Aug 4, 2022

Components to be tested:

1. Localization

  • Launched only the localization stack of Autoware: localization launch
  • Ground truth published by the simulator
  • Vehicle controlled manually (or with a simple path-following script) on the predefined paths
  • Localization accuracy calculated using the localization evaluator (see the sketch after this list)

2. Perception

  • Launched only the perception stack of Autoware: perception launch
  • Ground truth about visible objects published by the simulator (semantic segmentation to be added in the future)
  • Ego vehicle is static
  • Several scenarios with NPCs spawned around the ego vehicle - both static and dynamic.
  • Perception algorithm accuracy calculated using the perception evaluators (in progress)

3. End-to-End testing

  • Launched Localization, Perception, Planning and Control from Autoware
  • Predefined paths + NPCs behavior
  • Calculated metrics using all available evaluators
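
As a rough illustration of the localization test above, here is a minimal sketch of an accuracy metric computed against the simulator-published ground truth (the function and types are hypothetical; the actual localization evaluator may differ):

```python
import math
from dataclasses import dataclass
from typing import List


@dataclass
class Pose2D:
    # Simplified 2D pose; the real evaluator would use full 3D poses with covariance.
    x: float
    y: float
    yaw: float


def localization_rmse(estimated: List[Pose2D], ground_truth: List[Pose2D]) -> float:
    """Root-mean-square position error between the estimated trajectory and the
    ground truth published by the simulator. Assumes both lists are already
    time-synchronized and have the same length."""
    assert len(estimated) == len(ground_truth) and estimated
    squared_errors = [
        (e.x - g.x) ** 2 + (e.y - g.y) ** 2
        for e, g in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(squared_errors) / len(squared_errors))
```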

@sglee-morai

I'd like to discuss the format first.

  • How to describe test cases.
  • But more specifically, how to describe a scenario.

However, to keep the discussion from getting too complicated, I didn't write down things that are rather obvious, e.g. ego vehicle model = ITRI bus model in each simulator.

Elements to Define a Test Case

1) Ego vehicle specification

  • What sensors are attached to the ego vehicle model
  • What features of Autoware are enabled (How the ego vehicle is controlled)

2) Test Scenario

  • Will be discussed in detail in the following section

3) Evaluation method

  • Evaluation metric
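
To make the proposed format concrete, here is a minimal sketch of how a test case could be written down as data (all class and field names are assumptions, not an agreed format):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class EgoVehicleSpec:
    sensors: List[str]                     # e.g. ["lidar_top", "camera_front", "gnss"]
    enabled_autoware_features: List[str]   # how the ego vehicle is controlled


@dataclass
class TestCase:
    ego: EgoVehicleSpec
    scenario_id: str                       # reference to a test scenario (see the next comment)
    evaluation_metrics: List[str] = field(default_factory=list)
```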

@sglee-morai

Elements to Define a Test Scenario

1) Ego vehicle routes

2) NPC vehicles & pedestrians

  • Random NPCs
    • Average No. of NPCs around the ego vehicle
    • NPC behavior options
  • Event-driven NPCs
    • Each NPC's route, behavior, etc.

3) Static Obstacles

  • Which static objects are placed & where each one is placed (transform)

4) Time & Weather

  • Time
  • Weather Conditions
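
Continuing the sketch from the previous comment, the scenario elements above could be captured along these lines (again, the names are only illustrative):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class RandomNpcConfig:
    average_npc_count: int                 # average No. of NPCs around the ego vehicle
    behavior_options: Optional[str] = None


@dataclass
class EventDrivenNpc:
    route: List[Tuple[float, float]]       # waypoints (x, y)
    behavior: str


@dataclass
class StaticObstacle:
    object_type: str
    transform: Tuple[float, float, float]  # x, y, yaw


@dataclass
class TestScenario:
    ego_route: str                         # e.g. "ITRI Route 01"
    random_npcs: Optional[RandomNpcConfig] = None
    event_driven_npcs: List[EventDrivenNpc] = field(default_factory=list)
    static_obstacles: List[StaticObstacle] = field(default_factory=list)
    time_of_day: str = "day"
    weather: str = "sunny"
```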

@sglee-morai

sglee-morai commented Aug 8, 2022

I'd like to give an example set of test scenarios for localization node testing, which @WJaworskiRobotec mentioned in the first part of his comment.

Please let me know whether this format is good enough or not, and of course, let's talk about the contents of the scenario as well.

Test Scenario Example for Localization Node Testing

1) Ego vehicle routes

(2 options in total)
1.1) ITRI Route 01
1.2) ITRI Route 02

[Image: ITRI Campus Route 01]
[Image: ITRI Campus Route 02]

2) NPC vehicles & pedestrians

2.1) No NPC. (Only the Ego vehicle is moving around)

2.2) Low Density

  • Random NPCs
    • 5 vehicles around the ego vehicle on average, including parked vehicles.
    • No pedestrians.
    • NPC behavior options (can be discussed later)
  • Event-driven NPCs
    • No vehicles or pedestrians

2.3) High Density

  • Random NPCs
    • 15 vehicles around the ego vehicle on average, including parked vehicles.
    • No pedestrians.
    • NPC behavior options (can be discussed later)
  • Event-driven NPCs
    • No vehicles or pedestrians

NOTE: how to define "around the ego vehicle" can be discussed later.

3) Static Obstacles

3.1) No Static Obstacles

4) Time & Weather

4.1) Day & Sunny Weather

Total Cases: 2 x 3 x 1 x 1 = 6
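
A quick sketch to enumerate the combinations above and confirm the total of 6 cases (labels are only illustrative):

```python
from itertools import product

ego_routes = ["ITRI Route 01", "ITRI Route 02"]
npc_settings = ["no_npc", "low_density", "high_density"]
static_obstacles = ["none"]
time_weather = ["day_sunny"]

# Cartesian product of the four option lists: 2 x 3 x 1 x 1 = 6 test cases.
test_cases = list(product(ego_routes, npc_settings, static_obstacles, time_weather))
print(len(test_cases))  # 6
for route, npcs, obstacles, tw in test_cases:
    print(route, npcs, obstacles, tw)
```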

@WJaworskiRobotec

As decided during the Simulation WG, for now we will use slightly extended versions of the scenarios prepared by Sugwan for E2E testing. We will add several configurations of each traffic level.

Next year, AWSIM will support the OpenSCENARIO format, and then we will execute the ODD WG scenarios as well.

The exact definition of scenarios for the ITRI and ISUZU demonstrations of the Bus ODD will be described in separate issues.

@WJaworskiRobotec

Components of the AD stack to evaluate:

  • Perception (object detection)
  • Localization
  • Planning
  • Control

To properly assess the quality of the AD stack components, the predefined E2E scenarios need to cover a variety of road situations.

Road Situations:

  • Following another vehicle
  • Passing parked vehicles
  • Meeting a vehicle driving in the opposite direction
  • Intersection scenarios

[Images: examples of the road situations listed above]

To get valuable output from the experiments that enables comparison of different versions of Autoware, the following metrics should be calculated and a report should be created for each scenario.

Metrics:

  • Overall result
    • T/F if vehicle managed to complete the scenario
  • Planning/Control modules
    • Deviation metric
    • Obstacle distance
    • Time to collision
    • Trajectory metrics
  • Perception module:
    • Traffic light detection accuracy
    • Lidar segmentation
    • Lidar object detection / classification
    • Camera object detection / classification
  • Localization module:
    • Localization accuracy metric

Due to the current limitations of AWSIM (no GT data for perception, no support for OpenSCENARIO), only the following metrics can be calculated:

  • Overall result
    • T/F if vehicle managed to complete the scenario
  • Planning/Control modules
    • Deviation metric
    • Obstacle distance
    • Time to collision
    • Trajectory metrics
  • Localization module:
    • Localization accuracy metric
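
As an illustration, the currently calculable metrics could be collected into a per-scenario report along the following lines (the report structure and the simple TTC helper are assumptions, not the existing Autoware evaluators):

```python
from dataclasses import dataclass
from typing import Optional


def time_to_collision(gap_m: float, closing_speed_mps: float) -> Optional[float]:
    """Simple TTC estimate: distance to the obstacle divided by the closing speed.
    Returns None when the gap is not closing."""
    if closing_speed_mps <= 0.0:
        return None
    return gap_m / closing_speed_mps


@dataclass
class ScenarioReport:
    scenario_name: str
    completed: bool                           # T/F if the vehicle managed to complete the scenario
    max_lateral_deviation_m: float            # deviation metric
    min_obstacle_distance_m: float            # obstacle distance
    min_time_to_collision_s: Optional[float]  # time to collision
    localization_rmse_m: float                # localization accuracy metric
```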

It is not possible to create exact scenarios defining the road situations. Because of that, the idea is to use the Random Traffic feature to create scenarios covering some of the road situations. The most convenient way to do that would be to create one binary with the Random Traffic parameters controllable from outside the simulation (seed, number of NPCs), to make it easy to play around with scenarios. Then we could create several configurations and evaluate Autoware against a set of scenarios like this:

  • Random Traffic Low 1 (5 vehicles)
  • Random Traffic Low 2 (5 vehicles)
  • Random Traffic Medium 1 (15 vehicles)
  • Random Traffic Medium 2 (15 vehicles)
  • Random Traffic High 1 (40 vehicles)
  • Random Traffic High 2 (40 vehicles)

All executed against both Path 1 and Path 2 of the Ego Vehicle (a parameter sketch follows below):

[Images: Ego Vehicle Path 1 and Path 2]
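
Here is a sketch of the externally controllable Random Traffic parameters and the resulting run matrix, assuming seed and NPC count are the parameters exposed by the binary (AWSIM's actual interface may differ):

```python
from dataclasses import dataclass
from itertools import product
from typing import List


@dataclass
class RandomTrafficRun:
    path: str       # ego vehicle path
    npc_count: int  # number of random NPC vehicles
    seed: int       # RNG seed, so runs stay reproducible across Autoware versions


traffic_levels = [("Low", 5), ("Medium", 15), ("High", 40)]
seeds = [1, 2]                   # two configurations per traffic level
paths = ["Path 1", "Path 2"]

runs: List[RandomTrafficRun] = [
    RandomTrafficRun(path=path, npc_count=count, seed=seed)
    for path, (_, count), seed in product(paths, traffic_levels, seeds)
]
print(len(runs))  # 2 paths x 3 levels x 2 seeds = 12 runs
```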

@stale

stale bot commented Dec 10, 2022

This issue has been automatically marked as stale because it has not had recent activity.
