Autoware Mini is a minimalistic Python-based autonomy software stack. It is built on Python and ROS 1 to make it easy to get started and tinker. It uses Autoware messages to define the interfaces between its modules, aiming to be compatible with Autoware. Autoware Mini currently works on ROS Noetic (Ubuntu 20.04, and through Conda RoboStack also on many other Linux versions). The software is open source with a friendly MIT license.
Our goals with Autoware Mini were:
- easy to get started with --> minimal amount of dependencies
- simple and pedagogical --> simple Python nodes and ROS 1
- easy to implement machine learning based approaches --> Python
It is not production-level software, but is aimed at teaching and research. At the same time, we have validated the software with a real car in real traffic in the city center of Tartu, Estonia.
The key modules of Autoware Mini are:
- Localization - determines vehicle position and speed. Can be implemented using GNSS, lidar positioning, visual positioning, etc.
- Global planner - given the current position and destination, determines the global path to the destination. Makes use of the Lanelet2 map.
- Obstacle detection - produces detected objects based on lidar, radar or camera readings. Includes tracking and prediction.
- Traffic light detection - determines the status of stoplines, i.e. whether they are green or red. A red stopline acts like an obstacle for the local planner.
- Local planner - given the global path and obstacles, plans a local path that avoids obstacles and respects traffic lights.
- Follower - follows the local path given by the local planner, matching the target speeds at different points of the trajectory.
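To give a flavor of how the modules communicate, here is a minimal sketch of a node that consumes the obstacle detection output. It is not part of the codebase, and the `/detected_objects` topic name is an illustrative assumption, but it shows the kind of simple Python ROS 1 node, tied together by Autoware messages, that the stack is built from:

```python
#!/usr/bin/env python3
# Illustrative sketch only, not part of the Autoware Mini codebase.
# Shows the style of a simple Python ROS 1 node using Autoware messages.
# The topic name /detected_objects is an assumption for illustration.
import rospy
from autoware_msgs.msg import DetectedObjectArray


def objects_callback(msg):
    # Report how many obstacles the local planner would have to consider.
    rospy.loginfo("Received %d detected objects", len(msg.objects))


if __name__ == '__main__':
    rospy.init_node('detected_objects_logger')
    rospy.Subscriber('/detected_objects', DetectedObjectArray, objects_callback)
    rospy.spin()
```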
Here are a couple of short videos introducing the Autoware Mini features.
Some of the nodes need an NVIDIA GPU, CUDA and cuDNN. At this point we suggest installing both the latest CUDA and CUDA 11.8, which seems to be needed by ONNX Runtime. Note that the default setup also runs without a GPU.
```
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda cuda-11-8 libcudnn8
```
Create the workspace

```
mkdir -p autoware_mini_ws/src
cd autoware_mini_ws/src
```
Clone the repos
```
git clone https://github.com/UT-ADL/autoware_mini.git
# not needed for the simplest planner simulation
git clone https://github.com/UT-ADL/vehicle_platform.git
# if using Carla simulation
git clone --recurse-submodules https://github.com/carla-simulator/ros-bridge.git carla_ros_bridge
```
Install system dependencies
```
rosdep update
rosdep install --from-paths . --ignore-src -r -y
```
Install Python dependencies
```
pip install -r autoware_mini/requirements.txt
# only when planning to use GPU based clustering
pip install -r autoware_mini/requirements_cuml.txt
```
Build the workspace
```
cd ..
catkin build
```
Source the workspace environment
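Assuming the default catkin layout, that is:

```
source devel/setup.bash
```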
As this needs to be run every time before launching the software, you might want to add a similar line to your `~/.bashrc`.
Planner simulation is very lightweight and has the fewest dependencies. It should be possible to run it on any modern laptop without a GPU.
```
roslaunch autoware_mini start_sim.launch
```
You should see an RViz window with the default map. To start driving, you need to give the vehicle an initial position with the 2D Pose Estimate button and a destination using the 2D Nav Goal button. Static obstacles can be placed or removed with the Publish Point button. The initial position can also be changed during movement.
To test planner simulation with real-time traffic light status from Tartu:
```
roslaunch autoware_mini start_sim.launch tfl_detector:=mqtt
```
Running the autonomy stack against recorded sensor readings is a convenient way to test the detection nodes. An example bag file can be downloaded from here and should be saved into the `data/bags` directory.
```
roslaunch autoware_mini start_bag.launch
```
The example bag file is launched by default. To launch the stack against any other bag file, include `bag_file:=<name of the bag file in data/bags directory>` in the command line, as in the example below.
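For example, with a hypothetical recording named `my_drive.bag` placed in `data/bags`:

```
roslaunch autoware_mini start_bag.launch bag_file:=my_drive.bag
```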
The detection topics in the bag are remapped to dummy topic names and new detections are generated by the autonomy stack. By default the `lidar_cluster` detection algorithm is used, which works both on CPU and GPU. To use the GPU-only neural-network-based SFA detector, include `detector:=lidar_sfa` in the command line:
```
roslaunch autoware_mini start_bag.launch detector:=lidar_sfa
```
Another `detector` argument value worth trying is `lidar_sfa_radar_fusion`. Notice that blue dots represent lidar detections, red dots represent radar detections and green dots represent fused detections.
Another possible test is to run camera-based traffic light detection against the bag:
```
roslaunch autoware_mini start_bag.launch tfl_detector:=camera
```
To see the traffic light detections, enable Detections > Traffic lights > Left ROI image and Right ROI image in RViz.
Download Carla 0.9.13 and the Tartu map package (Tartu.tar.gz).

Extract the simulator with `tar xzvf CARLA_0.9.13.tar.gz`. We will call this extracted folder the `<CARLA ROOT>` directory.

Copy Tartu.tar.gz into the `<CARLA ROOT>/Import` folder and run `<CARLA ROOT>/ImportAssets.sh`. This will install the Tartu map. (You can now delete the Tartu.tar.gz file from the Import folder.)
Since we will be referring to `<CARLA ROOT>` a lot, let's export it as an environment variable. Make sure to replace the path with the location where Carla was extracted.
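For example (the path below is a placeholder, adjust it to your setup):

```
export CARLA_ROOT=/path/to/CARLA_0.9.13
```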
Now, enter the following command. (NOTE: Here we assume that `CARLA_ROOT` was set by the previous command.)
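Assuming the standard CARLA 0.9.13 layout, the command makes the CARLA Python API visible to ROS by adding its egg to `PYTHONPATH`, roughly:

```
export PYTHONPATH=$PYTHONPATH:$CARLA_ROOT/PythonAPI/carla/dist/carla-0.9.13-py3.7-linux-x86_64.egg
```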
Note: It will be convenient if the above variables are exported automatically whenever you open a terminal. Putting the above exports in your `~/.bashrc` will reduce the hassle of exporting them every time.
Install system dependencies:
```
sudo apt install libomp5
```
In a new terminal (assuming the environment variables are exported), run the Carla simulator by entering the following command:
```
$CARLA_ROOT/CarlaUE4.sh -prefernvidia -quality-level=Low
```
In a new terminal (assuming the environment variables are exported), run the following command. This runs the Tartu environment of Carla with minimal sensors and our autonomy stack. The detected objects and traffic light statuses come directly from Carla.
```
roslaunch autoware_mini start_carla.launch
```
In RViz, enable Simulation > Carla camera view or Carla image view to see the third-person view behind the vehicle. Set the destination as usual with the 2D Nav Goal button.
You can also run the full Carla sensor simulation and use the actual detection nodes. For example, to launch Carla with the cluster-based detector:
```
roslaunch autoware_mini start_carla.launch detector:=lidar_cluster
```
Or to launch Carla with camera-based traffic light detection:
```
roslaunch autoware_mini start_carla.launch tfl_detector:=camera
```
- Clone Scenario Runner to a directory of your choice
```
git clone git@github.com:carla-simulator/scenario_runner.git
```
- Install requirements
```
pip install -r scenario_runner/requirements.txt
```
- Point the environment variable SCENARIO_RUNNER_ROOT to the Scenario Runner location (an example export is shown after this list)
- Launch Autoware Mini with the `use_scenario_runner:=true` parameter. At the moment you need to manually set the destination for the ego car.

```
roslaunch autoware_mini start_carla.launch use_scenario_runner:=true
```
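The environment variable from the step above can be set like this (the path is a placeholder):

```
export SCENARIO_RUNNER_ROOT=/path/to/scenario_runner
```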
Ensure all dependencies are installed:
```
sudo apt install -y build-essential libeigen3-dev libjsoncpp-dev libspdlog-dev libcurl4-openssl-dev
```
Go to the autoware_mini src directory:
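Assuming the workspace layout created earlier, that is:

```
cd autoware_mini_ws/src
```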
Clone the latest Ouster driver repository:
```
git clone --recurse-submodules https://github.com/ouster-lidar/ouster-ros.git
```
Move to the Autoware mini catkin workspace:
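Assuming you are still in the `src` directory from the previous step:

```
cd ..
```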
Build the Ouster driver:
```
catkin build --cmake-args -DCMAKE_BUILD_TYPE=Release
```
Launch Autoware Mini with the Lexus platform:

```
roslaunch autoware_mini start_lexus.launch
```