The goal of this project is to attack end-to-end self-driving models using physically realizable adversaries.
| Target Objective | Conceptual Overview | Example |
|---|---|---|
| Collision Attack | | |
| Hijacking Attack | | |
- Ubuntu 16.04
- Dedicated GPU with relevant CUDA drivers
- Docker-CE (for docker method)
Note: We highly recommend using the dockerized version of our repository: it is system-independent and does not affect the packages installed on your system.
- Clone the AdverseDrive repository:
```bash
git clone https://github.com/xz-group/AdverseDrive
```
- Export the Carla paths to `PYTHONPATH`:
```bash
source export_paths.sh
```
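If the paths were exported correctly, the Carla client modules should now be importable from Python. As an optional sanity check (a sketch assuming the Carla 0.8-style PythonClient layout; adjust the import if your Carla version lays the client out differently):
```python
# Optional sanity check that the Carla client package is now importable.
# Assumes a Carla 0.8-style PythonClient layout; not part of this repository.
try:
    from carla.client import make_carla_client  # noqa: F401
    print("Carla client package found on PYTHONPATH")
except ImportError as err:
    print("Carla not importable; re-run `source export_paths.sh`:", err)
```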
- Install the required Python packages:
```bash
pip3 install -r requirements.txt
```
- Download the modified version of the Carla simulator[1], carla-adversedrive.tar.gz, then extract it and navigate into the extracted directory:
```bash
tar xvzf carla-adversedrive.tar.gz
cd carla-adversedrive
```
- Run the Carla simulator in a terminal:
```bash
./CarlaUE4.sh -windowed -ResX=800 -ResY=600
```
This starts Carla as a server on port 2000. Give it about 10-30 seconds to start up depending on your system.
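Optionally, you can confirm the server is up before moving on. This is a generic port check, not part of the repository; `localhost` and `2000` mirror the defaults above:
```python
# Generic connectivity check: confirm a server is accepting connections on
# port 2000 (the default Carla port used above). Not part of this repository.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(5)
    if sock.connect_ex(("localhost", 2000)) == 0:
        print("Carla server is up on port 2000")
    else:
        print("Nothing listening on port 2000 yet; give Carla more time")
```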
- On a new terminal, start a Python HTTP server. This allows the Carla simulator to read the generated attack images and load them into the simulation:
```bash
sh run_adv_server.sh
```
Note: This requires port 8000 to be free.
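To confirm the attack server is serving images, you can fetch one by hand. A small sketch; the exact image path is hypothetical and follows the `adversary/adversary_{town_name}.png` convention described below:
```python
# Fetch an adversary image over the local HTTP server started above.
# The path is hypothetical, following this repository's
# adversary/adversary_{town_name}.png convention; substitute your town name.
from urllib.request import urlopen

url = "http://localhost:8000/adversary/adversary_Town01.png"
with urlopen(url, timeout=5) as resp:
    print(resp.status, resp.headers.get("Content-Type"))  # expect: 200 image/png
```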
- On another new terminal, run the infraction objective Python script:
```bash
python3 start_infraction_experiments.py
```
Note: the Jupyter notebook version of this script, start_infraction_experiments.ipynb, describes each step in detail; we recommend it when starting out with this repository. Run `jupyter notebook` to start a Jupyter server in this directory.
The above steps set up and run the following loop (a minimal sketch in code follows this list):

1. An experiment is defined by the parameters in `config/infraction_parameters.json`, including the Carla town being used, the task (straight, turn-left, turn-right), the scenes, the port number used by Carla, and the Bayesian optimizer[3] parameters.
2. The script runs the baseline scenario, in which the Carla Imitation Learning[2] (IL) agent drives a vehicle from point A to point B as defined by the experiment scene and task, and returns a metric from the run (e.g. the sum of infractions over all frames). The baseline scenario has no attack.
3. Based on the returned metric (the objective function we are trying to maximize), the Bayesian optimizer suggests parameters for the attack; the attack image is generated by `adversary_generator.py` and placed in `adversary/adversary_{town_name}.png`.
4. Carla reads the adversary image over the HTTP server and places it at pre-determined locations on the road.
5. The IL model runs through this attack scenario and returns a metric.
6. Steps 3-5 are repeated for a set number of experiments, over the course of which successful attacks are found.
It is expected that you have some experience with Docker, and that you have installed and tested your setup to ensure you have GPU access from within Docker containers. A quick way to test this is by running:
```bash
# docker >= 19.03
docker run --gpus all,capabilities=utility nvidia/cuda:9.0-base nvidia-smi
# docker < 19.03 (requires nvidia-docker2)
docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
```
You should see the standard `nvidia-smi` output.
- Clone the AdverseDrive repository:
```bash
git clone https://github.com/xz-group/AdverseDrive
```
- Pull the modified version of the Carla simulator:
```bash
docker pull xzgroup/carla:latest
```
- Pull the `xzgroup/adversedrive` Docker image, which contains all the prerequisite packages for running experiments (and is server-friendly):
```bash
docker pull xzgroup/adversedrive:latest
```
- Run our dockerized Carla simulator in a terminal:
```bash
sh run_carla_docker.sh
```
This starts Carla as a server on port 2000. Give it about 10-30 seconds to start up depending on your system.
- On a new terminal, start a Python HTTP server. This allows the Carla simulator to read the generated attack images and load them into the simulation:
```bash
sh run_adv_server.sh
```
Note: This requires port 8000 to be free.
- On another new terminal, run the `xzgroup/adversedrive` Docker container:
```bash
sh run_docker.sh
```
- Run the infraction objective Python script:
```bash
python3 start_infraction_experiments.py
```
- [1] Carla Simulator: https://github.com/carla-simulator/carla
- [2] Imitation Learning: https://github.com/carla-simulator/imitation-learning
- [3] Bayesian Optimization: https://github.com/fmfn/BayesianOptimization
If you use our work, kindly cite us using the following:
```bibtex
@misc{boloor2019,
    title={Attacking Vision-based Perception in End-to-End Autonomous Driving Models},
    author={Adith Boloor and Karthik Garimella and Xin He and
            Christopher Gill and Yevgeniy Vorobeychik and Xuan Zhang},
    year={2019},
    eprint={1910.01907},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```