Navigating the Skies: A Deep Reinforcement Learning Model with Enhanced Safety and Explainability

Introduction

This repository contains the source code for "Navigating the Skies: A Deep Reinforcement Learning Model with Enhanced Safety and Explainability", which presents a novel deep reinforcement learning (DRL) controller that aids conflict resolution for autonomous free flight. To study its safety under adversarial attacks, we additionally propose an adversarial attack strategy that can impose both safety-oriented and efficiency-oriented attacks.

Requirements

To install the requirements, run:

pip install -r requirements.txt

Details

Environment

The environment definitions are in envs:

  • envs/SimpleATC_env_global.py is for the traditional DQN agent with fixed airways and global perception

  • envs/SimpleATC_env_global_v2.py is for the safety-aware DQN (SafeDQN) agent with fixed airways and global perception

  • envs/SimpleATC_env_local.py is for the traditional DQN agent with fixed airways and local perception

  • envs/SimpleATC_env_local_v2.py is for the safety-aware DQN (SafeDQN) agent with fixed airways and local perception

  • envs/SimpleATC_env_local_x.py is for the traditional DQN agent with random airways and local perception

  • envs/SimpleATC_env_local_x_v2.py is for the safety-aware DQN (SafeDQN) agent with random airways and local perception

The environment parameters are defined in envs/config.py, where you can adjust them according to your own needs.
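
As a rough illustration of how these environments can be exercised, here is a minimal sketch that runs random actions in one of them. It assumes a standard gym-style reset/step API, and the class name SimpleEnv is a placeholder for whatever class envs/SimpleATC_env_local.py actually defines.

# a minimal sketch, assuming a gym-style API; "SimpleEnv" is a hypothetical
# name -- replace it with the class actually defined in envs/SimpleATC_env_local.py
from envs.SimpleATC_env_local import SimpleEnv

env = SimpleEnv()
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random actions, just to exercise the environment
    obs, reward, done, info = env.step(action)
    env.render()
env.close()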

DQN Agents

By importing different environments from envs, the different models can be trained and evaluated with the scripts in agents:

  • agents/dqn_simple_env.py is for the traditional DQN agent
  • agents/dqn_simple_env_v2.py is for the safety-aware DQN (SafeDQN) agent
# take the traditional DQN as an example

# For training:
python dqn_simple_env.py --train=True --save_path=" "

# For evaluation:
python dqn_simple_env.py --load_path=" "

You can find the DQN structure in models/dqn_model.
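
As a rough sketch of what such a Q-network typically looks like (the observation size, number of actions, and hidden widths below are assumptions, not the values used in models/dqn_model):

# a minimal PyTorch Q-network sketch; all dimensions are placeholder assumptions
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, obs_dim=8, n_actions=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),  # one Q-value per discrete action
        )

    def forward(self, obs):
        return self.net(obs)

# greedy action selection from the Q-values
q_net = QNetwork()
obs = torch.zeros(1, 8)
action = q_net(obs).argmax(dim=1).item()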

Adversarial Attacks

The adversarial attack methods are in attacks:

  • v1 is for the traditional DQN agent
  • v2 is for the safety-aware DQN (SafeDQN) agent
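
The attack scripts themselves define the exact perturbations and timing. As a loose sketch of the general idea behind such observation attacks, an FGSM-style perturbation (which may differ from the methods actually implemented in attacks/) pushes the observation in the direction that degrades the agent's currently preferred action:

# FGSM-style observation perturbation, shown only to illustrate the general idea;
# the attacks in attacks/ may use different losses, budgets, and timing rules
import torch
import torch.nn.functional as F

def fgsm_perturb(q_net, obs, epsilon=0.01):
    # perturb obs so that the Q-network's currently preferred action scores worse
    obs = obs.clone().detach().requires_grad_(True)
    q_values = q_net(obs)
    best_action = q_values.argmax(dim=1)
    loss = F.cross_entropy(q_values, best_action)  # loss of keeping the preferred action
    loss.backward()
    return (obs + epsilon * obs.grad.sign()).detach()  # L-infinity step of size epsilon

# usage with the QNetwork sketch above:
# adv_obs = fgsm_perturb(q_net, torch.zeros(1, 8))

A uniform attack would apply such a perturbation at every timestep, while a strategically-timed attack applies it only at selected timesteps.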

Demo

Here we present demos of the four models in 10-route scenarios, with and without adversarial attacks. For safeDQN and safeDQN-X, we only present results under safety-oriented attacks.

|           | Without Attack   | Uniform Attack   | Strategically-Timed Attack |
| --------- | ---------------- | ---------------- | -------------------------- |
| DQN       | (demo animation) | (demo animation) | (demo animation)           |
| DQN-X     | (demo animation) | (demo animation) | (demo animation)           |
| safeDQN   | (demo animation) | (demo animation) | (demo animation)           |
| safeDQN-X | (demo animation) | (demo animation) | (demo animation)           |

Collaborators

  • Lei Wang 💻
  • Yuankai Wu 💻
