
Collision-avoidance

Towards Monocular Vision Based Collision Avoidance Using Deep Reinforcement Learning. A real-environment verification of the algorithm can be seen here; no distance sensors are used. The paper can be found here.

(Demonstration GIFs: collision-avoidance flights 1, 2, 3, and 5)

Overall Network Structure

An RGB image from a monocular camera is converted into a depth image. The estimated depth image is then used as the input to the Dueling Double Deep Q-Network (D3QN).

(Figure: D3QN network structure)
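The sketch below shows this two-stage pipeline with plain placeholder functions; it is not the repository's API, only the data flow (monocular RGB frame → estimated depth map → discrete D3QN action):

import numpy as np

def estimate_depth(rgb):
    # Placeholder for the ResNet-50 based depth network; returns a single-channel stand-in.
    return rgb.mean(axis=-1, keepdims=True)

def select_action(q_values):
    # Greedy choice over the D3QN's discrete action set.
    return int(np.argmax(q_values))

# One control step: RGB frame -> estimated depth -> action index.
rgb_frame = np.random.rand(228, 304, 3).astype(np.float32)  # dummy camera frame
depth_map = estimate_depth(rgb_frame)                        # shape (228, 304, 1)
q_values = np.random.rand(15)                                # stand-in for the D3QN output
print(depth_map.shape, select_action(q_values))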

Depth Estimation

  • TensorFlow version == 1.12.0
  • The depth estimation model is based on the ResNet-50 architecture
  • The Python file that contains the model architecture is located in models
  • Because the trained depth estimation model is too large to host in this repository, you have to download it here.

To run the depth estimation code, you need the following files:

- fcrn.py
- __init__.py
- network.py
- NYU_FCRN-chekpoint
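A minimal sketch of how the downloaded checkpoint might be restored for a single depth prediction, assuming the ResNet50UpProj class and checkpoint layout follow the original FCRN depth-prediction code (TensorFlow 1.x session style); check fcrn.py and the downloaded archive for the exact class name, constructor arguments, and file names:

import numpy as np
import tensorflow as tf
from models.fcrn import ResNet50UpProj  # assumed import path; see models/__init__.py

# FCRN's usual input resolution (assumption).
rgb_in = tf.placeholder(tf.float32, shape=(1, 228, 304, 3))
net = ResNet50UpProj({'data': rgb_in}, 1, 1.0, False)  # batch, keep_prob, is_training

with tf.Session() as sess:
    tf.train.Saver().restore(sess, 'NYU_FCRN-chekpoint/NYU_FCRN.ckpt')  # path is an assumption
    frame = np.zeros((1, 228, 304, 3), dtype=np.float32)  # replace with a real RGB frame
    depth = sess.run(net.get_output(), feed_dict={rgb_in: frame})
    print(depth.shape)  # roughly (1, 128, 160, 1) for FCRN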

Training Environment in Robot Operating System

  • In our setup, Ubuntu 16.04 and ROS Kinetic are used
  • The Training env folder contains figures of the training map in ROS
  • You can use the training environments in model_editor_models
  • Place the editor models in your Gazebo model repository, as shown below
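For example, assuming the default Gazebo model path (~/.gazebo/models); adjust the destination if your setup uses a different model directory:

cp -r model_editor_models/* ~/.gazebo/models/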

Training Environment Setup

1. Spawning the drone for training

The training agent for the drone is hector_quadrotor. Please take a look at the ROS package description and install it. To spawn the training agent in our setup, type the command below:

roslaunch spawn_quadrotor_with_asus_with_laser.launch

To enable motors, type the command below:

rosservice call /enable_motors true
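To check that the motors respond, you can publish a simple velocity command; this assumes the default /cmd_vel topic used by hector_quadrotor:

rostopic pub -1 /cmd_vel geometry_msgs/Twist '{linear: {x: 0.0, y: 0.0, z: 0.5}, angular: {x: 0.0, y: 0.0, z: 0.0}}'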

2. Setting the initial position and velocity of the agent

You can change the initial position and velocity of the agent in ENV.py.

  • To change the spawning position of the drone, change the spawn_table in ENV.py
  • To change the velocity of the drone, change the action_table (three linear speeds, five angular rates); see the sketch after this list
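The snippet below is only an illustration of what these tables might look like; the actual values and the way speeds and rates are combined into discrete actions are defined in ENV.py:

import numpy as np

# Candidate spawn poses (x, y, z, yaw) on the Gazebo map; values are illustrative only.
spawn_table = np.array([
    [ 0.0,  0.0, 1.5, 0.00],
    [ 5.0, -2.0, 1.5, 1.57],
    [-3.0,  4.0, 1.5, 3.14],
])

# Three linear speeds (m/s) and five angular rates (rad/s); values are illustrative only.
linear_speeds = [0.4, 0.8, 1.2]
angular_rates = [-1.0, -0.5, 0.0, 0.5, 1.0]

# One possible way to build the discrete action set (an assumption): every (speed, rate) pair.
action_table = [(v, w) for v in linear_speeds for w in angular_rates]
print(len(action_table))  # 15 actions under this assumption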

Training

To train the model, run python3 D3QN.py. You can change the hyperparameters in D3QN.py.
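The constants below only illustrate the kind of hyperparameters a D3QN training script typically exposes; the names and values are assumptions, so edit the ones actually defined in D3QN.py:

GAMMA = 0.99           # discount factor
LEARNING_RATE = 1e-4   # optimizer step size
BATCH_SIZE = 32        # replay minibatch size
REPLAY_SIZE = 50000    # replay buffer capacity
EPSILON_START = 1.0    # initial exploration rate
EPSILON_END = 0.05     # final exploration rate
TARGET_UPDATE = 10000  # steps between target-network updates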

Testing

  • Simulation test: to test the model, change the trained model's directory (in Test.py) and then run python3 Test.py.
  • Real-world experiment: go to the real_world_test folder and run D3QN_test.py, as shown below.
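For example, for the real-world test:

cd real_world_test
python3 D3QN_test.py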

Citation

@article{kim2022towards,
  title={Towards monocular vision-based autonomous flight through deep reinforcement learning},
  author={Kim, Minwoo and Kim, Jongyun and Jung, Minjae and Oh, Hyondong},
  journal={Expert Systems with Applications},
  volume={198},
  pages={116742},
  year={2022},
  publisher={Elsevier}
}  
