Crossgap_IL_RL

Flying through a narrow gap using neural network: an end-to-end planning and control approach

Crossgap_IL_RL is the open-source project for our IROS 2019 paper "Flying through a narrow gap using neural network: an end-to-end planning and control approach" (our preprint version on arXiv, our video on YouTube). It includes some of the training code, pretrained networks, and a simulator (based on AirSim).

Introduction: Our project can be divided into two phases: imitation learning and reinforcement learning. In the first phase, we train our end-to-end policy network by imitating a traditional planning-and-control pipeline. In the second phase, we fine-tune the policy network using reinforcement learning to improve its performance. The framework of our system is shown as follows.

Fine-tuning the network using reinforcement learning:

Our real-world experiments:

Author: Jiarong Lin

1. Prerequisites

1.1 TensorFlow and PyTorch (optional)

Follow the TensorFlow installation and PyTorch installation guides.

1.2 AirSim

Follow the tutorial of Microsoft AirSim to set up your environment.

1.3 OpenAI-baseline

Our reinforcement learning is based on the OpenAI Baselines platform. However, since we train our network by modifying some of its code, this project includes the OpenAI Baselines code, forked from its GitHub repository.

1.4 Python packages

The following packages are needed for this project; you can install them with pip, according to your Python environment:

  • numpy (for matrix computation)
  • opencv-python (cv2)
  • transforms3d (for SE(3) transformations)
  • pickle (part of the Python standard library; no separate installation needed)

1.5 (Optional, for real-world experiments) DJI Onboard-SDK

Clone the DJI Onboard-SDK and switch to branch 3.3:

git clone https://github.com/dji-sdk/Onboard-SDK
cd Onboard-SDK
git checkout 3.3

Install the DJI Onboard-SDK following the tutorial here.

2. Examples

2.1 Testing networks

  • Comparison of the trajectories generated by the traditional method and by the network:

    cd python_scripts/test
    python net_vs_tr_and_pl.py

  • Test loading the policy network:

    cd python_scripts/test
    python test_policy_net.py

2.2 Crossing a narrow gap using the model-based approach.

2.3 Imitation-learning

  • Imitation learning of motion planning.
  • Imitation learning of the SE(3) geometric controller.
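At its core, the imitation-learning phase is supervised regression of the traditional pipeline's output. The sketch below shows one generic behavior-cloning step in PyTorch; the network sizes, input/output dimensions, and data are hypothetical placeholders, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

# Hypothetical policy network: maps state features to control commands.
policy = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for (state, expert_action) pairs collected
# from the traditional planner/controller pipeline.
states = torch.randn(32, 10)
expert_actions = torch.randn(32, 4)

# One behavior-cloning step: regress the expert's command.
pred = policy(states)
loss = loss_fn(pred, expert_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice this step runs over many batches of expert demonstrations before the reinforcement-learning fine-tuning phase begins.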

2.4 Reinforcement-learning

  • Environment setup
  • Reinforcement-learning

2.5 Real-world experiment.

3. Acknowledgments

Thanks to Luqi Wang and Fei Gao; without their contributions, our work could not have been finished as we expected.

4. License

The source code is released under the GPLv2 license.

5. Notice

Since I have transferred from the Hong Kong University of Science and Technology (HKUST) to the University of Hong Kong (HKU), and our new lab is under construction, this project has been paused for several months. Some of the code in this project might not be well structured or well tested. However, we insist on opening our code to share our findings, and we hope some of our current work can help you. Thank you~
