
Source code for Self-Adaptive Driving in Nonstationary Environments through Conjectural Online Lookahead Adaptation


COLA

The A3C model was originally implemented by Palanisamy (Chapter 8), and the classifier architecture is based on Rashi Sharma's work. Both models are defined in deep.py.


Table of Contents

  • Background
  • Install
  • Usage
  • References
  • Maintainers
  • Contributing
  • Contributors ✨
  • License

Background

Powered by deep representation learning, reinforcement learning (RL) provides an end-to-end learning framework capable of solving self-driving (SD) tasks without manual design. However, time-varying nonstationary environments cause proficient but specialized RL policies to fail at execution time. For example, an RL-based SD policy trained in sunny weather does not generalize well to rainy weather. Even though meta-learning enables the RL agent to adapt to new tasks/environments in a sample-efficient way, its offline operation fails to equip the agent with online adaptation ability when facing nonstationary environments. This work proposes an online meta reinforcement learning algorithm based on conjectural online lookahead adaptation (COLA). COLA determines the online adaptation at every step by maximizing the agent's conjecture of its future performance over a lookahead horizon. Experimental results demonstrate that under dynamically changing weather and lighting conditions, COLA-based self-adaptive driving outperforms the baseline policies in terms of online adaptability.

Install

To create a conda environment:

conda create -n your_env_name python=3.8

Activate it and install the requirements in requirements.txt.

conda activate your_env_name
pip install -r requirements.txt

Download CARLA 0.9.4, and clone macad-gym from its GitHub repository:

  • Fork/Clone the repository to your workspace: git clone && cd macad-gym
  • Create a new conda env named "macad-gym" and install the required packages: conda env create -f conda_env.yml
  • Activate the macad-gym conda python env: source activate macad-gym
  • Install the macad-gym package: pip install -e .
  • Install CARLA PythonAPI: pip install carla==0.9.4
  • Copy the three files in ~/COLA/macad_gym to ~/macad-gym/src/macad_gym/carla, replacing the original files there.

Usage

A3C Training

python async_a2c_agent.py --env Carla-v0 --model-dir ./trained_models/YOUR_MODEL/ --gpu-id 0

A3C Testing

python async_a2c_agent.py --env Carla-v0 --model-dir ./trained_models/YOUR_MODEL/ --test

Classifier Training

python COLA_rl_agent.py --env Carla-v0 --gpu-id 0

Gradient Buffer Collecting

python gradient_COLA_rl_agent.py --env Carla-v0 --gpu-id 0

COLA Executing

python COLA_gradient_agent.py --env Carla-v0 --test --gpu-id 0

The gradient buffer directory can be changed at line 94 of gradient_COLA_rl_agent.py. Before collecting gradients, set "dynamic_on": false in environment/carla_gym/config.json, and modify line 151 of ~/macad-gym/src/macad_gym/carla/scenarios.py to select the weather to collect gradients from: cloudy (1) or rainy (4). Once the gradients are collected, set the "dynamic_on" flag back to true. You can then run the COLA execution command above.
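Since "dynamic_on" has to be flipped off for gradient collection and back on for execution, it can be convenient to script the toggle. Below is a minimal sketch, assuming config.json is a flat JSON object with a boolean "dynamic_on" key; the helper name set_dynamic_weather is hypothetical, not part of the repository.

```python
import json

def set_dynamic_weather(path: str, enabled: bool) -> None:
    """Rewrite the "dynamic_on" flag in a carla_gym-style config.json,
    leaving all other keys untouched."""
    with open(path) as f:
        cfg = json.load(f)
    cfg["dynamic_on"] = enabled
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)

# e.g. before gradient collection:
# set_dynamic_weather("environment/carla_gym/config.json", False)
```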

References

You can find the full paper on arXiv: https://arxiv.org/abs/2210.03209.

Citing:

If you find this work useful in your research, please cite:

@misc{COLA,
  doi = {10.48550/ARXIV.2210.03209},
  url = {https://arxiv.org/abs/2210.03209},
  author = {Li, Tao and Lei, Haozhe and Zhu, Quanyan},
  keywords = {Robotics (cs.RO), FOS: Computer and information sciences},
  title = {Self-Adaptive Driving in Nonstationary Environments through Conjectural Online Lookahead Adaptation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}

Maintainers

Haozhe Lei (@Panshark).

Contributing

Feel free to dive in! Open an issue or submit PRs.

Standard Readme follows the Contributor Covenant Code of Conduct.

Contributors ✨

Thanks goes to these wonderful people (emoji key):


Haozhe Lei

💻 🔣 📖 🤔 🚧 📆 💬 👀 🎨

Tao Li

🎨 📋 🤔 🔣 🖋 💬

This project follows the all-contributors specification. Contributions of any kind welcome!

License

MIT © Haozhe Lei
