neunms/Reinforcement-learning-on-graphs-A-survey


Published by IEEE Transactions on Emerging Topics in Computational Intelligence

This collection of papers summarizes research on graph reinforcement learning for the convenience of researchers.

Mingshuo Nie,

Northeastern University, China.

Email: niemingshuo@stumail.neu.edu.cn

Abstract

Graph mining tasks arise in many application domains, including social networks, biological networks, transportation, and e-commerce, and have received great attention from the theoretical and algorithmic design communities in recent years. There has also been pioneering work employing research-rich Reinforcement Learning (RL) techniques to address graph mining tasks. However, these fusion works are dispersed across different research domains, which makes them difficult to compare. In this survey, we provide a comprehensive overview of these fusion works and generalize them under a unified formulation, Graph Reinforcement Learning (GRL). We further discuss the applications of GRL methods across various domains and propose the key challenges and advantages of integrating graph mining and RL methods. Furthermore, we propose important directions and challenges to be solved in the future. To our knowledge, this is the latest comprehensive survey of GRL; it provides a global view and a learning resource for scholars. Based on our review, we create a collection of papers both for interested scholars who want to enter this rapidly developing domain and for experts who would like to compare GRL methods.

Citation

If you find this work useful in your research, please consider citing:


@article{nie2023reinforcement,
  author={Nie, Mingshuo and Chen, Dongming and Wang, Dongqi},
  journal={IEEE Transactions on Emerging Topics in Computational Intelligence}, 
  title={Reinforcement Learning on Graphs: A Survey}, 
  year={2023},
  volume={7},
  number={4},
  pages={1065-1082},
  doi={10.1109/TETCI.2022.3222545}
}


Awesome Graph Reinforcement Learning


Quick Look

The papers in the collection are categorized as follows:

RL method

All reinforcement learning methods used in the surveyed literature are listed below.

| RL method | Abbr. | Year | Paper |
| --- | --- | --- | --- |
| Markov Decision Process | MDP | \ | \ |
| Monte Carlo Tree Search | MCTS | \ | \ |
| Q-learning | Q-learning | 1992 | Paper |
| REINFORCE | REINFORCE | 1992 | Paper |
| Actor-Critic | AC | 1999 | Paper |
| Bernoulli Multi-armed Bandit | BMAB | 2005 | Paper |
| Neural Fitted Q-iteration | NFQI | 2005 | Paper |
| Deep Q-learning Network | DQN | 2015 | Paper |
| Double DQN | DDQN | 2016 | Paper |
| Advantage Actor-Critic | A2C | 2016 | Paper |
| Asynchronous Advantage Actor-Critic | A3C | 2016 | Paper |
| Deep Deterministic Policy Gradient | DDPG | 2016 | Paper |
| Proximal Policy Optimization | PPO | 2017 | Paper |
| Cascaded DQN | CDQN | 2019 | Paper |
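To give a flavor of how the listed RL methods apply to graphs, here is a minimal, illustrative sketch of tabular Q-learning (the oldest value-based method in the table) used for navigation on a toy graph. This example is not from any surveyed paper; the graph, rewards, and hyperparameters are assumptions chosen only for demonstration.

```python
import random

def q_learning_on_graph(adj, start, goal, episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning for navigating to a goal node on a small graph.

    adj: dict mapping node -> list of neighbor nodes (an action = move to a neighbor).
    Reward (illustrative assumption): +1 on reaching `goal`, -0.01 per step otherwise.
    """
    # one Q-value per (node, neighbor) pair
    q = {(s, a): 0.0 for s in adj for a in adj[s]}
    for _ in range(episodes):
        s = start
        for _ in range(50):  # cap episode length
            # epsilon-greedy action selection over neighbors
            if random.random() < eps:
                a = random.choice(adj[s])
            else:
                a = max(adj[s], key=lambda n: q[(s, n)])
            r = 1.0 if a == goal else -0.01
            best_next = 0.0 if a == goal else max(q[(a, n)] for n in adj[a])
            # standard Q-learning temporal-difference update
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = a
            if s == goal:
                break
    return q

random.seed(0)
# toy graph: chain 0 - 1 - 2 - 3 (goal) with a shortcut edge 0 - 2
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
q = q_learning_on_graph(adj, start=0, goal=3)
# after training, the greedy policy from node 0 prefers the shortcut via node 2
best_from_0 = max(adj[0], key=lambda n: q[(0, n)])
```

GRL methods in the survey typically replace this Q-table with learned representations (e.g., graph neural network encoders), but the underlying decision process over graph-structured states and actions is the same.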
