
The Effect of Different Reinforcement Learning Algorithms on the Performance of AI for a Survival Game in Minecraft #6

ZhenxiangWang opened this issue Mar 27, 2018 · 1 comment


ZhenxiangWang commented Mar 27, 2018


TITLE

The Effect of Different Reinforcement Learning Algorithms on the Performance of AI for a Survival Game in Minecraft.

GAME

The game comes from mob_fun.py, a demo of the mob_spawner block: it builds an arena, lines it with mob spawners of a given type, and then tries to keep an agent alive. Mobs continue to spawn from these spawners throughout the game.

The agent loses health when a mob hits it; if its health drops to 0, the game ends. Apples are scattered randomly around the arena, and the agent scores points by eating them. The goal of our agent is to survive and collect as high a score as possible. We may change some of the rules to make the game better suited to our experiments.
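
For context, the agent interacts with the game through Malmo's Python API. Below is a minimal sketch of the observe-act loop, loosely following mob_fun.py; the mission XML, the policy() function, and the exact commands are placeholders here, not the demo's literal code.

```python
# Minimal sketch of the Malmo observe-act loop, loosely based on mob_fun.py.
# mission_xml and policy() are placeholders; the real demo builds the arena
# and its mob spawners inside the mission XML.
import json
import time
import MalmoPython

agent_host = MalmoPython.AgentHost()
mission = MalmoPython.MissionSpec(mission_xml, True)  # mission_xml assumed defined
agent_host.startMission(mission, MalmoPython.MissionRecordSpec())

while True:
    world_state = agent_host.getWorldState()
    if not world_state.is_mission_running:
        break  # health reached 0 or the mission timed out
    if world_state.observations:
        # Latest observation: JSON with health, position, nearby entities, etc.
        obs = json.loads(world_state.observations[-1].text)
        agent_host.sendCommand(policy(obs))  # policy() = the RL algorithm under test
    time.sleep(0.05)
```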

HYPOTHESIS

  1. Using Double DQN, Prioritized Experience Replay, or Dueling DQN significantly improves the agent's score and shortens its training time compared with natural DQN (a sketch of the key target-computation difference follows this list).
  2. Combining a value-based reinforcement learning algorithm with a policy-based one lets our agent reach higher scores in less time than either kind of algorithm alone (see the actor-critic sketch after the levels table below).
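
To make hypothesis 1 concrete, here is a hedged sketch of the one step that changes between natural DQN and Double DQN: how the bootstrap target is computed. It assumes PyTorch, with q_net and target_net as placeholder networks.

```python
# Sketch of the bootstrap-target computation that distinguishes natural DQN
# from Double DQN, assuming PyTorch and placeholder networks q_net / target_net.
# rewards, dones: float tensors of shape (batch,); next_states: a state batch.
import torch

def natural_dqn_targets(rewards, next_states, dones, gamma=0.99):
    # The target network both selects and evaluates the next action,
    # which is known to overestimate Q-values.
    next_q = target_net(next_states).max(dim=1).values
    return rewards + gamma * (1.0 - dones) * next_q

def double_dqn_targets(rewards, next_states, dones, gamma=0.99):
    # The online network selects the action; the target network evaluates it.
    # Decoupling selection from evaluation reduces the overestimation bias.
    best_actions = q_net(next_states).argmax(dim=1, keepdim=True)
    next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
    return rewards + gamma * (1.0 - dones) * next_q
```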

INDEPENDENT VARIABLE

The reinforcement learning algorithm used to train the agent.

LEVELS OF INDEPENDENT VARIABLE AND NUMBERS OF REPEATED TRIALS

| Simple Rules (control) | DQN | Double DQN | Prioritized Replay | Dueling DQN | Policy Gradient | Actor-Critic |
| --- | --- | --- | --- | --- | --- | --- |
| 3 trials | 3 trials | 3 trials | 3 trials | 3 trials | 3 trials | 3 trials |
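
Hypothesis 2 concerns the last two levels. As a reference point, here is a minimal one-step actor-critic update that combines a value-based critic with a policy-based actor; `actor`, `critic`, and the single unbatched transition are assumptions for illustration, not our final implementation.

```python
# Minimal one-step actor-critic update: a value-based critic estimating V(s)
# plus a policy-based actor producing action logits. `actor` and `critic` are
# placeholder PyTorch modules; a discrete action space is assumed.
import torch
import torch.nn.functional as F

def actor_critic_loss(state, action, reward, next_state, done, gamma=0.99):
    value = critic(state)                              # V(s)
    next_value = critic(next_state).detach()           # V(s'), no gradient
    td_target = reward + gamma * (1.0 - done) * next_value
    advantage = (td_target - value).detach()           # how much better than expected

    # Critic loss: move V(s) toward the TD target (the value-based component).
    critic_loss = F.mse_loss(value, td_target)

    # Actor loss: raise the log-probability of actions with positive advantage
    # (the policy-based component).
    log_prob = F.log_softmax(actor(state), dim=-1)[action]
    actor_loss = -advantage * log_prob

    return actor_loss + critic_loss
```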

DEPENDENT VARIABLE AND HOW MEASURED

  1. The score the agent reaches by the end of the game, measured as the number of apples eaten.
  2. Agent training time, measured in minutes (a brief logging sketch follows this list).
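
One possible way to record both measurements for a single trial; train_agent(), evaluate_agent(), and `algorithm` are hypothetical names, not existing code.

```python
# One way to record both dependent variables for a single trial.
# train_agent(), evaluate_agent(), and `algorithm` are hypothetical names.
import time

start = time.time()
agent = train_agent(algorithm)                  # one level of the independent variable
training_minutes = (time.time() - start) / 60   # dependent variable 2

apples_eaten = evaluate_agent(agent)            # dependent variable 1: final score
print(f"{algorithm}: {apples_eaten} apples eaten, {training_minutes:.1f} min of training")
```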

CONSTANTS

  1. All agents are trained in arenas of the same size.
  2. All agents are trained under the same game rules and scoring conditions.
  3. The game state is fully observable to all agents.
  4. All agents are trained and compared on the same computing resources.

REFERENCES

  1. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., ... & Chen, Y. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354.
  2. Osband, I., Blundell, C., Pritzel, A., & Van Roy, B. (2016). Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems (pp. 4026-4034).
  3. Ontanón, S., Synnaeve, G., Uriarte, A., Richoux, F., Churchill, D., & Preuss, M. (2013). A survey of real-time strategy game AI research and competition in StarCraft. IEEE Transactions on Computational Intelligence and AI in Games, 5(4), 293-311.
  4. Sutton, R. S., McAllester, D. A., Singh, S. P., & Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems (pp. 1057-1063).
ZhenxiangWang commented:

[Attached plots: agent training time and scores]

ZhenxiangWang changed the title from "AI for Survival Game of MineCraft" to "The Effect of Different Reinforcement Learning Algorithms on the Performance of AI for a Survival Game in Minecraft" on Mar 31, 2018.