Atari_Pong_PPO/DQN

This repository contains three algorithms for the OpenAI Atari game Pong. The main agent algorithms (DQN, DDPG, and PPO) are in the 'algos' directory. In the end, only DQN works; DDPG and PPO cannot train the agent efficiently.
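For reference, the core DQN update can be sketched roughly as follows. This is a minimal illustration, not the code from 'algos': names like `q_net`, `target_net`, and the `batch` layout are hypothetical.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma: float = 0.99) -> torch.Tensor:
    """One-step TD loss on a replay-buffer batch (a rough sketch)."""
    states, actions, rewards, next_states, dones = batch
    # Q-values of the actions actually taken
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped targets from a frozen target network
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q
    return F.smooth_l1_loss(q, target)
```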

Update at 2021/01/04

The reason PPO and DDPG did not work at first is that the convolution layers were too simple, so the extracted features were not useful enough for the agent to train on.

So I changed the convolution layers in 'ppo_networks.py' under 'utils' and added 'ddpg_net_complicate.py'. With the more complex convolution layers, PPO works well and trains the agent effectively. A sketch of such a stack is shown below.
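As a rough illustration, a deeper Atari encoder along the lines of the standard three-layer convolutional stack looks like the sketch below. This is an assumption for illustration only: the class name, layer sizes, and the 4-frame 84x84 input are hypothetical, not the exact contents of 'ppo_networks.py' or 'ddpg_net_complicate.py'.

```python
import torch
import torch.nn as nn

class AtariConvEncoder(nn.Module):
    """Deeper convolutional feature extractor for stacked Atari frames.

    A minimal sketch assuming 4 stacked 84x84 grayscale frames; the
    actual layers in the repository's networks may differ.
    """
    def __init__(self, in_channels: int = 4, feature_dim: int = 512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # 84x84 input -> 7x7x64 feature map after the three conv layers
        self.fc = nn.Sequential(nn.Linear(64 * 7 * 7, feature_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x / 255.0))
```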

However, because of the difference between DDPG and PPO (DDPG is a deterministic-action algorithm, while PPO uses a stochastic policy, so DDPG relies entirely on its actor network to pick each deterministic action), DDPG still cannot train the agent effectively. The sketch below illustrates the difference in action selection.
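A minimal sketch of the contrast, assuming hypothetical stand-in networks (`ddpg_actor` maps a state to an action value; `ppo_policy` outputs logits over Pong's discrete actions); these names are illustrative, not the actors in 'algos':

```python
import torch
from torch.distributions import Categorical

def ddpg_select(ddpg_actor, state: torch.Tensor) -> torch.Tensor:
    # Deterministic: the actor's output *is* the action, so
    # exploration must come from externally added noise.
    return ddpg_actor(state)

def ppo_select(ppo_policy, state: torch.Tensor) -> torch.Tensor:
    # Stochastic: sample an action from the policy distribution,
    # so exploration is built into the policy itself.
    dist = Categorical(logits=ppo_policy(state))
    return dist.sample()
```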

NOTE!

This code will no longer be updated.

Please see ShAw7ock/emdqn_torch for DQN code on Atari environments.

Thanks for using ShAw7ock's code.
