Nov05/udacity-deep-reinforcement-learning

👉 Setup Python environment for the repo


👉 Unity environment Tennis vector game (Project Submission)

✅ setup Python environment


👉 AlphaZero


👉 Unity environment Reacher-v2 vector game (Project Submission)

✅ setup Python environment

✅ entry points

✅ Result: A DDPG model was trained in one Unity Reacher-v2 environment with 1 agent (1 robot arm) for 155 episodes, then evaluated in 3 environments (each with 1 agent) in parallel for 150 consecutive episodes, reaching an average score of 33.92 (0.26), where 0.26 is the standard deviation of scores across the environments. The trained model was also tested controlling 20 agents per environment in 4 parallel environments, scoring 34.24 (0.10).

  • evaluation with graphics

    Notes:

    • The parallel envs above, each with its own 1 (or 20) agents, were controlled by one single DDPG model at the same time.
    • The observation dimension [num_envs, num_agents (per env), state_size] is converted to [num_envs*num_agents, state_size] before passing through the neural networks.
    • During training, the action dimension is [mini_batch_size (replay batch), action_size];
      during evaluation, the local network outputs actions with dimension [num_envs*num_agents, action_size], which is converted back to [num_envs, num_agents, action_size] to step the envs. (See the reshape sketch after this list.)
  • train and eval scores

  • monitor train-eval scores with tensorboard

  • DDPG neural networks architecture

  • evaluation result (in 3 envs for 150 consecutive episodes)

  • saved files (check the folder)

    • trained model
    • train log (human readable):
      all the configuration, including training hyperparameters, network architecture, and train/eval scores, can be found here.
    • tf_log (tensorflow log, will be read by the plot modules)
    • eval log (human readable)
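
    A minimal sketch of the observation/action reshaping described in the notes above; the dimension names follow the notes, and the one-layer actor is a stand-in for the real DDPG network:

        import torch

        num_envs, num_agents, state_size, action_size = 4, 20, 33, 4  ## Unity Reacher-v2 sizes
        actor = torch.nn.Linear(state_size, action_size)  ## stand-in for the DDPG actor network

        obs = torch.rand(num_envs, num_agents, state_size)        ## observations from the envs
        flat_obs = obs.reshape(num_envs*num_agents, state_size)   ## flatten env/agent dims
        flat_actions = actor(flat_obs)                            ## [num_envs*num_agents, action_size]
        actions = flat_actions.reshape(num_envs, num_agents, action_size)  ## back, to step the envs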

✅ major efforts in coding

  • All the code is integrated with ShangtongZhang's deeprl framework, which uses some OpenAI Baselines functionality.

  • One task can step multiple envs, either in a single process or in multiple processes, and multiple tasks can be stepped synchronously.

  • To enable multiprocessing of Unity environments, the following code in python/unityagents/rpc_communicator.py had to be modified: the class-level Pipe() shared by all communicators is removed, and each RpcCommunicator instance now creates its own Pipe() during initialization.

    class UnityToExternalServicerImplementation(UnityToExternalServicer):
        # parent_conn, child_conn = Pipe() ## removed by nov05 (class-level Pipe was shared by all instances)
    ...
    class RpcCommunicator(Communicator):
        def initialize(self, inputs: UnityInput) -> UnityOutput: # type: ignore
            try:
                self.unity_to_external = UnityToExternalServicerImplementation()
                ## each instance now owns its own Pipe, so envs in separate processes don't collide
                self.unity_to_external.parent_conn, self.unity_to_external.child_conn = Pipe() ## added by nov05
  • Task UML diagram

  • Agent UML diagram

  • launch multiple Unity environments in parallel (not used in the project) from an executable file, using Python subprocess and multiprocessing, without MLAgents (a sketch follows below)
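
    A minimal sketch of such a launcher using subprocess; the executable path and the --port command-line flag (the convention unityagents uses to assign each worker its own port) are assumptions here:

        import subprocess

        EXE_PATH = './Reacher.exe'  ## hypothetical path to the Unity executable
        BASE_PORT = 5005            ## assumed base port; each env gets its own

        ## launch 4 environment processes in parallel, one port each
        procs = [subprocess.Popen([EXE_PATH, '--port', str(BASE_PORT + i)])
                 for i in range(4)]

        ## ... communicate with each env over its port, then shut down
        for p in procs:
            p.terminate()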

✅ reference



👉 OpenAI Gym's Atari Pong pixel game



👉 Unity ML-Agents Banana Collectors (Project Submission)

  1. For this toy game, two Deep Q-Network methods were tried out. Since the observations (states) are simple (not in pixels), convolutional layers are not used, and the evaluation results confirm that linear layers are sufficient for solving the problem. (A sketch of the Double DQN target follows this list.)
    • Double DQN, with 3 linear layers (hidden dims: 256*64, later tried with 64*64)
    • Dueling DQN, with 2 linear layers + 2 split linear layers (hidden dims: 64*64)
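
  For reference, a minimal sketch of the Double DQN target computation (not code from the repo); the tensor names next_states, rewards, dones and the discount gamma are assumptions:

        ## Double DQN: the local network picks the greedy action,
        ## the target network evaluates it
        best_actions = qnetwork_local(next_states).detach().argmax(1, keepdim=True)
        q_next = qnetwork_target(next_states).detach().gather(1, best_actions)
        q_targets = rewards + gamma * q_next * (1 - dones)  ## no bootstrap at episode end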

▪️ The Dueling DQN architecture is shown below.

(Figure: the dueling network architecture; the green module combines the value and advantage streams into Q-values.)

▪️ Since both the advantage and the value stream propagate gradients to the last convolutional layer in the backward pass, we rescale the combined gradient entering the last convolutional layer by 1/√2. This simple heuristic mildly increases stability (Wang et al., 2016; in this project the layers are linear rather than convolutional).
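
In symbols, the aggregation implemented in the code below combines the streams as

    Q(s, a) = V(s) + \left( A(s, a) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a') \right)

subtracting the mean advantage so that the value and advantage streams stay identifiable.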

    import torch.nn as nn
    import torch.nn.functional as F

    class DuelingQNetwork(nn.Module):  ## class name assumed; the repo's name may differ
        def __init__(self, state_size, action_size):
            super().__init__()
            self.layer1 = nn.Linear(state_size, 64)
            self.layer2 = nn.Linear(64, 64)
            self.layer3_adv = nn.Linear(in_features=64, out_features=action_size) ## advantage
            self.layer3_val = nn.Linear(in_features=64, out_features=1) ## state value

        def forward(self, state):
            x = F.relu(self.layer1(state))
            x = F.relu(self.layer2(x))
            adv, val = self.layer3_adv(x), self.layer3_val(x)
            ## val broadcasts across the action dim; rescale by 1/sqrt(2) per the heuristic above
            return (val + adv - adv.mean(1, keepdim=True)) / (2**0.5)
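
A hypothetical shape check, using the Banana environment's 37-dimensional state and 4 actions:

        import torch
        net = DuelingQNetwork(state_size=37, action_size=4)
        q_values = net(torch.rand(32, 37))  ## -> [32, 4] Q-values for a batch of 32 states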

▪️ In addition, we clip the gradients to have their norm less than or equal to 10. This clipping is not standard practice in deep RL, but common in recurrent network training (Bengio et al., 2013).

        ## applied after loss.backward() and before optimizer.step():
        ## clip gradient norms to at most 10
        nn.utils.clip_grad_norm_(self.qnetwork_local.parameters(), 10.)
        nn.utils.clip_grad_norm_(self.qnetwork_target.parameters(), 10.)
  2. The following picture shows the train and eval scores (rewards) for both architectures. Since this is a toy project, the trained models were not formally evaluated; roughly, Dueling DQN performs slightly better, with an average score of 17 vs. 13 for Double DQN over 10 episodes.

  3. Project artifacts:


👉 Logs

2024-04-10 p2 Unity Reacher v2 submission
2024-03-07 Python code to launch multiple Unity environments in parallel from an executable file
...
2024-02-14 Banana game project submission
2024-02-11 Unity MLAgent Banana env set up
2024-02-10 repo cloned



Deep Reinforcement Learning Nanodegree

Trained Agents

This repository contains material related to Udacity's Deep Reinforcement Learning Nanodegree program.

Table of Contents

Tutorials

The tutorials lead you through implementing various algorithms in reinforcement learning. All of the code is in PyTorch (v0.4) and Python 3.

  • Dynamic Programming: Implement Dynamic Programming algorithms such as Policy Evaluation, Policy Improvement, Policy Iteration, and Value Iteration.
  • Monte Carlo: Implement Monte Carlo methods for prediction and control.
  • Temporal-Difference: Implement Temporal-Difference methods such as Sarsa, Q-Learning, and Expected Sarsa.
  • Discretization: Learn how to discretize continuous state spaces, and solve the Mountain Car environment.
  • Tile Coding: Implement a method for discretizing continuous state spaces that enables better generalization.
  • Deep Q-Network: Explore how to use a Deep Q-Network (DQN) to navigate a space vehicle without crashing.
  • Robotics: Use a C++ API to train reinforcement learning agents from virtual robotic simulation in 3D. (External link)
  • Hill Climbing: Use hill climbing with adaptive noise scaling to balance a pole on a moving cart.
  • Cross-Entropy Method: Use the cross-entropy method to train a car to navigate a steep hill.
  • REINFORCE: Learn how to use Monte Carlo Policy Gradients to solve a classic control task.
  • Proximal Policy Optimization: Explore how to use Proximal Policy Optimization (PPO) to solve a classic reinforcement learning task. (Coming soon!)
  • Deep Deterministic Policy Gradients: Explore how to use Deep Deterministic Policy Gradients (DDPG) with OpenAI Gym environments.
    • Pendulum: Use OpenAI Gym's Pendulum environment.
    • BipedalWalker: Use OpenAI Gym's BipedalWalker environment.
  • Finance: Train an agent to discover optimal trading strategies.

Labs / Projects

The labs and projects can be found below. All of the projects use rich simulation environments from Unity ML-Agents. In the Deep Reinforcement Learning Nanodegree program, you will receive a review of your project. These reviews are meant to give you personalized feedback and to tell you what can be improved in your code.

  • The Taxi Problem: In this lab, you will train a taxi to pick up and drop off passengers.
  • Navigation: In the first project, you will train an agent to collect yellow bananas while avoiding blue bananas.
  • Continuous Control: In the second project, you will train a robotic arm to reach target locations.
  • Collaboration and Competition: In the third project, you will train a pair of agents to play tennis!

Resources

OpenAI Gym Benchmarks

Classic Control

Box2d

Toy Text

Dependencies

To set up your python environment to run the code in this repository, follow the instructions below.

  1. Create (and activate) a new environment with Python 3.6.

    • Linux or Mac:
    conda create --name drlnd python=3.6
    source activate drlnd
    • Windows:
    conda create --name drlnd python=3.6 
    activate drlnd
  2. If running in Windows, ensure you have the "Build Tools for Visual Studio 2019" installed from this site. This article may also be very helpful. This was confirmed to work in Windows 10 Home.

  3. Follow the instructions in this repository to perform a minimal install of OpenAI gym.

    • Next, install the classic control environment group by following the instructions here.
    • Then, install the box2d environment group by following the instructions here.
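    For reference, one way to do this from a clone of the gym repository, using the install extras gym provided at the time (the exact commands are an assumption; defer to the linked instructions):

    git clone https://github.com/openai/gym.git
    cd gym
    pip install -e .                     ## minimal install
    pip install -e '.[classic_control]'  ## classic control environments
    pip install -e '.[box2d]'            ## box2d environments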
  4. Clone the repository (if you haven't already!), and navigate to the python/ folder. Then, install several dependencies.

    git clone https://github.com/udacity/deep-reinforcement-learning.git
    cd deep-reinforcement-learning/python
    pip install .
  5. Create an IPython kernel for the drlnd environment.

    python -m ipykernel install --user --name drlnd --display-name "drlnd"
  6. Before running code in a notebook, change the kernel to match the drlnd environment by using the drop-down Kernel menu.


Want to learn more?

Come learn with us in the Deep Reinforcement Learning Nanodegree program at Udacity!
