[AAAI 2018] TensorFlow implementation of Action Branching Architectures for Deep Reinforcement Learning.

Action Branching Agents


The Action Branching Agents repository provides a set of deep reinforcement learning agents based on incorporating the action branching architecture into existing reinforcement learning algorithms.

Action Branching Architecture


Discrete-action algorithms have been central to numerous recent successes of deep reinforcement learning. However, applying these algorithms to high-dimensional action tasks requires tackling the combinatorial increase of the number of possible actions with the number of action dimensions. This problem is further exacerbated for continuous-action tasks that require fine control of actions via discretization. To address this problem, we propose the action branching architecture, a novel neural architecture featuring a shared network module followed by several network branches, one for each action dimension. This approach achieves a linear increase of the number of network outputs with the number of degrees of freedom by allowing a level of independence for each individual action dimension.
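The linear growth of network outputs can be illustrated with a toy sketch (not the repo's TensorFlow code; all sizes below are illustrative): a shared module feeds one small head per action dimension, so the network emits n_dims × n_bins values rather than enumerating all n_bins^n_dims joint actions.

```python
import numpy as np

# Toy action-branching forward pass with illustrative sizes.
rng = np.random.default_rng(0)
state_dim, hidden, n_dims, n_bins = 8, 32, 4, 11  # 4 DOF, 11 sub-actions each

W_shared = rng.normal(size=(state_dim, hidden))
branch_heads = [rng.normal(size=(hidden, n_bins)) for _ in range(n_dims)]

s = rng.normal(size=state_dim)
h = np.tanh(s @ W_shared)                      # shared representation
branch_values = [h @ W for W in branch_heads]  # one value vector per dimension

# Linear growth: n_dims * n_bins outputs instead of n_bins ** n_dims combinations
total_outputs = sum(v.size for v in branch_values)
print(total_outputs, n_bins ** n_dims)         # 44 vs 14641
```

Each branch is queried independently at action-selection time, which is what keeps the output layer from growing combinatorially with the number of degrees of freedom.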

Supported Agents

  • Branching Dueling Q-Network (BDQ) (code, paper)


Branching Dueling Q-Network (BDQ) is a novel agent that incorporates the proposed action branching architecture into the Deep Q-Network (DQN) algorithm and adapts a selection of its extensions: Double Q-Learning, Dueling Network Architectures, and Prioritized Experience Replay.
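In the paper's dueling formulation, each branch combines a common state value with its own mean-subtracted advantages, and greedy action selection is an independent argmax per branch. A minimal numeric sketch (illustrative values, not the repo's implementation):

```python
import numpy as np

# Per-branch dueling aggregation: Q_d(s, a_d) = V(s) + (A_d(s, a_d) - mean_a A_d(s, a))
value = 1.5                              # common state value V(s)
advantages = np.array([                  # A_d(s, a_d): one row per branch
    [0.2, -0.1, 0.5],
    [1.0,  0.0, -1.0],
])
q = value + (advantages - advantages.mean(axis=1, keepdims=True))
greedy = q.argmax(axis=1)                # independent argmax per branch
print(greedy)                            # [2 0]
```

Subtracting the per-branch mean makes the decomposition identifiable, exactly as in the single-branch dueling architecture.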

As we show in the paper, BDQ is able to solve numerous continuous control domains via discretization of the action space. Most remarkably, we have shown that BDQ is able to perform well on the Humanoid-v1 domain with a total of 6.5 × 10^25 discrete actions.
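That count can be checked back-of-the-envelope, assuming 33 discrete sub-actions per each of Humanoid-v1's 17 action dimensions (the discretization reported in the paper):

```python
# Joint action space size vs. branching network output size for Humanoid-v1,
# assuming 33 sub-actions per dimension across 17 action dimensions.
n_bins, n_dims = 33, 17
joint_actions = n_bins ** n_dims         # combinatorial joint action space
print(f"{joint_actions:.1e}")            # ~6.5e+25
branch_outputs = n_bins * n_dims         # what a branching network outputs
print(branch_outputs)                    # 561
```

A flat discrete-action network would need one output per joint action; the branching network needs only 561.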

Demonstrations: Reacher3DOF-v0, Reacher4DOF-v0, Reacher5DOF-v0, Reacher6DOF-v0, Reacher-v1, Hopper-v1, Walker2d-v1, Humanoid-v1

Getting Started

You can clone this repository with:

git clone


You can readily train a new model on any continuous control domain compatible with OpenAI Gym by running the training script from the agent's main directory.
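Continuous-control Gym domains are handled by discretizing each action dimension. A minimal sketch of mapping per-branch sub-action indices back to a continuous action (a hypothetical helper for illustration, not this repo's code):

```python
import numpy as np

# Hypothetical discretization helper: uniform bins over a Box action space,
# with illustrative bounds (not taken from any specific Gym environment).
low = np.array([-1.0, -1.0, -2.0])
high = np.array([1.0, 1.0, 2.0])
n_bins = 5

grid = np.linspace(low, high, n_bins)    # shape (n_bins, n_dims)
bin_indices = np.array([0, 2, 4])        # one sub-action index per dimension
action = grid[bin_indices, np.arange(len(low))]
print(action)                            # [-1.  0.  2.]
```

Each branch of the network picks one bin index for its dimension, and the indices are mapped jointly to a continuous action vector the environment can execute.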


Alternatively, you can evaluate a pre-trained model included in the agent's trained_models directory by running the evaluation script from the agent's main directory.


If you use this work, please cite it with the following BibTeX entry:

@inproceedings{tavakoli2018action,
  title     = {Action Branching Architectures for Deep Reinforcement Learning},
  author    = {Tavakoli, Arash and Pardo, Fabio and Kormushev, Petar},
  booktitle = {AAAI Conference on Artificial Intelligence},
  pages     = {4131--4138},
  year      = {2018}
}