Algorithms | Environment (Name & Goal) | Environment GIF | Plots |
---|---|---|---|
Policy Iteration | Frozen Lake: The player makes moves until they reach the goal or fall in a hole. The lake is slippery (unless disabled), so the player may sometimes move perpendicular to the intended direction. | ![]() ![]() | - |
Value Iteration | Taxi-v3: The taxi starts at a random location within the grid. The passenger starts at one of the designated pick-up locations and has a randomly assigned destination (one of the four designated locations). | ![]() ![]() ![]() | - |
Monte Carlo Exploring Starts | Blackjack-v1: A card game where the goal is to beat the dealer by obtaining a card sum closer to 21 (without going over 21) than the dealer's. | ![]() | ![]() ![]() |
Sarsa | CliffWalking-v0: Reach the goal without falling off the cliff. | ![]() | ![]() |
Q-learning | CliffWalking-v0: Reach the goal without falling off the cliff. | ![]() | ![]() |
Expected Sarsa | CliffWalking-v0: Reach the goal without falling off the cliff. | ![]() | ![]() |
Double Q-learning | CliffWalking-v0: Reach the goal without falling off the cliff. | ![]() | ![]() |
n-step Bootstrapping (TODO) | - | - | - |
Dyna-Q | ShortcutMazeEnv (custom-made env): Reach the goal while dodging obstacles. | ![]() ![]() | ![]() |
Prioritized Sweeping | ShortcutMazeEnv (custom-made env): Reach the goal while dodging obstacles. | ![]() | ![]() ![]() |
Monte-Carlo Policy-Gradient | CartPole-v1: The goal is to balance the pole by applying forces to the cart in the left and right directions. | ![]() | ![]() |
REINFORCE with Baseline | CartPole-v1: The goal is to balance the pole by applying forces to the cart in the left and right directions. | ![]() | - |
One-Step Actor-Critic | CartPole-v1: The goal is to balance the pole by applying forces to the cart in the left and right directions. | ![]() | ![]() |
Policy Gradient on Continuous Actions (TODO) | - | - | - |
On-policy Control with Approximation (TODO) | - | - | - |
Off-policy Methods with Approximation (TODO) | - | - | - |
Eligibility Traces (TODO) | - | - | - |
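
The tabular methods above all share the same Gymnasium control loop. As a concrete reference, here is a minimal Q-learning sketch on CliffWalking-v0; the hyperparameters are illustrative assumptions, not the values behind the plots above:

```python
import gymnasium as gym
import numpy as np

# Hypothetical hyperparameters for illustration only.
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 1.0, 0.1, 500

env = gym.make("CliffWalking-v0")
Q = np.zeros((env.observation_space.n, env.action_space.n))

for _ in range(EPISODES):
    state, _ = env.reset()
    done = False
    while not done:
        # epsilon-greedy behaviour policy
        if np.random.rand() < EPSILON:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        # Q-learning bootstraps from the greedy next action;
        # no bootstrap once the episode has terminated.
        target = reward + GAMMA * np.max(Q[next_state]) * (not terminated)
        Q[state, action] += ALPHA * (target - Q[state, action])
        state = next_state
        done = terminated or truncated
env.close()
```

Swapping the `np.max(Q[next_state])` target for the value of the action actually taken next gives Sarsa; taking the epsilon-greedy expectation over `Q[next_state]` gives Expected Sarsa.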
Year | Paper | Environment (Name & Goal) | Environment GIF | Plots |
---|---|---|---|---|
2013 | Playing Atari with Deep Reinforcement Learning | ALE/Pong-v5: You control the right paddle and compete against the computer-controlled left paddle. Each player tries to deflect the ball away from their own goal and into the opponent's goal. | ![]() | ![]() |
2015 | Deep Deterministic Policy Gradient (DDPG) | Pendulum-v1: The pendulum starts in a random position, and the goal is to apply torque on the free end to swing it into an upright position, with its center of gravity right above the fixed point. | ![]() | ![]() |
2017 | Proximal Policy Optimization (PPO) -- Discrete Action Space | LunarLander-v3: A classic rocket trajectory optimization problem. According to Pontryagin's maximum principle, it is optimal to fire the engine at full throttle or turn it off. | ![]() | ![]() |
2017 | Proximal Policy Optimization (PPO) -- Continuous Action Space | HalfCheetah-v5: The goal is to apply torque to the joints to make the cheetah run forward (right) as fast as possible, with a positive reward based on the distance moved forward and a negative reward for moving backward. | ![]() | ![]() ![]() |
2018 | Soft Actor-Critic (SAC) | InvertedDoublePendulum-v5: The cart can be pushed left or right, and the goal is to balance the second pole on top of the first pole, which is in turn on top of the cart, by applying continuous forces to the cart. | ![]() | ![]() |
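
The value-based entries in this table all regress toward a bootstrapped TD target. As a minimal sketch (not this repo's code), here is the DQN loss in PyTorch; the `dqn_loss` helper and batch layout are illustrative assumptions, and the frozen target network shown was introduced in the 2015 Nature follow-up (the 2013 paper bootstrapped from the online network):

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Regress Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    states, actions, rewards, next_states, dones = batch
    # Q-values of the actions that were actually taken
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from the frozen target network;
        # zero the bootstrap term at terminal states.
        next_max = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_max * (1.0 - dones)
    return F.mse_loss(q_sa, targets)
```

Double Q-learning's fix applies here as well: select the argmax action with `q_net` but evaluate it with `target_net`, which reduces the overestimation bias of the max operator.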