agymc


For reinforcement learning and concurrency lovers out there ...

TL;DR

  • Mostly the same API as gym, except that multiple environments now run at once.
  • Environments run concurrently, which means a speedup for time-consuming operations such as backprop, rendering, etc.

Intro

This is a concurrent wrapper for the OpenAI Gym library that runs multiple environments concurrently, which means faster training* without consuming more CPU power.

What exactly is concurrency?

Maybe you have heard of parallel computing? When we execute things in parallel, we run the program on multiple processors, which offers a significant speedup. Concurrent computing has a broader meaning, though. A concurrent program is one designed not to execute sequentially, in the hope that it will one day be executed in parallel**. A concurrent program can run on a single processor or on multiple processors. Its tasks may communicate with each other, but each keeps a private state hidden from the others.

Can concurrency be applied on a single processor?

Yes. Concurrency means splitting the program into smaller subprograms, allowing some parts of the code to execute asynchronously. Some tasks, by nature, take a long time to complete: downloading a file, for example. Without concurrency, the processor would have to wait for one task to complete before starting the next. With concurrency, we can temporarily suspend the current task and come back when it finishes, without using extra computing power.***
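Here is a minimal sketch of that idea using Python's asyncio (not agymc-specific; the slow "download" is simulated with asyncio.sleep):

import asyncio
import time

async def download(name, seconds):
    # asyncio.sleep yields control back to the event loop, standing in
    # for a slow I/O-bound task such as downloading a file
    print(name, "started")
    await asyncio.sleep(seconds)
    print(name, "finished")

async def main():
    start = time.perf_counter()
    # three "downloads" run concurrently on a single processor;
    # the waits overlap, so this takes about 1 second, not 3
    await asyncio.gather(download("a", 1), download("b", 1), download("c", 1))
    print("elapsed:", round(time.perf_counter() - start, 2), "seconds")

asyncio.run(main())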

So much for introducing concurrency... now, what is gym?

OpenAI Gym is a Python library that helps with reinforcement learning research. Reinforcement learning is a branch of control theory, focusing mainly on agents interacting with environments, and OpenAI Gym provides numerous environments for people to benchmark their beloved reinforcement learning algorithms. For your agents to train in a gym, as they say.
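For reference, a typical single-environment loop with plain gym looks like this (the classic API, where step returns an (observation, reward, done, info) tuple):

import gym

env = gym.make("CartPole-v0")
observation = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy, for illustration
    observation, reward, done, info = env.step(action)
env.close()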

Um, so why do we need agymc, you say?

Despite its merits, OpenAI Gym has one major drawback: it is designed to run only one environment on a processor at a time. What if you want to run multiple environments on the same processor? They will run, but sequentially, which means slow training if you want to train an agent in batches. The sketch below shows the sequential behaviour agymc avoids.
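With plain gym, the only option is to loop over environments, so every slow call holds up all the rest:

import gym

# with plain gym, environments are stepped one after another;
# a slow call (render, backprop, ...) in one blocks all the others
envs = [gym.make("CartPole-v0") for _ in range(4)]
observations = [env.reset() for env in envs]
actions = [env.action_space.sample() for env in envs]
for env, action in zip(envs, actions):
    env.step(action)  # sequential: total time is the sum over all envs
for env in envs:
    env.close()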

Experiments

Using env.render as our bottlenecking operation and running 200 environments, agymc completes 50 episodes in 4 minutes, while the naive gym version takes around twice as long. This is what the madness looks like:

[screenshot: 200 environments rendering at once]

Wow, how do I use agymc?

agymc, which combines the power of Python's async API and OpenAI Gym (hence the name), is designed for users to make minimal changes to their OpenAI Gym code. All usages are the same, except that returns now come in batches (lists), and several environments now run concurrently. A minimal sketch follows, and a full example is further below!
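Inferred from the full example below (so treat the exact signatures as illustrative), the single-environment loop from earlier becomes:

import agymc

num_envs = 4
envs = agymc.make("CartPole-v0", num_envs)  # extra argument: how many envs

envs.reset()
actions = envs.action_space.sample()  # a batch of actions, one per env
# every return value is now a list with one entry per environment
observations, rewards, dones, infos = envs.step(actions)
envs.close()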

Sounds nice. How do I get it?


pip3 install agymc

And that's it! Hooray!

Example Usage Code Snippet

import argparse
import asyncio

import agymc

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--num-envs", type=int)
    parser.add_argument("--episodes", type=int)
    parser.add_argument("--render", action="store_true")
    parser.add_argument("--verbose", action="store_true")
    flags = parser.parse_args()

    num_envs = flags.num_envs
    num_episodes = flags.episodes
    render = flags.render
    verbose = flags.verbose

    envs = agymc.make("CartPole-v0", num_envs)
    if verbose:
        import tqdm

        iterable = tqdm.tqdm(range(num_episodes))
    else:
        iterable = range(num_episodes)
    for _ in iterable:
        done = [False] * num_envs
        envs.reset()
        while not all(done):
            if render:
                envs.render()
            action = envs.action_space.sample()
            # using asyncio.sleep to simulate a time-consuming workload;
            # an ordinary sleep would block the current thread,
            # however we wrapped the environment in a rather nice way
            # such that concurrency still applies
            # the result: it won't block.
            # also worth noting that this "blocking call"
            # runs faster than having the function do nothing,
            # presumably because asyncio.sleep forces the event loop
            # to schedule things more nicely
            def function(number):
                asyncio.create_task(asyncio.sleep(1))

            _ = envs.parallel(function, [num_envs * [1]])
            (_, _, done, _) = envs.step(action)
    envs.close()
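Saved as, say, example.py (a hypothetical file name), the snippet can be run with the flags it defines:

python3 example.py --num-envs 200 --episodes 50 --render --verbose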

* When doing pure gym operations such as sampling and stepping, this library runs slower, since it is a wrapper around gym. However, for actions that take a while to execute, such as backprop and update, sending data back and forth, or even rendering, concurrency makes the operations execute much faster than a naive gym implementation.

** If you would like to learn more about concurrency patterns, this video is really informative.

*** Without using extra computing power, save for a small scheduling overhead, which every computer incurs and whose cost is hard to profile.
