`reinforce` is a library which exports an OpenAI-Gym-like typeclass, `MonadEnv`, with both an interface to the gym-http-api and Haskell-native environments, which provide a substantial speed-up over the HTTP-server interface.
This is an environment-first library, with basic reinforcement learning algorithms being developed on branches in subpackages (see Development and Milestones below).
`reinforce` is currently an "alpha" release: it still needs some work defining formal structures around what state spaces and action spaces should look like. However, Haskell's type system is expressive enough that this seems to be more of a "nice-to-have."
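To give a flavor of what a gym-like typeclass can look like, here is a hypothetical, simplified sketch. The class shape below is an illustration only; the real `Control.MonadEnv` API in this library differs in its details, though the `Initial` and `Obs` constructors mirror the ones used in the example agent further down.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}

-- NOTE: hypothetical sketch, not reinforce's actual interface.
module MonadEnvSketch where

-- What a reset yields: a first observation, or nothing if the
-- episode cannot start.
data Initial s = Initial s | EmptyEpisode

-- What a step yields: a reward and next state, a terminal reward
-- with a possible final state, or a hard termination.
data Obs r s
  = Next r s
  | Done r (Maybe s)
  | Terminated

-- An environment monad e over states s, actions a, and rewards r.
class Monad e => MonadEnv e s a r | e -> s a r where
  reset :: e (Initial s)
  step  :: a -> e (Obs r s)
```

An agent written against such a class only ever calls `reset` and `step`, so it stays agnostic to whether the environment lives in-process or behind an HTTP server.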
This repo is in active development and has some beginner-friendly contribution opportunities, from porting new gym environments to implementing new algorithms. Because this library is not on Hackage, if you would like to see the haddocks, you can find them here.
An example agent
In `examples/`, you can find an agent which showcases some of the functionality of this library.
```haskell
module Main where

import Reinforce.Prelude
-- ^ NoImplicitPrelude is on

import Environments.CartPole (Environment, runEnvironment_)
import Control.MonadEnv (Initial(..), Obs(..))
import qualified Control.MonadEnv as Env (step, reset)
import qualified Environments.CartPole as Env (StateCP)
-- Comments:
--   StateCP - An "observation" or "the state of the agent" - note that
--             State is overloaded, so StateCP
--   Action  - A performable action in the environment.
import qualified Reinforce.Spaces.Action as Actions (randomChoice)

main :: IO ()
main = runEnvironment_ gogoRandomAgent
  where
    gogoRandomAgent :: Environment ()
    gogoRandomAgent = forM_ [0..maxEpisodes] $ \_ ->
      Env.reset >>= \case
        -- \case comes from LambdaCase. Sugar for: \a -> case a of ...
        EmptyEpisode -> pure ()
        Initial obs  -> do
          liftIO . print $ "Initialized episode and am in state " ++ show obs
          rolloutEpisode obs 0

    maxEpisodes :: Int
    maxEpisodes = 100

    -- this is usually the structure of a rollout:
    rolloutEpisode :: Env.StateCP -> Double -> Environment ()
    rolloutEpisode obs totalRwd = do
      a <- liftIO Actions.randomChoice
      Env.step a >>= \case
        Terminated  -> pure ()
        Done r mobs ->
          liftIO . print $ "Done! final reward: " ++ show (totalRwd+r)
                        ++ ", final state: " ++ show mobs
        Next r obs' -> do
          liftIO . print $ "Stepped with " ++ show a
                        ++ " - reward: " ++ show r
                        ++ ", next state: " ++ show obs'
          rolloutEpisode obs' (totalRwd+r)
```
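The reset/step/rollout pattern above generalizes beyond CartPole. As a self-contained illustration (using only `base`, with a toy countdown environment rather than reinforce's real types; none of the names below come from the library), the core of a rollout is just threading accumulated reward through recursive `step` calls:

```haskell
-- Toy, self-contained sketch of the rollout pattern (not reinforce's API).
module Main where

-- A step either continues with a reward and next state, or finishes
-- with a final reward.
data Obs r s = Next r s | Done r
  deriving Show

-- Toy environment: the state counts down to zero, paying reward 1 per step.
step :: Int -> Obs Double Int
step s
  | s <= 1    = Done 1.0
  | otherwise = Next 1.0 (s - 1)

-- Accumulate reward until the episode terminates.
rollout :: Int -> Double -> Double
rollout s acc =
  case step s of
    Done r    -> acc + r
    Next r s' -> rollout s' (acc + r)

main :: IO ()
main = print (rollout 5 0)  -- 5 steps at reward 1 each, prints 5.0
```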
You can build and run this with the following commands:
```
git clone https://github.com/Sentenai/reinforce
cd reinforce
stack build
stack exec random-agent-example
```
Note that if you want to run a gym environment, you'll have to run the openai/gym-http-api server with the following steps:
```
git clone https://github.com/openai/gym-http-api
cd gym-http-api
pip install -r requirements.txt
python ./gym_http_server.py
```
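Once the server is up (it listens on `http://127.0.0.1:5000` by default), you can sanity-check it from another shell. The routes below belong to gym-http-api itself, not to this library:

```shell
# Create a CartPole-v0 instance over HTTP, then list active environments.
curl -s -X POST http://127.0.0.1:5000/v1/envs/ \
     -H "Content-Type: application/json" \
     -d '{"env_id": "CartPole-v0"}'
curl -s http://127.0.0.1:5000/v1/envs/
```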
Currently, development has been primarily focused on classic control, so if you want to add any of the Atari environments, this would be an easy contribution!
`reinforce` isn't on Hackage or Stackage (yet), so your best bet is to add this git repo to your `stack.yaml` file:
```yaml
packages:
- '.'
- location:
    git: firstname.lastname@example.org:Sentenai/reinforce.git
    commit: 'v0.0.1'
  extra-dep: true
# This is a requirement due to some tight coupling of the gym-http-api
- location:
    git: https://github.com/stites/gym-http-api.git
    commit: '5b72789'
    subdirs:
    - binding-hs
  extra-dep: true
- ...
```
and add `reinforce` to the dependencies in your cabal file or `package.yaml` (recommended).
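For example, a consuming project's `package.yaml` might then simply list it (the stanza below is illustrative, not taken from this repo):

```yaml
# package.yaml of a hypothetical project depending on reinforce
dependencies:
- base
- reinforce
```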
Development and Milestones
If you want to contribute, you're in luck! There is a range of things to do, from tasks for the beginner Haskeller all the way to ones for advanced Pythonistas!
While you can check the GitHub issues, here are some items off the top of my head which could use some immediate attention (and may also need to be filed).
A few quick environment contributions might be the following:
- #1 (easy) - Add an Atari environment to the API (like Pong! Others might require directly committing to …)
- #8 (med) - Port Richard Sutton's Acrobot code to haskell
- #6 (hard) - Break the dependency on the openai/gym-http-api server -- this would speed up performance considerably
- #9 (harder) - Render the haskell CartPole environment with SDL
Some longer-running algorithmic contributions, which would take place on the deep-rl branches, might be:
- #10 (easy) - Convert algorithms into agents
- #11 (med) - Add a testable "convergence" criteria
- #12 (med) - Implement some eligibility trace variants to the
- #13 (med) - Add some policy gradient methods to the
- #14 (hard) - Head over to the deep-rl branch and convert some of the deep reinforcement learning models into Haskell with tensorflow-haskell and/or backprop
For a longer-term view, feel free to check out Milestones.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!