AgentNet

A lightweight library for building and training deep reinforcement learning agents and custom recurrent networks with Theano + Lasagne.

Launch the repository on Binder for an instant dive-in, no installation required.


What is AgentNet?

[figure: AgentNet structure]

No time to play games? Let machines do this for you!

AgentNet is a deep reinforcement learning framework designed for easy research and prototyping of deep learning models for Markov Decision Processes.

All techno-babble aside, you can use it to train your pet neural network to play games (e.g. in OpenAI Gym environments)!

AgentNet has full in-and-out support for the Lasagne deep learning library, giving you access to all of its convolutions, maxouts, poolings, dropouts, etc. etc. etc.

AgentNet handles both discrete and continuous control problems and supports hierarchical reinforcement learning [experimental].
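To give a concrete flavor of the Lasagne integration, here is a minimal sketch of the kind of Lasagne network an agent is typically built around. This is plain Lasagne, not AgentNet's own code; the layer sizes, names and the 8-observation / 4-action setup are made up for illustration.

```python
# Illustrative only: a tiny Lasagne Q-network; AgentNet adds the
# agent/recurrence machinery on top of networks like this one.
import lasagne
from lasagne.layers import InputLayer, DenseLayer
from lasagne.nonlinearities import rectify

observation = InputLayer((None, 8), name="observation")  # 8 observation features (arbitrary)
hidden = DenseLayer(observation, num_units=64, nonlinearity=rectify)
qvalues = DenseLayer(hidden, num_units=4,                 # one Q-value per action (4 here)
                     nonlinearity=None, name="qvalues")

# all trainable weights, ready to be passed to a Theano/Lasagne optimizer
weights = lasagne.layers.get_all_params(qvalues, trainable=True)
```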

List of already implemented reinforcement techniques:

  • Q-learning (or deep Q-learning, since networks of arbitrary complexity are supported)
  • N-step Q-learning
  • SARSA
  • N-step Advantage Actor-Critic (A2C)
  • N-step Deterministic Policy Gradient (DPG)
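For reference, the n-step methods above all bootstrap from the same kind of n-step return. Below is a plain-numpy sketch of how such reference values can be computed over a rollout; it is not AgentNet's own code, and the function name and signature are illustrative.

```python
import numpy as np

def n_step_returns(rewards, is_done, last_state_value, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, scanned backwards over a rollout;
    the recursion is bootstrapped from V(s_last) and cut at episode ends."""
    returns = np.zeros(len(rewards), dtype="float32")
    running = last_state_value
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running * (1.0 - is_done[t])
        returns[t] = running
    return returns

# e.g. a 3-step rollout with no episode end, bootstrapped from V(s_last) = 1.0
print(n_step_returns([0.0, 0.0, 1.0], [0, 0, 0], last_state_value=1.0))
```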

As a side-quest, we also provide boilerplate for building custom long-term memory network architectures (see the examples; a small illustration follows below).
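As an illustration of what "long-term memory" means here, a recurrent layer carries a hidden state across time steps. Again, this is plain Lasagne rather than the AgentNet boilerplate itself, and the sizes are arbitrary.

```python
# Illustrative only: a recurrent network with a 32-unit GRU memory.
from lasagne.layers import InputLayer, GRULayer, DenseLayer

obs_seq = InputLayer((None, None, 8))                    # (batch, time, features)
memory = GRULayer(obs_seq, num_units=32,
                  only_return_final=True)                # keep only the final memory state
qvalues = DenseLayer(memory, num_units=4, nonlinearity=None)
```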

Installation

Detailed installation guide

Try without installing

Quick install

Full install (with examples)

  1. Clone this repository: git clone https://github.com/yandexdataschool/AgentNet.git && cd AgentNet
  2. Install dependencies: pip install -r requirements.txt
  3. Install the library itself: pip install -e .
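A quick way to check that the install worked: the imports below should succeed after the steps above (the printed path simply points at your editable install).

```python
# sanity check after installation
import theano
import lasagne
import agentnet

print(agentnet.__file__)  # should point into your cloned AgentNet directory
```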

Docker container

On Windows/OSX, install Docker Kitematic, then run the justheuristic/agentnet container and click 'web preview'.

On Linux/Unix systems:

  1. Install Docker.
  2. Make sure the docker daemon is running (sudo service docker start).
  3. Make sure no application is using port 1234 (the default port used below; change it if needed).
  4. [sudo] docker run -d -p 1234:8888 justheuristic/agentnet
  5. Open localhost:1234 in your browser.

Documentation and tutorials

For a quick dive-in:

  • Launch the repository on Binder
  • classwork.ipynb is your tutorial
  • classwork_solution.ipynb is a fully implemented version with a simple CNN, for reference

Documentation pages (still a work in progress) are available online.

AgentNet also ships with full embedded documentation: calling help(some_function_or_object) or pressing shift+tab in IPython yields a description of the object or function.
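For example, from a Python or IPython shell (the class or function you inspect is whatever you happen to have imported):

```python
import agentnet

help(agentnet)  # package-level overview from the embedded docstrings
# help(SomeAgentNetClass) works the same way for anything you import from the library
```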

The standard pipeline of an AgentNet experiment is demonstrated in the examples below.

Advanced examples

If you wish to get acquainted with the current state of the library, browse the notebooks in ./examples.

AgentNet is under active construction, so expect things to change. If you wish to join the development, we'd be happy to accept your help.
