Improved on the documentation of explorers #83

Merged
merged 2 commits into from Aug 5, 2012
24 changes: 23 additions & 1 deletion pybrain/rl/explorers/explorer.py
@@ -8,6 +8,28 @@ class Explorer(Module):
""" An Explorer object is used in Agents; it receives the current state
and action (from the controller Module) and returns an explorative
action that is executed instead of the given action.

Continuous explorers will produce continuous actions, discrete
ones discrete actions accordingly.

==============================  ==========  =========
Explorer                        action      episodic?
==============================  ==========  =========
NormalExplorer                  continuous  no
StateDependentExplorer          continuous  yes
BoltzmannExplorer               discrete    no
EpsilonGreedyExplorer           discrete    no
DiscreteStateDependentExplorer  discrete    yes
==============================  ==========  =========


The explorer has to be added to the learner before the learner
is added to the LearningAgent.

For example::

    controller = ActionValueNetwork(2, 100)
    learner = SARSA()
    learner.explorer = NormalExplorer(1, 0.1)
    agent = LearningAgent(controller, learner)
"""

def activate(self, state, action):
@@ -20,4 +42,4 @@ def activate(self, state, action):

def newEpisode(self):
""" Inform the explorer about the start of a new episode. """
pass
pass
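The interface documented in the diff above can be illustrated with a minimal, stand-alone sketch. This is an assumption-laden toy version of an epsilon-greedy explorer (it does not use pybrain's actual `EpsilonGreedyExplorer` or the `Module` base class): `activate()` maps the controller's chosen action to an explorative one, and `newEpisode()` resets any per-episode state.

```python
import random


class EpsilonGreedyExplorer:
    """Toy sketch of the Explorer interface (not the pybrain class):
    with probability epsilon, replace the greedy action by a random one."""

    def __init__(self, num_actions, epsilon=0.1, seed=None):
        self.num_actions = num_actions
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def activate(self, state, action):
        # Return an explorative action that is executed instead of
        # the action chosen by the controller.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.num_actions)
        return action

    def newEpisode(self):
        # This explorer keeps no per-episode state, so there is
        # nothing to reset at the start of a new episode.
        pass


explorer = EpsilonGreedyExplorer(num_actions=4, epsilon=0.0)
print(explorer.activate(state=None, action=2))  # epsilon=0 -> greedy action 2
```

With `epsilon=0` the explorer is purely greedy and always passes the controller's action through; raising epsilon trades exploitation for exploration.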