docs/Agents-Editor-Interface.md (1 addition, 1 deletion)
@@ -32,7 +32,7 @@ values (in _Discrete_ action space).
 * `Action Descriptions` - A list of strings used to name the available actions for the Brain.
 * `State Space Type` - Corresponds to whether state vector contains a single integer (Discrete) or a series of real-valued floats (Continuous).
 * `Action Space Type` - Corresponds to whether action vector contains a single integer (Discrete) or a series of real-valued floats (Continuous).
-* `Type of Brain` - Describes how Brain will decide actions.
+* `Type of Brain` - Describes how the Brain will decide actions.
 * `External` - Actions are decided using Python API.
 * `Internal` - Actions are decided using internal TensorflowSharp model.
 * `Player` - Actions are decided using Player input mappings.
docs/Organizing-the-Scene.md (1 addition, 1 deletion)
@@ -27,7 +27,7 @@ The Academy is responsible for:
 * Coordinating the Brains which must be set as children of the Academy.

 #### Brains
-Each brain corresponds to a specific Decision-making method. This often aligns with a specific neural network model. A Brains is responsible for deciding the action of all the Agents which are linked to it. There can be multiple brains in the same scene and multiple agents can subscribe to the same brain.
+Each brain corresponds to a specific Decision-making method. This often aligns with a specific neural network model. The brain is responsible for deciding the action of all the Agents which are linked to it. There can be multiple brains in the same scene and multiple agents can subscribe to the same brain.

 #### Agents
 Each agent within a scene takes actions according to the decisions provided by it's linked Brain. There can be as many Agents of as many types as you like in the scene. The state size and action size of each agent must match the brain's parameters in order for the Brain to decide actions for it.
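
The requirement above, that each Agent's state size and action size match its Brain's parameters, is the main constraint to check when wiring a scene together. As a rough illustration only (not part of this change), here is a minimal C# sketch in the style of the Agent API from this era of ML-Agents, with `CollectState` and `AgentStep` overrides and the inherited `reward`/`done` fields; the class name `ExampleAgent`, the observed values, and the assumed sizes (a 3-element continuous state, a 1-element continuous action) are all placeholders.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical Agent whose state/action sizes must match its linked Brain
// (assumed here: State Size = 3, Action Size = 1, both Continuous).
public class ExampleAgent : Agent
{
    Rigidbody body;

    void Start()
    {
        body = GetComponent<Rigidbody>();
    }

    public override List<float> CollectState()
    {
        // The length of this list must equal the Brain's State Size (3 here).
        List<float> state = new List<float>();
        state.Add(transform.position.x);
        state.Add(transform.position.z);
        state.Add(body.velocity.x);
        return state;
    }

    public override void AgentStep(float[] act)
    {
        // act.Length equals the Brain's Action Size (1 here).
        float force = Mathf.Clamp(act[0], -1f, 1f);
        body.AddForce(new Vector3(force, 0f, 0f));
    }

    public override void AgentReset()
    {
        transform.position = Vector3.zero;
        body.velocity = Vector3.zero;
    }
}
```

In the scene, a component like this would sit on an Agent GameObject whose Brain reference points at a Brain child of the Academy configured with a matching State Size and Action Size.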
docs/Unity-Agents-Overview.md (1 addition, 1 deletion)
@@ -2,7 +2,7 @@

 ![diagram](../images/agents_diagram.png)

-A visual depiction of how an Learning Environment might be configured within ML-Agents.
+A visual depiction of how a Learning Environment might be configured within ML-Agents.

 The three main kinds of objects within any Agents Learning Environment are:

docs/best-practices.md (3 additions, 3 deletions)
@@ -1,15 +1,15 @@
 # Environment Design Best Practices

 ## General
-* It is often helpful to being with the simplest version of the problem, to ensure the agent can learn it. From there increase
+* It is often helpful to start with the simplest version of the problem, to ensure the agent can learn it. From there increase
 complexity over time. This can either be done manually, or via Curriculum Learning, where a set of lessons which progressively increase in difficulty are presented to the agent ([learn more here](../docs/curriculum.md)).
-* When possible, It is often helpful to ensure that you can complete the task by using a Player Brain to control the agent.
+* When possible, it is often helpful to ensure that you can complete the task by using a Player Brain to control the agent.

 ## Rewards
 * The magnitude of any given reward should typically not be greater than 1.0 in order to ensure a more stable learning process.
 * Positive rewards are often more helpful to shaping the desired behavior of an agent than negative rewards.
 * For locomotion tasks, a small positive reward (+0.1) for forward velocity is typically used.
-* If you want the agent the finish a task quickly, it is often helpful to provide a small penalty every step (-0.05) that the agent does not complete the task. In this case completion of the task should also coincide with the end of the episode.
+* If you want the agent to finish a task quickly, it is often helpful to provide a small penalty every step (-0.05) that the agent does not complete the task. In this case completion of the task should also coincide with the end of the episode.
 * Overly-large negative rewards can cause undesirable behavior where an agent learns to avoid any behavior which might produce the negative reward, even if it is also behavior which can eventually lead to a positive reward.

 ## States
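
The reward guidance above maps fairly directly onto an Agent's step logic: keep per-step magnitudes well below 1.0, give a small bonus (+0.1) for forward velocity in locomotion tasks, apply a small penalty (-0.05) on every step the task is unfinished, and end the episode when the task completes. The sketch below is only an illustration of those numbers, again assuming the era-appropriate Agent API with a `reward` field and a `done` flag; `taskComplete` and the velocity source are hypothetical stand-ins, and the velocity bonus and time penalty are combined in one agent purely to show both recipes.

```csharp
using UnityEngine;

// Hypothetical reward shaping that follows the best-practice numbers above.
public class ShapedRewardAgent : Agent
{
    Rigidbody body;
    bool taskComplete;   // placeholder: set by task-specific logic elsewhere

    void Start()
    {
        body = GetComponent<Rigidbody>();
    }

    public override void AgentStep(float[] act)
    {
        if (taskComplete)
        {
            // Completion: reward capped at 1.0, and the episode ends here.
            reward = 1.0f;
            done = true;
            return;
        }

        // Locomotion recipe: small positive reward for forward velocity,
        // clamped so a single step never approaches the 1.0 ceiling.
        reward = Mathf.Clamp(0.1f * body.velocity.z, 0f, 0.1f);

        // Fast-completion recipe: small penalty every step the task is unfinished.
        reward -= 0.05f;
    }
}
```

As the General section above suggests, driving the same agent with a Player Brain is a quick way to confirm the task is actually completable, and that these reward signals fire when expected, before starting training.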