Skip to content

noanabeshima/mesa-misalignment-environment

Repository files navigation

Mesa Misalignment Environment

An environment for producing mesa misalignment, originally proposed by Matthew Barnett here. For an introduction to the concept of mesa-misalignment, see here and/or here. See also here for a previous implementation of this and similar environments and more information about objective robustness.

Basically the task is to pick up keys to unlock chests, but the RL agent learns to pathfind to keys instead while mostly ignoring chests.

Agent in Train Environment Agent in Test Environment
Agent in Train Environment Agent in Test Environment

Environment

The environment is a maze/gridworld. Every time the player lands on a key, it is automatically picked up. More than one key can be stored at a time. If the player steps on a chest and has at least one key, the chest is open, a key is lost, and the player gets +1 reward.

There are two variants of this environment, the 'train' environment and the 'test' environment.

In the training environment there are many chests and few keys, so once a player picks up a key they'll most often step on a chest tile without needing to pathfind to one. This means that a reinforcement learning agent doesn't need to learn to go towards chests in order to succeed in the training environment, it only needs to learn the proxy task of pathfinding to keys.

Training and Evaluation

I've used a stable baselines implementation of PPO to train a policy in the test environment. I then check performance in the test and train environments.

To see whether or not the policy is 'trying' to pick up chests, I put hidden chests in the environment that are distributed in the same way as the visible chests, except that they are hidden from the policy. They can overlap with visual chests, but not with keys. To get the human data, I tried to follow the given objectives myself.

The resultant data is here:

And here is a link to the corresponding spreadsheet.


Installation

To use this repository, first install conda. Then run

conda env -n mesa-misalignment-env python=3.7

conda activate mesa-misalignment-env

pip install -r cpu-requirements.txt

OR

pip install -r gpu-requirements.txt

if you're using a GPU.

Use python <file>.py --help for information on how to play the environment yourself (play.py), run a policy for some number of episodes to collect data (run_policy.py), or replicate the training process (train_policy.py).


More samples of the learned policy and a random policy can be found in mesa-misalignment-environment/policy_gifs.

Thank you to 0x72 for almost all of the art! It was downloaded from here.

About

A simple RL environment for producing mesa misalignment.

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published

Languages