
Shimmy 1.0.0: Shimmy Becomes Mature


Shimmy 1.0.0 Release Notes:

We are excited to announce the mature release of Shimmy, an API compatibility tool for converting external RL environments to the Gymnasium and PettingZoo APIs. This allows users to access a wide range of single- and multi-agent environments, all under a single standard API.

Within reinforcement learning (RL), a number of APIs are used to implement environments, with limited ability to convert between them. This makes training agents across different APIs very difficult. Shimmy addresses this issue by integrating a range of APIs into the Farama ecosystem. This is part of the Farama Foundation's broader goal of creating a unified and user-friendly ecosystem of open-source reinforcement learning software, for both research and industry.

We plan to maintain Shimmy for the long term, and we welcome new contributions and suggestions. Future plans include general DM Env and DM Lab2D support, and additional environments such as ai-safety-gridworlds.

Shimmy's documentation can be found at shimmy.farama.org. This includes an overview of each environment with screenshots, installation instructions, full usage scripts, and API information.

Environments

Single-agent (Gymnasium wrappers):

- DM Control (both Locomotion and the Control Suite)
- DM Lab
- Behavior Suite
- Arcade Learning Environment (Atari)
- OpenAI Gym (v0.21 and v0.26)

Multi-agent (PettingZoo wrappers):

- DM Control Soccer
- OpenSpiel
- Melting Pot

Single-agent environments can be easily loaded using Gymnasium’s registry and make() function as follows:

import gymnasium as gym
env = gym.make("dm_control/acrobot-swingup_sparse-v0", render_mode="human")

Multi-agent environments can be loaded with PettingZoo as follows:

from shimmy import MeltingPotCompatibilityV0
env = MeltingPotCompatibilityV0(substrate_name="prisoners_dilemma_in_the_matrix__arena", render_mode="human")
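
Once loaded, a multi-agent environment can be stepped like any other PettingZoo parallel environment. The loop below is a minimal sketch using random actions; the exact return signature of reset() varies across PettingZoo versions, so treat it as illustrative:

observations = env.reset()
while env.agents:
    # Sample a random action for every live agent (a policy would go here)
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()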

Breaking Changes

OpenspielCompatibilityV0 has been renamed to OpenSpielCompatibilityV0 (correct spelling of OpenSpiel)
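
For example, loading an OpenSpiel game now uses the corrected class name (a minimal sketch; the game name is illustrative):

from shimmy import OpenSpielCompatibilityV0
env = OpenSpielCompatibilityV0(game_name="chess", render_mode=None)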

Since the v0.2.1 release, the setup.py has been updated to include separate install options for gym v0.21 and v0.26:

- Instead of `pip install shimmy[gym]`, you must select either `pip install shimmy[gym-v21]` or `pip install shimmy[gym-v26]`
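
Once installed, a gym environment can be wrapped with the matching compatibility class. The sketch below assumes shimmy[gym-v26] and a gym v26 release are installed; the environment id is illustrative:

from shimmy import GymV26CompatibilityV0

# Wrap a gym v26 environment as a Gymnasium-API environment
env = GymV26CompatibilityV0(env_id="CartPole-v1")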

New Features and Improvements

This release adds support for three additional environments:

- DM Lab
- Behavior Suite
- Melting Pot

This release also expands automated testing to cover each environment (#51) and adds pickling tests (#53), ensuring that each environment can be serialized/deserialized via pickle.
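
In practice, the pickling tests verify a round trip along these lines (a hedged sketch; the environment id is illustrative):

import pickle
import gymnasium as gym

env = gym.make("dm_control/acrobot-swingup_sparse-v0")
restored = pickle.loads(pickle.dumps(env))  # serialize, then deserialize
observation, info = restored.reset(seed=42)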

Dockerfiles have been expanded to cover each environment (#52), located in /bin/. These are primarily used for automated testing, but can also be run locally (#65), allowing the environments to be used on any platform (see Getting Started: Docker for more information).

The DeepMind Lab and Melting Pot environments are not available in distributed releases (via PyPI or elsewhere), and thus cannot be easily installed via pip. For these environments, we provide full installation scripts for both macOS and Linux.

Bug Fixes and Documentation Updates

This release includes a major documentation overhaul, updating the project to comply with Farama Project Standards (#66). This includes a Getting Started page, with installation information, and a Basic Usage page, with reference information on using both single- and multi-agent environments.

Documentation has also been standardized to include descriptions of each environment, with links to documentation and related libraries, and images of the environment for reference (#43, #47, #49).

Full example usage scripts are now provided for each environment, allowing users to easily load and interact with an environment without prior knowledge.

Example: run a dm-control environment:

import gymnasium as gym

env = gym.make("dm_control/acrobot-swingup_sparse-v0", render_mode="human")

observation, info = env.reset(seed=42)
for _ in range(1000):
    action = env.action_space.sample()  # this is where you would insert your policy
    observation, reward, terminated, truncated, info = env.step(action)

    if terminated or truncated:
        observation, info = env.reset()
env.close()

Full Changelog: v0.2.1...v1.0.0