
[RLLib] ParallelPettingZooEnv wrapper fails rllib.utils.check_env() #39453

Closed
chrisyeh96 opened this issue Sep 8, 2023 · 1 comment · Fixed by #39459
Assignees: sven1977
Labels: bug, P1.5 (will be fixed in a couple releases), rllib, rllib-env, rllib-multi-agent

Comments

@chrisyeh96

What happened + What you expected to happen

I would expect that wrapping a PettingZoo ParallelEnv in RLlib's own ParallelPettingZooEnv wrapper would produce an environment that passes RLlib's own check_env() test. However, this is not the case. See the minimal reproduction script below; the error output is copied here as well.

Error output
2023-09-08 08:35:40,708	WARNING multi_agent_env.py:274 -- observation_space_sample() of <ParallelPettingZooEnv instance> has not been implemented. You can either implement it yourself or bring the observation space into the preferred format of a mapping from agent ids to their individual observation spaces. 
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/ray/rllib/utils/pre_checks/env.py in check_env(env, config)
     80         if isinstance(env, MultiAgentEnv):
---> 81             check_multiagent_environments(env)
     82         elif isinstance(env, VectorEnv):

3 frames
ValueError: The element returned by reset() has agent_ids that are not the names of the agents in the env. 
Agent_ids in this MultiAgentDict: ['player_0', 'player_1']
Agent_ids in this env:[]. You likely need to add the private attribute `_agent_ids` to your env, which is a set containing the ids of agents supported by your env.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/ray/rllib/utils/pre_checks/env.py in check_env(env, config)
     94     except Exception:
     95         actual_error = traceback.format_exc()
---> 96         raise ValueError(
     97             f"{actual_error}\n"
     98             "The above error has been found in your environment! "

ValueError: Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/ray/rllib/utils/pre_checks/env.py", line 81, in check_env
    check_multiagent_environments(env)
  File "/usr/local/lib/python3.10/dist-packages/ray/rllib/utils/pre_checks/env.py", line 320, in check_multiagent_environments
    _check_if_element_multi_agent_dict(env, reset_obs, "reset()")
  File "/usr/local/lib/python3.10/dist-packages/ray/rllib/utils/pre_checks/env.py", line 785, in _check_if_element_multi_agent_dict
    raise ValueError(error)
ValueError: The element returned by reset() has agent_ids that are not the names of the agents in the env. 
Agent_ids in this MultiAgentDict: ['player_0', 'player_1']
Agent_ids in this env:[]. You likely need to add the private attribute `_agent_ids` to your env, which is a set containing the ids of agents supported by your env.

The above error has been found in your environment! We've added a module for checking your custom environments. It may cause your experiment to fail if your environment is not set up correctly. You can disable this behavior via calling `config.environment(disable_env_checking=True)`. You can run the environment checking module standalone by calling ray.rllib.utils.check_env([your env]).
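The failing check is easy to reproduce in isolation. Below is a minimal, standalone sketch of the comparison that the checker performs (an illustration of the logic, not RLlib's actual `_check_if_element_multi_agent_dict` code): the keys of the dict returned by reset() are compared against the env's `_agent_ids` set, which the wrapper leaves empty.

```python
def check_agent_ids(reset_obs: dict, env_agent_ids: set) -> None:
    """Raise if reset() returned observations for agent ids the env
    does not declare (a sketch of RLlib's agent-id check)."""
    unknown = set(reset_obs) - env_agent_ids
    if unknown:
        raise ValueError(
            "The element returned by reset() has agent_ids that are not "
            f"the names of the agents in the env: {sorted(unknown)}"
        )

# Toy observations keyed by agent id, as reset() would return them:
reset_obs = {"player_0": 0, "player_1": 1}

# The wrapper's _agent_ids set is empty, so every observed agent id
# looks "unknown" and the check raises:
try:
    check_agent_ids(reset_obs, env_agent_ids=set())
    failed = False
except ValueError:
    failed = True
print(failed)  # True: an empty _agent_ids set makes the check fail

# Declaring the agent ids, as the error message suggests, makes it pass:
check_agent_ids(reset_obs, env_agent_ids={"player_0", "player_1"})
```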

Root Issue

The root cause of this error is that RLlib's ParallelPettingZooEnv wrapper does not adopt the so-called "preferred format" of RLlib's own MultiAgentEnv API. Instead, it requires all agents' spaces to be identical and collapses them into a single shared space:

self._obs_space_in_preferred_format = False
self._action_space_in_preferred_format = False
# Collect the individual agents' spaces (they should all be the same):
first_obs_space = self.env.observation_space(self.env.agents[0])
first_action_space = self.env.action_space(self.env.agents[0])
for agent in self.env.agents:
    if self.env.observation_space(agent) != first_obs_space:
        raise ValueError(
            "Observation spaces for all agents must be identical. Perhaps "
            "SuperSuit's pad_observations wrapper can help (usage: "
            "`supersuit.aec_wrappers.pad_observations(env)`)."
        )
    if self.env.action_space(agent) != first_action_space:
        raise ValueError(
            "Action spaces for all agents must be identical. Perhaps "
            "SuperSuit's pad_action_space wrapper can help (usage: "
            "`supersuit.aec_wrappers.pad_action_space(env)`)."
        )
# Convert from gym to gymnasium, if necessary.
self.observation_space = convert_old_gym_space_to_gymnasium_space(
    first_obs_space
)
self.action_space = convert_old_gym_space_to_gymnasium_space(first_action_space)
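For contrast, here is a minimal sketch of what the "preferred format" looks like: a mapping from agent id to that agent's individual space, so that space sampling per agent is trivial. This uses plain-Python stand-ins for Gymnasium spaces; the class and attribute names are illustrative, not RLlib's actual implementation (in RLlib the mapping would be a gymnasium.spaces.Dict).

```python
import random
from dataclasses import dataclass


@dataclass
class Discrete:
    """Stand-in for a Gymnasium Discrete space (illustrative only)."""
    n: int

    def sample(self) -> int:
        return random.randrange(self.n)


class PreferredFormatEnv:
    """Toy env whose spaces follow the preferred format: a dict keyed
    by agent id, rather than one space shared by all agents."""

    def __init__(self):
        self._agent_ids = {"player_0", "player_1"}
        # Per-agent spaces keyed by agent id.
        self.observation_space = {aid: Discrete(4) for aid in self._agent_ids}
        self.action_space = {aid: Discrete(3) for aid in self._agent_ids}

    def observation_space_sample(self) -> dict:
        # With per-agent spaces, sampling is a dict comprehension --
        # exactly the shape that check_env() warns it cannot derive
        # from a single shared space.
        return {aid: sp.sample() for aid, sp in self.observation_space.items()}


env = PreferredFormatEnv()
sample = env.observation_space_sample()
print(sorted(sample))  # ['player_0', 'player_1']
```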

Recommendations

  1. Update the ParallelPettingZooEnv wrapper to use the preferred format.

  2. Add a unit test in test_pettingzoo_env.py to ensure that a wrapped PettingZoo environment passes RLlib's own check_env() function.

Why this is important

As a creator of several multi-agent RL environments that follow the PettingZoo API, I would love to use RLlib's check_env() function as a unit test / sanity check for my own environments, to ensure that users can train RLlib algorithms on them. Ideally, I'd like the following to work:

Implementing a new multi-agent RL environment:

from pettingzoo.utils.env import ParallelEnv

class MyEnv(ParallelEnv):
    ...

Unit test

import unittest

from ray.rllib.env.wrappers.pettingzoo_env import ParallelPettingZooEnv
import ray.rllib.utils

class TestMyEnv(unittest.TestCase):
    def test_rllib_check_env(self):
        env = MyEnv()
        rllib_env = ParallelPettingZooEnv(env)
        ray.rllib.utils.check_env(rllib_env)

Versions / Dependencies

ray[rllib]==2.6.3
pettingzoo==1.22.3 (this is the latest version of pettingzoo that ray[rllib]==2.6.3 is compatible with)

Reproduction script

Run the following in any Jupyter notebook or Google Colab notebook:

!pip install "ray[rllib]==2.6.3" "pettingzoo[classic]"

from pettingzoo.classic import rps_v2
from ray.rllib.env.wrappers.pettingzoo_env import ParallelPettingZooEnv
import ray.rllib.utils

env = rps_v2.parallel_env(render_mode="human")
rllib_env = ParallelPettingZooEnv(env)
ray.rllib.utils.check_env(rllib_env)

Issue Severity

None

chrisyeh96 added the bug and triage labels on Sep 8, 2023
sven1977 added the rllib, rllib-multi-agent, rllib-env, and P1.5 labels, removed the triage label, and self-assigned this on Sep 8, 2023
sven1977 (Contributor) commented Sep 8, 2023

PR in review. Thanks for raising this issue @chrisyeh96 ! :)
