ray.rllib.agents.callbacks has been deprecated. Use ray.rllib.algorithms.callbacks instead. #2

Open
mlDaddy opened this issue Apr 3, 2023 · 1 comment

mlDaddy commented Apr 3, 2023

Screenshot: https://user-images.githubusercontent.com/118107900/229572253-90f571ad-cea9-4949-80e7-06e745f1f3bc.png

Hi, I have installed all the libraries following the README file (Ray==2.3.1), but the following error appears. Can you please share which version of Ray you used, or guide me on what I am doing wrong here?
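
For reference, the message in the screenshot is asking for an import change. A rough sketch of that change, assuming the project subclasses DefaultCallbacks somewhere (the class name below is a placeholder, not necessarily what LearnToMoveUR3 uses):

    # Old import location (deprecated in Ray 2.x):
    # from ray.rllib.agents.callbacks import DefaultCallbacks

    # New import location (Ray 2.x):
    from ray.rllib.algorithms.callbacks import DefaultCallbacks

    class MyCallbacks(DefaultCallbacks):  # placeholder name
        def on_episode_end(self, *, worker, base_env, policies, episode, **kwargs):
            # project-specific episode metrics/logging would go here
            pass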


mlDaddy commented Apr 3, 2023

When I make the above change in the utils.rllib.py file, the following errors pop up the next time I try to run the pre-trained model.

(savera) adnan@adnan:~/ws/savera/LearnToMoveUR3$ python main.py --env-id reach --load-from pretrained_models/reach --test
2023-04-03 21:40:36,245 WARNING deprecation.py:51 -- DeprecationWarning: rllib.agents::Trainer has been deprecated. Use rllib.algorithms::Algorithm instead. This will raise an error in the future!
2023-04-03 21:40:39,445 INFO worker.py:1553 -- Started a local Ray instance.
2023-04-03 21:40:42,168 WARNING deprecation.py:51 -- DeprecationWarning: algo = Algorithm(env='reach', ...) has been deprecated. Use algo = AlgorithmConfig().environment('reach').build() instead. This will raise an error in the future!
2023-04-03 21:40:42,203 INFO algorithm.py:507 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
(RolloutWorker pid=18990) 2023-04-03 21:40:49,307 WARNING deprecation.py:51 -- DeprecationWarning: rllib.agents::Trainer has been deprecated. Use rllib.algorithms::Algorithm instead. This will raise an error in the future!
(RolloutWorker pid=18989) 2023-04-03 21:40:49,657 WARNING deprecation.py:51 -- DeprecationWarning: rllib.agents::Trainer has been deprecated. Use rllib.algorithms::Algorithm instead. This will raise an error in the future!
(RolloutWorker pid=18988) 2023-04-03 21:40:49,686 WARNING deprecation.py:51 -- DeprecationWarning: rllib.agents::Trainer has been deprecated. Use rllib.algorithms::Algorithm instead. This will raise an error in the future!
(RolloutWorker pid=18987) 2023-04-03 21:40:49,879 WARNING deprecation.py:51 -- DeprecationWarning: rllib.agents::Trainer has been deprecated. Use rllib.algorithms::Algorithm instead. This will raise an error in the future!
(RolloutWorker pid=18989) 2023-04-03 21:40:53,140 ERROR worker.py:772 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=18989, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f1059d50a10>)
(RolloutWorker pid=18989) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
(RolloutWorker pid=18989) raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
(RolloutWorker pid=18989) ValueError: Your environment () does not abide to the new gymnasium-style API!
(RolloutWorker pid=18989) From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) Learn more about the most important changes here:
(RolloutWorker pid=18989) https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) In order to fix this problem, do the following:
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) 1) Run pip install gymnasium on your command line.
(RolloutWorker pid=18989) 2) Change all your import statements in your code from
(RolloutWorker pid=18989) import gym -> import gymnasium as gym OR
(RolloutWorker pid=18989) from gym.space import Discrete -> from gymnasium.spaces import Discrete
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) For your custom (single agent) gym.Env classes:
(RolloutWorker pid=18989) 3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import
(RolloutWorker pid=18989) EnvCompatibility wrapper class.
(RolloutWorker pid=18989) 3.2) Alternatively to 3.1:
(RolloutWorker pid=18989) - Change your reset() method to have the call signature 'def reset(self, *,
(RolloutWorker pid=18989) seed=None, options=None)'
(RolloutWorker pid=18989) - Return an additional info dict (empty dict should be fine) from your reset()
(RolloutWorker pid=18989) method.
(RolloutWorker pid=18989) - Return an additional truncated flag from your step() method (between done and
(RolloutWorker pid=18989) info). This flag should indicate, whether the episode was terminated prematurely
(RolloutWorker pid=18989) due to some time constraint or other kind of horizon setting.
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) For your custom RLlib MultiAgentEnv classes:
(RolloutWorker pid=18989) 4.1) Either wrap your old MultiAgentEnv via the provided
(RolloutWorker pid=18989) from ray.rllib.env.wrappers.multi_agent_env_compatibility import
(RolloutWorker pid=18989) MultiAgentEnvCompatibility wrapper class.
(RolloutWorker pid=18989) 4.2) Alternatively to 4.1:
(RolloutWorker pid=18989) - Change your reset() method to have the call signature
(RolloutWorker pid=18989) 'def reset(self, *, seed=None, options=None)'
(RolloutWorker pid=18989) - Return an additional per-agent info dict (empty dict should be fine) from your
(RolloutWorker pid=18989) reset() method.
(RolloutWorker pid=18989) - Rename dones into terminateds and only set this to True, if the episode is really
(RolloutWorker pid=18989) done (as opposed to has been terminated prematurely due to some horizon/time-limit
(RolloutWorker pid=18989) setting).
(RolloutWorker pid=18989) - Return an additional truncateds per-agent dictionary flag from your step()
(RolloutWorker pid=18989) method, including the __all__ key (100% analogous to your dones/terminateds
(RolloutWorker pid=18989) per-agent dict).
(RolloutWorker pid=18989) Return this new truncateds dict between dones/terminateds and infos. This
(RolloutWorker pid=18989) flag should indicate, whether the episode (for some agent or all agents) was
(RolloutWorker pid=18989) terminated prematurely due to some time constraint or other kind of horizon setting.
(RolloutWorker pid=18989)
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) During handling of the above exception, another exception occurred:
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) ray::RolloutWorker.__init__() (pid=18989, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f1059d50a10>)
(RolloutWorker pid=18989) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 614, in __init__
(RolloutWorker pid=18989) check_env(self.env)
(RolloutWorker pid=18989) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 94, in check_env
(RolloutWorker pid=18989) f"{actual_error}\n"
(RolloutWorker pid=18989) ValueError: Traceback (most recent call last):
(RolloutWorker pid=18989) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 82, in check_env
(RolloutWorker pid=18989) check_gym_environments(env)
(RolloutWorker pid=18989) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
(RolloutWorker pid=18989) raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
(RolloutWorker pid=18989) ValueError: Your environment () does not abide to the new gymnasium-style API!
(RolloutWorker pid=18989) From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) Learn more about the most important changes here:
(RolloutWorker pid=18989) https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) In order to fix this problem, do the following:
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) 1) Run pip install gymnasium on your command line.
(RolloutWorker pid=18989) 2) Change all your import statements in your code from
(RolloutWorker pid=18989) import gym -> import gymnasium as gym OR
(RolloutWorker pid=18989) from gym.space import Discrete -> from gymnasium.spaces import Discrete
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) For your custom (single agent) gym.Env classes:
(RolloutWorker pid=18989) 3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import
(RolloutWorker pid=18989) EnvCompatibility wrapper class.
(RolloutWorker pid=18989) 3.2) Alternatively to 3.1:
(RolloutWorker pid=18989) - Change your reset() method to have the call signature 'def reset(self, *,
(RolloutWorker pid=18989) seed=None, options=None)'
(RolloutWorker pid=18989) - Return an additional info dict (empty dict should be fine) from your reset()
(RolloutWorker pid=18989) method.
(RolloutWorker pid=18989) - Return an additional truncated flag from your step() method (between done and
(RolloutWorker pid=18989) info). This flag should indicate, whether the episode was terminated prematurely
(RolloutWorker pid=18989) due to some time constraint or other kind of horizon setting.
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) For your custom RLlib MultiAgentEnv classes:
(RolloutWorker pid=18989) 4.1) Either wrap your old MultiAgentEnv via the provided
(RolloutWorker pid=18989) from ray.rllib.env.wrappers.multi_agent_env_compatibility import
(RolloutWorker pid=18989) MultiAgentEnvCompatibility wrapper class.
(RolloutWorker pid=18989) 4.2) Alternatively to 4.1:
(RolloutWorker pid=18989) - Change your reset() method to have the call signature
(RolloutWorker pid=18989) 'def reset(self, *, seed=None, options=None)'
(RolloutWorker pid=18989) - Return an additional per-agent info dict (empty dict should be fine) from your
(RolloutWorker pid=18989) reset() method.
(RolloutWorker pid=18989) - Rename dones into terminateds and only set this to True, if the episode is really
(RolloutWorker pid=18989) done (as opposed to has been terminated prematurely due to some horizon/time-limit
(RolloutWorker pid=18989) setting).
(RolloutWorker pid=18989) - Return an additional truncateds per-agent dictionary flag from your step()
(RolloutWorker pid=18989) method, including the __all__ key (100% analogous to your dones/terminateds
(RolloutWorker pid=18989) per-agent dict).
(RolloutWorker pid=18989) Return this new truncateds dict between dones/terminateds and infos. This
(RolloutWorker pid=18989) flag should indicate, whether the episode (for some agent or all agents) was
(RolloutWorker pid=18989) terminated prematurely due to some time constraint or other kind of horizon setting.
(RolloutWorker pid=18989)
(RolloutWorker pid=18989)
(RolloutWorker pid=18989) The above error has been found in your environment! We've added a module for checking your custom environments. It may cause your experiment to fail if your environment is not set up correctly. You can disable this behavior via calling config.environment(disable_env_checking=True). You can run the environment checking module standalone by calling ray.rllib.utils.check_env([your env]).
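
The numbered fix steps above come down to two options for the project's environments: wrap the old gym-style env in gymnasium's EnvCompatibility wrapper (step 3.1), or update reset() and step() to the new signatures (step 3.2). A minimal sketch of the new-style API, using a placeholder class, spaces, and step limit rather than the project's real reach env:

    import numpy as np
    import gymnasium as gym
    from gymnasium.spaces import Box

    class ReachEnvNewAPI(gym.Env):  # placeholder, not the project's actual env
        def __init__(self):
            self.observation_space = Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
            self.action_space = Box(-1.0, 1.0, shape=(2,), dtype=np.float32)
            self._t = 0

        def reset(self, *, seed=None, options=None):  # new call signature
            super().reset(seed=seed)
            self._t = 0
            obs = np.zeros(3, dtype=np.float32)
            return obs, {}  # new API: return (obs, info)

        def step(self, action):
            self._t += 1
            obs = np.zeros(3, dtype=np.float32)
            reward = 0.0
            terminated = False           # the episode really finished
            truncated = self._t >= 100   # stopped early by a time limit / horizon
            return obs, reward, terminated, truncated, {}  # new API: 5-tuple

For an unmodified old-style env the quicker route is step 3.1, roughly: from gymnasium.wrappers import EnvCompatibility; env = EnvCompatibility(OldReachEnv()), where OldReachEnv stands in for the existing class.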
2023-04-03 21:40:53,464 ERROR actor_manager.py:497 -- Ray error, taking actor 1 out of service. The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=18987, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7fd497ec3150>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

During handling of the above exception, another exception occurred:

ray::RolloutWorker.init() (pid=18987, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7fd497ec3150>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 614, in init
check_env(self.env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 94, in check_env
f"{actual_error}\n"
ValueError: Traceback (most recent call last):
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 82, in check_env
check_gym_environments(env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

The above error has been found in your environment! We've added a module for checking your custom environments. It may cause your experiment to fail if your environment is not set up correctly. You can disable this behavior via calling config.environment(disable_env_checking=True). You can run the environment checking module standalone by calling ray.rllib.utils.check_env([your env]).
2023-04-03 21:40:53,467 ERROR actor_manager.py:497 -- Ray error, taking actor 2 out of service. The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=18988, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f94ed063990>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

During handling of the above exception, another exception occurred:

ray::RolloutWorker.init() (pid=18988, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f94ed063990>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 614, in init
check_env(self.env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 94, in check_env
f"{actual_error}\n"
ValueError: Traceback (most recent call last):
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 82, in check_env
check_gym_environments(env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

The above error has been found in your environment! We've added a module for checking your custom environments. It may cause your experiment to fail if your environment is not set up correctly. You can disable this behavior via calling config.environment(disable_env_checking=True). You can run the environment checking module standalone by calling ray.rllib.utils.check_env([your env]).
2023-04-03 21:40:53,468 ERROR actor_manager.py:497 -- Ray error, taking actor 3 out of service. The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=18989, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f1059d50a10>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

During handling of the above exception, another exception occurred:

ray::RolloutWorker.init() (pid=18989, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f1059d50a10>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 614, in init
check_env(self.env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 94, in check_env
f"{actual_error}\n"
ValueError: Traceback (most recent call last):
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 82, in check_env
check_gym_environments(env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

The above error has been found in your environment! We've added a module for checking your custom environments. It may cause your experiment to fail if your environment is not set up correctly. You can disable this behavior via calling config.environment(disable_env_checking=True). You can run the environment checking module standalone by calling ray.rllib.utils.check_env([your env]).
2023-04-03 21:40:53,469 ERROR actor_manager.py:497 -- Ray error, taking actor 4 out of service. The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=18990, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f757df7e490>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

During handling of the above exception, another exception occurred:

ray::RolloutWorker.init() (pid=18990, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f757df7e490>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 614, in init
check_env(self.env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 94, in check_env
f"{actual_error}\n"
ValueError: Traceback (most recent call last):
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 82, in check_env
check_gym_environments(env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

The above error has been found in your environment! We've added a module for checking your custom environments. It may cause your experiment to fail if your environment is not set up correctly. You can disable this behavior via calling config.environment(disable_env_checking=True). You can run the environment checking module standalone by calling ray.rllib.utils.check_env([your env]).
Traceback (most recent call last):
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 174, in init
local_worker=local_worker,
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 242, in _setup
validate=config.validate_workers_after_construction,
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 614, in add_workers
raise result.get()
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/actor_manager.py", line 477, in __fetch_result
result = ray.get(r)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/_private/worker.py", line 2382, in get
raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=18987, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7fd497ec3150>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

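To make steps 3.1/3.2 concrete, a gymnasium-style single-agent env looks roughly like the sketch below (the class, spaces, and horizon are hypothetical placeholders, not this repo's UR3 env):

```python
# Illustrative only: minimal env matching the reset()/step() signatures the
# error message asks for. All names and spaces here are made up.
import gymnasium as gym
import numpy as np
from gymnasium import spaces


class MyEnv(gym.Env):
    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)
        self._t = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._t = 0
        obs = np.zeros(3, dtype=np.float32)
        return obs, {}  # new API: (observation, info)

    def step(self, action):
        self._t += 1
        obs = np.zeros(3, dtype=np.float32)
        reward = 0.0
        terminated = False            # the task itself finished
        truncated = self._t >= 200    # hit a time limit / horizon
        return obs, reward, terminated, truncated, {}
```

Alternatively (step 3.1), an old-style env can be wrapped instead of rewritten, e.g. new_env = EnvCompatibility(old_env) after from gymnasium.wrappers import EnvCompatibility.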
For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

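Steps 4.1/4.2 translate into something like the following sketch for a multi-agent env (agent ids, spaces, and the horizon are placeholders, not this project's actual setup):

```python
# Illustrative only: gymnasium-style RLlib MultiAgentEnv returning per-agent
# terminateds/truncateds dicts that include the "__all__" key.
import numpy as np
from gymnasium import spaces
from ray.rllib.env.multi_agent_env import MultiAgentEnv


class MyMultiAgentEnv(MultiAgentEnv):
    def __init__(self, config=None):
        super().__init__()
        self._agent_ids = {"agent_0", "agent_1"}
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)
        self._t = 0

    def reset(self, *, seed=None, options=None):
        self._t = 0
        obs = {a: np.zeros(3, dtype=np.float32) for a in self._agent_ids}
        infos = {a: {} for a in self._agent_ids}
        return obs, infos  # new API: (obs dict, per-agent info dict)

    def step(self, action_dict):
        self._t += 1
        obs = {a: np.zeros(3, dtype=np.float32) for a in action_dict}
        rewards = {a: 0.0 for a in action_dict}
        terminateds = {a: False for a in action_dict}           # truly done?
        truncateds = {a: self._t >= 200 for a in action_dict}   # horizon hit?
        terminateds["__all__"] = all(terminateds.values())
        truncateds["__all__"] = all(truncateds[a] for a in action_dict)
        infos = {a: {} for a in action_dict}
        return obs, rewards, terminateds, truncateds, infos
```

Or, per step 4.1, wrap an existing old-style MultiAgentEnv with MultiAgentEnvCompatibility instead of rewriting it.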
During handling of the above exception, another exception occurred:

ray::RolloutWorker.__init__() (pid=18987, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7fd497ec3150>)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 614, in __init__
check_env(self.env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 94, in check_env
f"{actual_error}\n"
ValueError: Traceback (most recent call last):
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 82, in check_env
check_gym_environments(env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

The above error has been found in your environment! We've added a module for checking your custom environments. It may cause your experiment to fail if your environment is not set up correctly. You can disable this behavior via calling config.environment(disable_env_checking=True). You can run the environment checking module standalone by calling ray.rllib.utils.check_env([your env]).

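If rewriting the environment is not an option, one way to apply the compatibility wrapper without touching the env code is to re-register the env id with a wrapped constructor. A hedged sketch (the "reach" id and make_legacy_reach_env are placeholders, not the repo's real names):

```python
# Hypothetical sketch: register a legacy gym.Env under an env id via a wrapped creator.
# "reach" and make_legacy_reach_env() are placeholders for this repo's actual names.
from gymnasium.wrappers import EnvCompatibility
from ray.tune.registry import register_env


def env_creator(env_config):
    old_env = make_legacy_reach_env(env_config)  # assumed factory for the old-gym env
    return EnvCompatibility(old_env)


register_env("reach", env_creator)
```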
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 161, in
run(args)
File "main.py", line 115, in run
env=env_id, config=rllib_configs, logger_creator=logger_creator
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/algorithms/algorithm.py", line 448, in init
**kwargs,
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/tune/trainable/trainable.py", line 169, in init
self.setup(copy.deepcopy(self.config))
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/algorithms/algorithm.py", line 578, in setup
logdir=self.logdir,
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 192, in init
raise e.args[0].args[2]
ValueError: Traceback (most recent call last):
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 82, in check_env
check_gym_environments(env)
File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
ValueError: Your environment () does not abide to the new gymnasium-style API!
From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.

Learn more about the most important changes here:
https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium

In order to fix this problem, do the following:

  1. Run pip install gymnasium on your command line.
  2. Change all your import statements in your code from
    import gym -> import gymnasium as gym OR
    from gym.space import Discrete -> from gymnasium.spaces import Discrete

For your custom (single agent) gym.Env classes:
3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import EnvCompatibility wrapper class.
3.2) Alternatively to 3.1:

  • Change your reset() method to have the call signature 'def reset(self, *,
    seed=None, options=None)'
  • Return an additional info dict (empty dict should be fine) from your reset()
    method.
  • Return an additional truncated flag from your step() method (between done and
    info). This flag should indicate, whether the episode was terminated prematurely
    due to some time constraint or other kind of horizon setting.

For your custom RLlib MultiAgentEnv classes:
4.1) Either wrap your old MultiAgentEnv via the provided
from ray.rllib.env.wrappers.multi_agent_env_compatibility import MultiAgentEnvCompatibility wrapper class.
4.2) Alternatively to 4.1:

  • Change your reset() method to have the call signature
    'def reset(self, *, seed=None, options=None)'
  • Return an additional per-agent info dict (empty dict should be fine) from your
    reset() method.
  • Rename dones into terminateds and only set this to True, if the episode is really
    done (as opposed to has been terminated prematurely due to some horizon/time-limit
    setting).
  • Return an additional truncateds per-agent dictionary flag from your step()
    method, including the __all__ key (100% analogous to your dones/terminateds
    per-agent dict).
    Return this new truncateds dict between dones/terminateds and infos. This
    flag should indicate, whether the episode (for some agent or all agents) was
    terminated prematurely due to some time constraint or other kind of horizon setting.

The above error has been found in your environment! We've added a module for checking your custom environments. It may cause your experiment to fail if your environment is not set up correctly. You can disable this behavior via calling config.environment(disable_env_checking=True). You can run the environment checking module standalone by calling ray.rllib.utils.check_env([your env]).
(RolloutWorker pid=18987) 2023-04-03 21:40:53,456 ERROR worker.py:772 -- Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=18987, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7fd497ec3150>)
(RolloutWorker pid=18987) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
(RolloutWorker pid=18987) raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
(RolloutWorker pid=18987) ValueError: Your environment () does not abide to the new gymnasium-style API!
(RolloutWorker pid=18987) From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) Learn more about the most important changes here:
(RolloutWorker pid=18987) https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) In order to fix this problem, do the following:
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) 1) Run pip install gymnasium on your command line.
(RolloutWorker pid=18987) 2) Change all your import statements in your code from
(RolloutWorker pid=18987) import gym -> import gymnasium as gym OR
(RolloutWorker pid=18987) from gym.space import Discrete -> from gymnasium.spaces import Discrete
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) For your custom (single agent) gym.Env classes:
(RolloutWorker pid=18987) 3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import
(RolloutWorker pid=18987) EnvCompatibility wrapper class.
(RolloutWorker pid=18987) 3.2) Alternatively to 3.1:
(RolloutWorker pid=18987) - Change your reset() method to have the call signature 'def reset(self, *,
(RolloutWorker pid=18987) seed=None, options=None)'
(RolloutWorker pid=18987) - Return an additional info dict (empty dict should be fine) from your reset()
(RolloutWorker pid=18987) method.
(RolloutWorker pid=18987) - Return an additional truncated flag from your step() method (between done and
(RolloutWorker pid=18987) info). This flag should indicate, whether the episode was terminated prematurely
(RolloutWorker pid=18987) due to some time constraint or other kind of horizon setting.
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) For your custom RLlib MultiAgentEnv classes:
(RolloutWorker pid=18987) 4.1) Either wrap your old MultiAgentEnv via the provided
(RolloutWorker pid=18987) from ray.rllib.env.wrappers.multi_agent_env_compatibility import
(RolloutWorker pid=18987) MultiAgentEnvCompatibility wrapper class.
(RolloutWorker pid=18987) 4.2) Alternatively to 4.1:
(RolloutWorker pid=18987) - Change your reset() method to have the call signature
(RolloutWorker pid=18987) 'def reset(self, *, seed=None, options=None)'
(RolloutWorker pid=18987) - Return an additional per-agent info dict (empty dict should be fine) from your
(RolloutWorker pid=18987) reset() method.
(RolloutWorker pid=18987) - Rename dones into terminateds and only set this to True, if the episode is really
(RolloutWorker pid=18987) done (as opposed to has been terminated prematurely due to some horizon/time-limit
(RolloutWorker pid=18987) setting).
(RolloutWorker pid=18987) - Return an additional truncateds per-agent dictionary flag from your step()
(RolloutWorker pid=18987) method, including the __all__ key (100% analogous to your dones/terminateds
(RolloutWorker pid=18987) per-agent dict).
(RolloutWorker pid=18987) Return this new truncateds dict between dones/terminateds and infos. This
(RolloutWorker pid=18987) flag should indicate, whether the episode (for some agent or all agents) was
(RolloutWorker pid=18987) terminated prematurely due to some time constraint or other kind of horizon setting.
(RolloutWorker pid=18987)
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) During handling of the above exception, another exception occurred:
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) ray::RolloutWorker.__init__() (pid=18987, ip=10.7.19.227, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7fd497ec3150>)
(RolloutWorker pid=18987) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 614, in __init__
(RolloutWorker pid=18987) check_env(self.env)
(RolloutWorker pid=18987) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 94, in check_env
(RolloutWorker pid=18987) f"{actual_error}\n"
(RolloutWorker pid=18987) ValueError: Traceback (most recent call last):
(RolloutWorker pid=18987) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 82, in check_env
(RolloutWorker pid=18987) check_gym_environments(env)
(RolloutWorker pid=18987) File "/home/adnan/anaconda3/envs/savera/lib/python3.7/site-packages/ray/rllib/utils/pre_checks/env.py", line 139, in check_gym_environments
(RolloutWorker pid=18987) raise ValueError(ERR_MSG_OLD_GYM_API.format(env, ""))
(RolloutWorker pid=18987) ValueError: Your environment () does not abide to the new gymnasium-style API!
(RolloutWorker pid=18987) From Ray 2.3 on, RLlib only supports the new (gym>=0.26 or gymnasium) Env APIs.
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) Learn more about the most important changes here:
(RolloutWorker pid=18987) https://github.com/openai/gym and here: https://github.com/Farama-Foundation/Gymnasium
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) In order to fix this problem, do the following:
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) 1) Run pip install gymnasium on your command line.
(RolloutWorker pid=18987) 2) Change all your import statements in your code from
(RolloutWorker pid=18987) import gym -> import gymnasium as gym OR
(RolloutWorker pid=18987) from gym.space import Discrete -> from gymnasium.spaces import Discrete
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) For your custom (single agent) gym.Env classes:
(RolloutWorker pid=18987) 3.1) Either wrap your old Env class via the provided from gymnasium.wrappers import
(RolloutWorker pid=18987) EnvCompatibility wrapper class.
(RolloutWorker pid=18987) 3.2) Alternatively to 3.1:
(RolloutWorker pid=18987) - Change your reset() method to have the call signature 'def reset(self, *,
(RolloutWorker pid=18987) seed=None, options=None)'
(RolloutWorker pid=18987) - Return an additional info dict (empty dict should be fine) from your reset()
(RolloutWorker pid=18987) method.
(RolloutWorker pid=18987) - Return an additional truncated flag from your step() method (between done and
(RolloutWorker pid=18987) info). This flag should indicate, whether the episode was terminated prematurely
(RolloutWorker pid=18987) due to some time constraint or other kind of horizon setting.
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) For your custom RLlib MultiAgentEnv classes:
(RolloutWorker pid=18987) 4.1) Either wrap your old MultiAgentEnv via the provided
(RolloutWorker pid=18987) from ray.rllib.env.wrappers.multi_agent_env_compatibility import
(RolloutWorker pid=18987) MultiAgentEnvCompatibility wrapper class.
(RolloutWorker pid=18987) 4.2) Alternatively to 4.1:
(RolloutWorker pid=18987) - Change your reset() method to have the call signature
(RolloutWorker pid=18987) 'def reset(self, *, seed=None, options=None)'
(RolloutWorker pid=18987) - Return an additional per-agent info dict (empty dict should be fine) from your
(RolloutWorker pid=18987) reset() method.
(RolloutWorker pid=18987) - Rename dones into terminateds and only set this to True, if the episode is really
(RolloutWorker pid=18987) done (as opposed to has been terminated prematurely due to some horizon/time-limit
(RolloutWorker pid=18987) setting).
(RolloutWorker pid=18987) - Return an additional truncateds per-agent dictionary flag from your step()
(RolloutWorker pid=18987) method, including the __all__ key (100% analogous to your dones/terminateds
(RolloutWorker pid=18987) per-agent dict).
(RolloutWorker pid=18987) Return this new truncateds dict between dones/terminateds and infos. This
(RolloutWorker pid=18987) flag should indicate, whether the episode (for some agent or all agents) was
(RolloutWorker pid=18987) terminated prematurely due to some time constraint or other kind of horizon setting.
(RolloutWorker pid=18987)
(RolloutWorker pid=18987)
(RolloutWorker pid=18987) The above error has been found in your environment! We've added a module for checking your custom environments. It may cause your experiment to fail if your environment is not set up correctly. You can disable this behavior via calling config.environment(disable_env_checking=True). You can run the environment checking module standalone by calling ray.rllib.utils.check_env([your env]).
