
UnityActionException: behavior 3DBall?team=1 needs a continuous input of dimension (0, 2) #5204

Closed
nagybalint25 opened this issue Mar 31, 2021 · 4 comments
Labels
bug Issue describes a potential bug in ml-agents.

Comments

@nagybalint25

I am running a modified version of 3DBall in which I've deleted all agents except one. Whenever I issue an action through Gym (using the wrapper), for example a random action via

env.action_space.sample()

I get an error:

The behavior 3DBall?team=1 needs a continuous input of dimension (0, 2) for (<number of agents>, <action size>) but received input of dimension (1, 2)

There clearly is an agent in my executable, and mlagents-learn works fine with it too.

My code is as follows:


import gym
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

gym.logger.set_level(40)

def main():
    unity_env = UnityEnvironment(file_name="3dballv2")
    env = UnityToGymWrapper(unity_env, uint8_visual=True, allow_multiple_obs=True)
    env.reset()
    for _ in range(1000):
        env.render()
        env.step(env.action_space.sample())  # random action
    env.close()


if __name__ == '__main__':
    main()

Environment:

  • Unity Version: Unity 2020.1f1
  • OS + version: Windows 10
  • ML-Agents version: Release 15
  • Torch version: 1.7.1
  • Environment: 3DBalls
@nagybalint25 nagybalint25 added the bug Issue describes a potential bug in ml-agents. label Mar 31, 2021
@vincentpierre
Contributor

Hi @nagybalint25
I was able to reproduce the issue. The problem is that your agent is done, so the environment must be reset before stepping again.
The following code should do the trick:

import gym
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

gym.logger.set_level(40)

def main():
    unity_env = UnityEnvironment(file_name="3dballv2")
    env = UnityToGymWrapper(unity_env, uint8_visual=True, allow_multiple_obs=True)
    env.reset()
    for _ in range(1000):
        env.render()
        o, r, d, _ = env.step(env.action_space.sample())  # random action
        if d:
            env.reset()  # episode ended; reset before the next step
    env.close()

if __name__ == '__main__':
    main()

The error message is misleading and I will work on fixing it. Thank you for raising this issue.
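To make the failure mode concrete, here is a self-contained toy sketch (ToyEnv is a made-up stand-in, not actual ML-Agents code) of why stepping after `done` produces the dimension error: once the episode ends, zero agents are awaiting an action, so the environment expects an input of shape (0, 2) while the caller still supplies one action of shape (1, 2).

```python
# Toy stand-in illustrating why stepping a finished episode fails:
# after `done`, no agent accepts an action until reset() is called.
class ToyEnv:
    def __init__(self, episode_len=3):
        self.episode_len = episode_len
        self.expected_agents = 1  # agents currently awaiting an action
        self.t = 0

    def reset(self):
        self.t = 0
        self.expected_agents = 1

    def step(self, action):
        if self.expected_agents == 0:
            # Mirrors the reported error: expected (0, 2), got (1, 2)
            raise RuntimeError(
                "needs a continuous input of dimension (0, 2) "
                "but received input of dimension (1, 2)")
        self.t += 1
        done = self.t >= self.episode_len
        if done:
            self.expected_agents = 0  # terminal step: no action accepted
        return None, 0.0, done, {}

env = ToyEnv()
env.reset()
for _ in range(10):
    _, _, done, _ = env.step([0.0, 0.0])
    if done:
        env.reset()  # without this, the next step() raises
```

Dropping the `env.reset()` inside the loop reproduces the error on the step after the episode ends, which matches the behavior reported above.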

@nagybalint25
Author

nagybalint25 commented Mar 31, 2021 via email

@dongruoping
Contributor

Thanks @vincentpierre !

Since the fix has been merged, I'll close this issue.

@github-actions

github-actions bot commented May 3, 2021

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 3, 2021