Multi-Agent Setup for one network #31

JoshuaHames opened this issue Apr 7, 2023 · 2 comments

JoshuaHames commented Apr 7, 2023

Hello,

I've been enjoying the plugin a lot; however, I'm trying to figure out whether it's possible to have a non-MARL multi-agent setup, specifically with PPO.

By this I mean having multiple agents collect experience for a single neural network, which is a very common technique for algorithms like PPO. I found a section in the wiki about setting up multiple clients in order to train multiple networks, but I am trying to use multiple agents to train one network.

Is this possible with the plugin, and if so, how?

krumiaa commented Apr 7, 2023

I understand what you mean by non-MARL distributed training of a single network; unfortunately, I don't have a working example of this at the moment. I would start by scouring the OpenAI Gym and Stable-Baselines docs to see whether it's implemented in any examples there. If it is, the same can likely be done in MindMaker using similar methods, though you would probably need to modify the source a bit. This might be what you're looking for:
https://github.com/Rohan138/marl-baselines3
https://github.com/HumanCompatibleAI/adversarial-policies/blob/baa359420641b721aa8132d289c3318edc45ec74/src/aprl/envs/multi_agent.py
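
For reference, the standard Stable-Baselines3 pattern for collecting experience from several environment copies into a single PPO network is roughly the following. This is only a sketch using a stock Gym environment (CartPole-v1) as a stand-in; whether the same works with MindMaker's UnrealEnvWrap is exactly the open question in this thread.

    import gym
    from stable_baselines3 import PPO
    from stable_baselines3.common.vec_env import DummyVecEnv

    # Each callable builds one independent copy of the environment;
    # DummyVecEnv calls them itself and steps the copies sequentially.
    def make_env():
        return gym.make("CartPole-v1")

    n_envs = 4
    vec_env = DummyVecEnv([make_env for _ in range(n_envs)])

    # A single PPO network is trained on rollouts gathered from all copies.
    model = PPO("MlpPolicy", vec_env, verbose=1)
    model.learn(total_timesteps=10_000)

SubprocVecEnv has the same interface and runs the copies in separate processes instead of sequentially.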

oshirokn commented Apr 13, 2023

I am working on the same issue. Parallel training is the only feature this plugin is missing before we can do serious research with it, compared to Unity ML-Agents or OpenAI Gym.

The problem seems to be that we need to receive the batched action/observation data in UE first, then dispatch it to each agent there. A lot of implementations automatically dispatch the data to each environment, which means we would only get a single element in UE, which isn't what we want.

I have tried a few approaches so far, albeit unsuccessfully. If you have any ideas on how to implement it, I'd be interested.

- Implementing a vectorized environment from the Stable-Baselines3 library
Explanation here: https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html

So I ended up creating the different environments with DummyVecEnv, following the guidelines:

    # DummyVecEnv
    def make_env(rank, seed=0):
        env = UnrealEnvWrap()
        env.seed(rank + seed)
        return env

    env = DummyVecEnv([make_env(i) for i in range(n_agents)])

However, I get the following error:

File "C:\Users\fioshirk\Anaconda3\envs\Mindmaker\lib\threading.py", line 980, in _bootstrap_inner
self.run()
File "C:\Users\fioshirk\Anaconda3\envs\Mindmaker\lib\threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\fioshirk\Anaconda3\envs\Mindmaker\lib\site-packages\socketio\server.py", line 731, in _handle_event_internal
r = server._trigger_event(data[0], namespace, sid, *data[1:])
File "C:\Users\fioshirk\Anaconda3\envs\Mindmaker\lib\site-packages\socketio\server.py", line 756, in _trigger_event
return self.handlers[namespace]event
File "C:\Users\fioshirk\Desktop\MMS2\25.py", line 243, in receive
env = DummyVecEnv([make_env(i) for i in range(n_agents)])
File "C:\Users\fioshirk\Anaconda3\envs\Mindmaker\lib\site-packages\stable_baselines3\common\vec_env\dummy_vec_env.py", line 26, in init
self.envs = [fn() for fn in env_fns]
File "C:\Users\fioshirk\Anaconda3\envs\Mindmaker\lib\site-packages\stable_baselines3\common\vec_env\dummy_vec_env.py", line 26, in
self.envs = [fn() for fn in env_fns

TypeError: 'UnrealEnvWrap' object is not callable
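
The likely cause of this TypeError is that DummyVecEnv expects a list of callables that each build an environment, whereas make_env(i) above already returns an UnrealEnvWrap instance. A minimal sketch of the fix, following the env-factory pattern from the SB3 vec_envs guide (not tested against MindMaker):

    # make_env now returns a closure; DummyVecEnv calls it later to build each env
    def make_env(rank, seed=0):
        def _init():
            env = UnrealEnvWrap()
            env.seed(rank + seed)
            return env
        return _init

    env = DummyVecEnv([make_env(i) for i in range(n_agents)])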

- Implementing a custom vectorized environment

Instead of relying on the SB3 vectorized environment, we can create our own: an environment that loops through a tuple of sub-environments at each step.

class VectorizedEnvironment(gym.Env):
    def __init__(self, make_env, n):
        self.envs = tuple(make_env() for _ in range(n))
        print("Environments in Vectorized Environment: ", self.envs)

[...]

    def step(self, action):
        global observations
        global reward
        global UEreward
        global UEdone
        global obsflag
        obsflag = 0

        # Loop through each environment:
        for env, a in zip(self.envs, action):
            print("Step for environment number ", a)
            # send actions to UE as they are chosen by the RL algorithm

However, it seems impossible to pass a tuple for the observation space; the model requires a single Gym space instead (for example a Box). So I'm not sure how to tackle this one either.
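
One possible way around this, sketched below, is to subclass Stable-Baselines3's VecEnv base class instead of gym.Env: each sub-environment keeps its ordinary per-agent Box spaces, and SB3 batches observations from the sub-environments along the first axis itself, so no tuple space is needed. The sketch assumes UnrealEnvWrap exposes the usual Gym reset/step/seed API, follows the SB3 1.x-era VecEnv interface, and is untested against MindMaker; the names UnrealVecEnv and n_agents are only illustrative.

    import numpy as np
    from stable_baselines3.common.vec_env.base_vec_env import VecEnv

    class UnrealVecEnv(VecEnv):
        def __init__(self, make_env, n_agents):
            self.envs = [make_env() for _ in range(n_agents)]
            # All agents share the same per-agent spaces; SB3 stacks the
            # per-agent observations along axis 0 on its own.
            super().__init__(n_agents, self.envs[0].observation_space,
                             self.envs[0].action_space)
            self._actions = None

        def reset(self):
            return np.stack([env.reset() for env in self.envs])

        def step_async(self, actions):
            # The full batch of actions arrives here, so this is where it
            # could be sent to UE in one message and dispatched per agent.
            self._actions = actions

        def step_wait(self):
            obs, rewards, dones, infos = [], [], [], []
            for env, action in zip(self.envs, self._actions):
                o, r, d, info = env.step(action)
                if d:
                    o = env.reset()
                obs.append(o)
                rewards.append(r)
                dones.append(d)
                infos.append(info)
            return np.stack(obs), np.array(rewards), np.array(dones), infos

        def close(self):
            for env in self.envs:
                env.close()

        def seed(self, seed=None):
            return [env.seed(seed) for env in self.envs]

        def get_attr(self, attr_name, indices=None):
            return [getattr(env, attr_name) for env in self.envs]

        def set_attr(self, attr_name, value, indices=None):
            for env in self.envs:
                setattr(env, attr_name, value)

        def env_method(self, method_name, *args, indices=None, **kwargs):
            return [getattr(env, method_name)(*args, **kwargs) for env in self.envs]

        def env_is_wrapped(self, wrapper_class, indices=None):
            return [False] * self.num_envs

In principle, PPO("MlpPolicy", UnrealVecEnv(make_env, n_agents)) would then train one network on experience collected from all agents, which is the non-MARL setup asked about above.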

I'll have a look at the examples krumiaa sent; perhaps that will work better.
