Multi-Agent Setup for one network #31
I understand what you mean by non-MARL distributed training of a single network; unfortunately, at this moment I don't have a working example of this. I would start by scouring the OpenAI Gym and Stable Baselines docs to see if it's implemented in any examples there, which would likely mean the same can be done in MindMaker using similar methods; you would probably need to modify the source a bit. This might be what you're looking for.
I am working on the same issue. Parallel training is the only feature this plugin is missing before we can do serious research with it, compared to using Unity ML-Agents or OpenAI Gym. The problem seems to be that we need to get the action/observation tuples in UE first, then dispatch them to each agent there. A lot of implementations automatically dispatch the data to each environment, which means we only get a single element in UE, which isn't what we want. There are a few approaches I have tried so far, albeit unsuccessfully. If you have any ideas on how to implement it, I'd be interested.
- Implementing a vectorized environment from the Stable Baselines3 library. I ended up creating the different environments with DummyVecEnv, following the guidelines:
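For reference, DummyVecEnv expects a list of zero-argument environment factories, one per sub-environment. A framework-free mimic of that idiom (ToyEnv and the per-env port numbers are hypothetical stand-ins for MindMaker's UE socket connection, not its actual API):

```python
class ToyEnv:
    """Hypothetical stand-in for the UE-backed MindMaker environment."""
    def __init__(self, port):
        self.port = port  # assumption: each sub-env needs its own connection

# DummyVecEnv-style: a list of zero-argument factories, one per sub-env.
# Note the p=p default argument -- without it, Python's late binding
# would make every factory build an env on the *last* port.
env_fns = [lambda p=p: ToyEnv(port=9000 + p) for p in range(4)]
envs = [fn() for fn in env_fns]
print([e.port for e in envs])  # [9000, 9001, 9002, 9003]
```

The late-binding pitfall in the factory list is a common cause of all sub-envs silently sharing one connection.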
However, I get the following error:
- Implementing a custom vectorized environment. Instead of relying on the SB3 vectorized environment, we can create our own: an environment that loops through a tuple of sub-environments at each step.
However, it seems impossible to pass a tuple as the observation space; the model requires a Gym space instead (for example, a Box). So I am not sure how to tackle this one either. I'll have a look at the examples Krumiaa sent; perhaps that will work better.
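One way around the tuple-observation problem, sketched framework-free (all names here are hypothetical, not MindMaker's or SB3's API): instead of exposing a tuple, batch the per-env observations into a single (n_envs, obs_dim) array, so the learner still sees one Box-shaped observation per step.

```python
class ToyEnv:
    """Hypothetical stand-in for the UE-backed environment."""
    def __init__(self, offset):
        self.offset = offset
        self.t = 0
    def reset(self):
        self.t = 0
        return [0.0, 0.0]
    def step(self, action):
        self.t += 1
        obs = [self.offset + self.t, float(action)]
        return obs, 1.0, self.t >= 5, {}

class LoopedVecEnv:
    """Hand-rolled vectorized wrapper: steps each sub-env in turn and
    batches the results, so the observation stays one (n_envs, obs_dim)
    array rather than a tuple of per-env observations."""
    def __init__(self, envs):
        self.envs = envs
    def reset(self):
        return [env.reset() for env in self.envs]
    def step(self, actions):
        obs, rewards, dones, infos = [], [], [], []
        for env, action in zip(self.envs, actions):
            o, r, d, i = env.step(action)
            if d:                # auto-reset finished sub-envs, as
                o = env.reset()  # SB3's vectorized envs do
            obs.append(o); rewards.append(r); dones.append(d); infos.append(i)
        return obs, rewards, dones, infos

vec = LoopedVecEnv([ToyEnv(offset=10 * k) for k in range(3)])
batch = vec.reset()
print(len(batch), len(batch[0]))  # 3 2
```

With this shape, the declared observation space can remain a single Box whose first dimension is the number of agents.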
Hello,
I've been enjoying the plugin a lot; however, I was trying to figure out whether it's possible to have a non-MARL multi-agent setup, specifically with PPO.
By this I mean having multiple agents collect experience for one neural network, which is a very common technique for algorithms like PPO. I found a section in the wiki about setting up multiple clients in order to train multiple networks, but I am trying to use multiple agents to train one network.
Is this possible with the plugin, and how if so?
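For context, the pattern being asked about can be sketched framework-free: several collectors each roll out the same shared network in their own environment copy, and one update consumes the pooled experience (everything below is a hypothetical toy, not MindMaker's API):

```python
import random

class Collector:
    """One experience-gathering agent (hypothetical stand-in for a UE actor)."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
    def rollout(self, weights, n_steps):
        # Act with the *shared* network weights in this agent's own env copy.
        return [(self.rng.random(), weights) for _ in range(n_steps)]

weights = 0.0          # the single network's parameters (toy scalar)
buffer = []            # one buffer shared by all collectors
for seed in range(4):  # four agents, one network
    buffer.extend(Collector(seed).rollout(weights, n_steps=8))

# One PPO-style update consumes the pooled experience.
mean_reward = sum(r for r, _ in buffer) / len(buffer)
weights += 0.01 * mean_reward
print(len(buffer))  # 32
```

The point of the sketch is only the data flow: many collectors, one buffer, one set of weights updated.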