.. automodule:: mlpro.rl.examples.howto_rl_agent_004_train_multiagent_with_own_policy_on_multicartpole_environment
**Prerequisites**

Please install the following packages to run this example properly:
**Executable code**
.. literalinclude:: ../../../../../../../../src/mlpro/rl/examples/howto_rl_agent_004_train_multiagent_with_own_policy_on_multicartpole_environment.py
    :language: python
**Results**
The output is similar to that of :ref:`Howto RL-AGENT-002 <Howto Agent RL 002>`. However, three cartpole windows are opened while the training log runs through. In addition, the training result folder contains one more pkl file for the second agent.
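The pkl result files are standard Python pickles, so they can be opened for inspection with the standard library alone. The sketch below uses a hypothetical file name and a plain dictionary as a stand-in for the agent state; the actual files written during training contain MLPro's own serialized objects.

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for a saved agent's state; the real result files
# contain the serialized agent objects produced by the training run.
agent_state = {"agent": "agent_1", "policy_params": [0.1, 0.2, 0.3]}

# Write the state the way a pkl result file could be written ...
path = os.path.join(tempfile.mkdtemp(), "agent_1.pkl")
with open(path, "wb") as f:
    pickle.dump(agent_state, f)

# ... and load it back for inspection.
with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored["agent"])  # -> agent_1
```

The same load pattern applies to any pkl file in the result folder, provided the classes it references are importable at load time.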
**Cross Reference**