Hello, I'm learning reinforcement learning and would like to run a multi-agent car racing RL simulation using your code.
Your code seems well-constructed and easy to understand, but I'm new to this, so I have a few questions.
To run the multi-agent car racing simulation with any RL code, is it okay to just place the RL code in the same folder as multi_car_racing.py?
To connect the environment you provide to my policy code (e.g. action = my_policy(obs) in README.md),
how can I find the shape of "env" and build the RL code?
Any advice would be appreciated.
Thanks in advance,
Woosuk
To run the multi-agent car racing simulation with any RL code, is it okay to just place the RL code in the same folder as multi_car_racing.py?
To connect the environment you provide to my policy code (e.g. action = my_policy(obs) in README.md), how can I find the shape of "env" and build the RL code?
In my example, env is an environment object and does not have a shape. What does have shapes are its observation and action spaces, which you can inspect with print(env.observation_space) and print(env.action_space) after creating the environment.
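As a minimal sketch of how my_policy could consume those shapes: the code below assumes a gym-style environment whose observation stacks one image per car (e.g. shape (num_agents, 96, 96, 3)) and which expects one continuous (steer, gas, brake) action per car. Those shapes are assumptions here; verify them against the printed observation_space and action_space for your environment.

```python
import numpy as np

def my_policy(obs):
    # Assumption: obs stacks one observation per car, so obs.shape[0]
    # is the number of agents (e.g. obs.shape == (num_agents, 96, 96, 3)).
    num_agents = obs.shape[0]
    # Placeholder random actions: steer in [-1, 1], gas and brake in [0, 1].
    # Replace this with your trained RL policy's forward pass.
    return np.random.uniform(low=[-1.0, 0.0, 0.0],
                             high=[1.0, 1.0, 1.0],
                             size=(num_agents, 3))
```

The README's loop would then call action = my_policy(obs) and pass the resulting stacked array to env.step(action) (again, check the exact expected action format against the repo's README).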