Hi,
awesome project - thanks for sharing your code.
I would love to try the algorithm on the hexapod I recently built. What are the necessary steps to do that? Do I need to create a custom Gym wrapper? Is there a guideline on how to find the correct config for such a custom setup/robot?
Thanks
Hi @defrag-bambino, I would recommend using DreamerV3 directly, with the small model config and `--run.script parallel --run.train_ratio -1`. You don't need to specify anything else, but you can explicitly specify the CNN/MLP inputs in the config if you don't want the agent to look at all observation keys.
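For reference, the suggestion above might look something like the following invocation. This is a hedged sketch: the `--configs small` and `--logdir` arguments are assumptions based on common usage of the repo's `train.py`, not something stated in this thread.

```shell
# Hypothetical training command for a custom robot env.
# Only --run.script parallel and --run.train_ratio -1 come from the
# recommendation above; the other flags are illustrative assumptions.
python dreamerv3/train.py \
  --configs small \
  --run.script parallel \
  --run.train_ratio -1 \
  --logdir ~/logdir/hexapod
```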
You can return your env from the make_env() function in train.py. The env should either implement the Embodied API directly, or implement the Gym/DMEnv API and then be wrapped with the FromGym or FromDM wrappers to turn it into an Embodied env.
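To make the shape of this concrete, here is a minimal sketch of a Gym-style env stub for a hexapod, with the wrapping step shown as comments. The `HexapodEnv` class, its 18-joint observation, and the `obs_key` argument are all illustrative assumptions; only `make_env()`, `FromGym`, and `FromDM` are named in this thread.

```python
import numpy as np

class HexapodEnv:
    """Hypothetical Gym-style stub for an 18-joint hexapod."""

    def __init__(self, episode_length=1000):
        self._episode_length = episode_length
        self._step = 0

    def reset(self):
        self._step = 0
        return self._obs()

    def step(self, action):
        self._step += 1
        reward = 0.0  # replace with e.g. forward velocity of the robot
        done = self._step >= self._episode_length
        return self._obs(), reward, done, {}

    def _obs(self):
        # In a real setup this would read joint encoders / IMU data.
        return {'joints': np.zeros(18, dtype=np.float32)}

# Inside make_env() in train.py, one would then (hypothetically) do:
#
#   from embodied.envs import from_gym
#   env = from_gym.FromGym(HexapodEnv())
#   return env
```

The wrapper import and call at the bottom are commented out because the exact module path and signature should be checked against the repo; the env stub itself follows the classic Gym reset/step contract.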