How can I train my custom environment to obtain results? #3
Comments
Hello, thanks for your attention. Regarding training your "scene environment": we don't know what kind of problem your env solves, so we can't give you a detailed answer. However, if your env is a standard gym-style environment, like https://github.com/openai/gym/blob/master/gym/envs/mujoco/humanoid_v3.py, you just need to create one in the same style. Or, if your environment can be disclosed, we can assist with that as well.
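For readers unfamiliar with the gym-style interface mentioned above, here is a minimal sketch of what such an environment looks like. The class and dynamics (`SimplePointEnv`, a 1-D point mass) are purely illustrative and not part of GOPS or gym; the point is the `reset()`/`step()` contract, where `step` returns `(observation, reward, done, info)`.

```python
# Hypothetical minimal environment following the standard gym-style
# reset/step interface. Written without the gym dependency for clarity.

class SimplePointEnv:
    """A 1-D point mass that should reach the origin."""

    def __init__(self):
        self.x = 0.0

    def reset(self):
        self.x = 1.0        # start one unit away from the goal
        return [self.x]     # initial observation

    def step(self, action):
        # action is a scalar velocity command, clipped to [-0.1, 0.1]
        a = max(-0.1, min(0.1, float(action)))
        self.x += a
        reward = -abs(self.x)        # closer to the origin -> higher reward
        done = abs(self.x) < 1e-3    # episode ends at the goal
        return [self.x], reward, done, {}
```

A training loop then only needs to call `reset()` once per episode and `step(action)` repeatedly until `done` is true.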
First of all, thank you very much for your reply. What if I create my own customized environment, such as a drone path-planning or mobile-robot path-planning scenario, referring to the MPE environment? How should I train it with this code to obtain correct experimental results?
If your environment can be disclosed, please send your_env.py to xlm2223@gmail.com and we can help you with some standardization. In your_env.py, the model needs to be built correctly, and the step function, the reward calculation, the state space, the action space, the done condition, etc. need to be set correctly. You can refer to https://github.com/Intelligent-Driving-Laboratory/GOPS/tree/dev/gops/env for a simple env design.
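The pieces listed in the comment above (step function, reward, state/action spaces, done condition) can be sketched together for a toy 2-D robot reaching a goal point. Everything here is hypothetical and for illustration only; the actual GOPS envs use proper gym `Box` spaces and richer dynamics.

```python
# Illustrative sketch of the required env pieces: explicit space bounds,
# a step function, a reward, and a done condition. All names are
# hypothetical, not GOPS APIs.
import math

class ToyRobotEnv:
    # Space bounds, analogous to gym's Box spaces
    obs_low, obs_high = [-5.0, -5.0], [5.0, 5.0]   # (x, y) position
    act_low, act_high = [-0.5, -0.5], [0.5, 0.5]   # (dx, dy) step

    def __init__(self, goal=(0.0, 0.0)):
        self.goal = goal
        self.pos = [0.0, 0.0]
        self.steps = 0

    def reset(self):
        self.pos = [-3.0, 2.0]
        self.steps = 0
        return list(self.pos)

    def step(self, action):
        # Clip the action into the action space
        dx = max(self.act_low[0], min(self.act_high[0], action[0]))
        dy = max(self.act_low[1], min(self.act_high[1], action[1]))
        self.pos[0] += dx
        self.pos[1] += dy
        self.steps += 1
        dist = math.hypot(self.pos[0] - self.goal[0],
                          self.pos[1] - self.goal[1])
        reward = -dist                         # dense shaping toward the goal
        done = dist < 0.1 or self.steps >= 200 # reached goal or timed out
        return list(self.pos), reward, done, {"distance": dist}
```

Getting these four pieces (spaces, step, reward, done) consistent with each other is most of the "standardization" work mentioned above.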
I have contacted you by email; have you received it? Thank you again for your timely reply.
Already got your email, and I looked at the repository you posted. First, you need to write the robot_warehouse interface call; second, refer to a trajectory-tracking optimal control task: https://github.com/Intelligent-Driving-Laboratory/GOPS/blob/dev/gops/env/env_ocp/pyth_mobilerobot.py
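For context on what a trajectory-tracking reward typically looks like, here is a hedged sketch in the spirit of the linked mobile-robot task: penalize squared deviation from a reference trajectory plus a control-effort term. The function name, arguments, and weights are all assumptions for illustration; the actual GOPS implementation differs.

```python
# Hypothetical trajectory-tracking reward: negative weighted sum of
# squared tracking error and control effort.
def tracking_reward(state, ref, action, w_track=1.0, w_ctrl=0.1):
    """state, ref: (x, y) tuples; action: (u1, u2) control tuple."""
    track_err = (state[0] - ref[0]) ** 2 + (state[1] - ref[1]) ** 2
    ctrl_cost = action[0] ** 2 + action[1] ** 2
    return -(w_track * track_err + w_ctrl * ctrl_cost)
```

The weights `w_track` and `w_ctrl` trade off tracking accuracy against smooth, low-effort control, which is the usual tuning knob in such tasks.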
Can you guide me on how to integrate the environment, or provide some specific suggestions on how to adapt my task scene environment to this framework?
Please add me on WeChat: tip911
I want to ask how I should train my scene environment. I see the input is a trained model file; could you tell me how to train my own scene environment?