
How can I train my environment to get results? #3

Open
Yu-zx opened this issue May 6, 2024 · 7 comments

Comments

@Yu-zx

Yu-zx commented May 6, 2024

I want to ask how I should train my scene environment to run. I see that your input is a trained file; could you tell me how I should train my own scene environment?

@drsssssss
Collaborator

Hello, thanks for your attention.

Regarding training your 'scene environment': we don't know what kind of problem your env is solving, so we can't give you a detailed answer.

However, if your env is a standard gym-style environment, like https://github.com/openai/gym/blob/master/gym/envs/mujoco/humanoid_v3.py, you just need to create an env_gym/gym_xxxx_data.py and gym.make it the same way as the other gym envs. Then replace the env_id parameter of example_train/main.py or example_train/dsac_mlp_humanoidconti_offserial.py with gym_xxxx, and you can train it.
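For illustration, a minimal sketch of what env_gym/gym_xxxx_data.py could look like, assuming the classic gym.Env interface (reset returning the observation, step returning the 4-tuple). The class name, observation/action shapes, dynamics, and reward below are hypothetical placeholders for your own task, not the project's actual template:

```python
# env_gym/gym_xxxx_data.py -- minimal sketch only; replace the placeholder
# dynamics and reward with your own scene model.
import gym
import numpy as np
from gym import spaces


class GymXxxx(gym.Env):
    """Hypothetical gym-style environment with a 4-dim state and 1-dim action."""

    def __init__(self):
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32
        )
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.state = np.zeros(4, dtype=np.float32)

    def reset(self):
        # Start near the origin; replace with your scene's initial condition.
        self.state = np.random.uniform(-0.05, 0.05, size=4).astype(np.float32)
        return self.state

    def step(self, action):
        # Placeholder dynamics; replace with your scene's transition model.
        self.state = self.state + 0.01 * float(np.tanh(action[0]))
        reward = -float(np.sum(self.state ** 2))  # example quadratic cost
        done = bool(np.abs(self.state).max() > 10.0)
        return self.state, reward, done, {}
```

Once the file is registered so that gym.make("gym_xxxx") succeeds, passing env_id=gym_xxxx to the training script should pick it up.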

Or if your environment can be disclosed, we can assist with that as well.

@Yu-zx
Author

Yu-zx commented May 6, 2024

First of all, thank you very much for your reply. What if I create my own customized environment, such as a drone path-planning scenario or a mobile-robot path-planning scenario, referring to the MPE environment? How should I train it with this code to obtain correct experimental results?

@drsssssss
Collaborator

If your environment can be disclosed, please send your_env.py to xlm2223@gmail.com. We can help you with some standardization.

In your_env.py, the model needs to be built correctly, and the step function, the reward calculation, the state space, the action space, the done condition, etc. need to be set correctly. You can refer to https://github.com/Intelligent-Driving-Laboratory/GOPS/tree/dev/gops/env for a simple design of your env.
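As a concrete (hypothetical) example of setting the reward calculation and the done condition, a goal-reaching reward for a path-planning task might look like the sketch below; the distance threshold, weights, and arrival bonus are arbitrary placeholders:

```python
import numpy as np


def compute_reward_and_done(position, goal, action, goal_tol=0.1):
    """Hypothetical shaped reward for one path-planning step.

    position, goal: np.ndarray of shape (2,); action: np.ndarray of controls.
    """
    dist = float(np.linalg.norm(goal - position))
    reached = dist < goal_tol
    # Penalize distance to the goal and control effort; bonus on arrival.
    reward = -dist - 0.01 * float(np.sum(action ** 2)) + (10.0 if reached else 0.0)
    return reward, reached
```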

@Yu-zx
Author

Yu-zx commented May 9, 2024

I have contacted you by email; have you received it? Thank you again for your timely reply.

@drsssssss
Collaborator

drsssssss commented May 9, 2024

I already got your email, and I looked at the repository you posted.

First, you need to write the robot_warehouse interface call; second, refer to a trajectory-tracking optimal control task: https://github.com/Intelligent-Driving-Laboratory/GOPS/blob/dev/gops/env/env_ocp/pyth_mobilerobot.py
Make sure your_env_data has the right reset function, step function, work_space, action_space, and observation_space, and design the right reward feedback. You don't need the 'constraint'-related parts in this environment!
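A skeleton of the pieces listed above, using the attribute names as written (work_space, action_space, observation_space). These names and signatures are assumptions taken from this comment; check pyth_mobilerobot.py for the exact base class and conventions GOPS expects:

```python
# your_env_data.py -- interface sketch only; mirror pyth_mobilerobot.py for
# the exact GOPS base class, signatures, and naming conventions.
import numpy as np
from gym import spaces


class YourEnvData:
    def __init__(self):
        # work_space: assumed here to bound the positions sampled at reset.
        self.work_space = np.array([[-5.0, -5.0], [5.0, 5.0]], dtype=np.float32)
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32
        )
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.state = np.zeros(4, dtype=np.float32)

    def reset(self):
        # Sample an initial position inside the work space.
        low, high = self.work_space
        self.state[:2] = np.random.uniform(low, high).astype(np.float32)
        self.state[2:] = 0.0
        return self.state

    def step(self, action):
        # Placeholder kinematics; replace with your robot's dynamics.
        self.state[:2] += 0.1 * action
        reward = -float(np.linalg.norm(self.state[:2]))  # example tracking cost
        done = bool(np.linalg.norm(self.state[:2]) < 0.05)
        return self.state, reward, done, {}
```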

@Yu-zx
Author

Yu-zx commented May 9, 2024

Can you guide me on how to connect my environment, or give some specific suggestions on how to modify my task's scene environment to fit this framework?

@drsssssss
Collaborator

Please add my WeChat: tip911
