
Problem with testing in another environment #148

Open
sattlelite opened this issue Jun 20, 2024 · 2 comments


sattlelite commented Jun 20, 2024

Hello Mr. Reiniscimurs, I have run tests after training; below is the average reward curve produced by training my improved method. I also ran tests on the original map, and the robot basically reached the target, although it collided in a few cases.

I then created a 30x30 map with some sparse, regular obstacles, replaced the TD3.world file, and modified velodyne_env.py so that the goal would not be placed inside an obstacle (a sketch of that change follows the figure below). The problem is that, even though everything runs properly, the robot stays in place, rotating left and right by a small angle, and never reaches its destination. The obvious explanation seems to be that the robot was never trained for this situation, so does it need to be trained in the new environment until the positive reward stabilizes? I also wonder whether the new environment requires more parameters in the TD3 network: if I increase the number of state inputs, do I need many more network parameters? Looking forward to your reply. Thank you so much!
[Figure: average reward curves, "Improvement 2-2"]
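For illustration, a minimal sketch of the kind of goal re-sampling described above (rejection-sampling goal positions until one lands outside every obstacle); the obstacle boxes, margin, and helper names are assumptions for illustration, not the actual code in velodyne_env.py:

```python
import random

# Axis-aligned bounding boxes (x_min, x_max, y_min, y_max) of the obstacles
# in the custom 30x30 world (placeholder values for illustration).
OBSTACLE_BOXES = [
    (-5.0, -3.0, 2.0, 4.0),
    (1.0, 3.0, -6.0, -4.0),
]
MARGIN = 0.5  # keep goals at least this far from any obstacle edge

def goal_in_obstacle(x, y):
    """Return True if (x, y) lies inside or too close to any obstacle box."""
    return any(
        x_min - MARGIN <= x <= x_max + MARGIN
        and y_min - MARGIN <= y <= y_max + MARGIN
        for x_min, x_max, y_min, y_max in OBSTACLE_BOXES
    )

def sample_goal(half_size=14.0):
    """Rejection-sample a goal position inside the map but outside all obstacles."""
    while True:
        x = random.uniform(-half_size, half_size)
        y = random.uniform(-half_size, half_size)
        if not goal_in_obstacle(x, y):
            return x, y
```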

sattlelite (Author) commented:

Sorry, the top two curves in this graph were run at a real_time_rate of 2000, and the bottom one in raw time.

sattlelite reopened this Jun 21, 2024
reiniscimurs (Owner) commented:

Hi,
No, there is no explicit need to train the model in the new environment. The issue here could be that the policy has either settled into a local optimum, or the input sensor values fall outside the range seen during training. For the latter, you can simply cap the sensor values at the maximums used in training, i.e. distance = min(distance, max_dist_in_training), and do the same for the laser readings. This should give you reasonable performance in a local-planning setting. You can of course also re-train the model, but I am not sure you would need to increase the model size. The only input that would realistically change is the distance to the goal, since the maximum possible distance grows with the map. And since actually reaching the goal would take longer, it might be useful to tune the discount factor and learning rate accordingly.
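A minimal sketch of that capping, assuming the laser readings arrive as a NumPy array; MAX_LASER_RANGE and MAX_GOAL_DIST are placeholder values, not the repository's actual training constants:

```python
import numpy as np

# Maximum values the model saw during training (placeholder assumptions;
# substitute the actual ranges from your training environment).
MAX_LASER_RANGE = 10.0  # meters
MAX_GOAL_DIST = 10.0    # meters

def cap_state(laser_readings, distance_to_goal):
    """Clamp sensor inputs so the deployed state never exceeds the trained range."""
    laser = np.minimum(np.asarray(laser_readings, dtype=np.float32), MAX_LASER_RANGE)
    distance = min(distance_to_goal, MAX_GOAL_DIST)
    return laser, distance
```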
