Training Parking_her does not work. #392
Comments
Hello, you can try with the specified version of SB (Stable-Baselines). Hope this helps you solve the issue.
I solved the issue. You also have to set … Working with the parking-env itself, how do we input a desired goal position and an initial position/velocity for the agent?
Hello, for the goal position, modify the parameter `lane.position(lane.length/2, 0)` in parking-env and replace it with the desired goal position. For the initial position of the vehicle, modify the parameter `[i*20, 0]` in parking-env and replace it with the desired initial position of the vehicle. Thanks and regards,
Is there any way to do this from the created environment rather than by changing the files? The idea is to model uncertainty in the locations within a single script and run multiple simulations.
Something you could do is to manually override the goal and vehicle attributes on the created environment instance.
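For the scripted-uncertainty use case above, one approach is to sample the uncertain quantities in Python and write them into the environment after each reset. The snippet below is a minimal sketch: the sampling helper is self-contained, while the commented-out lines show hypothetical attribute overrides (`env.unwrapped.goal.position`, `controlled_vehicles`) whose exact names depend on your highway-env version.

```python
import random

def sample_scenario(goal_range=(-20.0, 20.0), speed_range=(0.0, 5.0), rng=None):
    """Sample an uncertain goal x-position and initial speed for one run."""
    rng = rng or random.Random()
    goal_x = rng.uniform(*goal_range)
    init_speed = rng.uniform(*speed_range)
    return goal_x, init_speed

rng = random.Random(0)
for _ in range(3):
    goal_x, init_speed = sample_scenario(rng=rng)
    # obs, info = env.reset()
    # After reset, write the sampled values into the env (hypothetical
    # attribute names -- check your highway-env version):
    # env.unwrapped.goal.position = [goal_x, 0.0]
    # env.unwrapped.controlled_vehicles[0].speed = init_speed
```

Overriding attributes after `reset()` keeps the uncertainty model in one script, at the cost of depending on internal attribute names that may change between highway-env releases.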
@JoshYank in case you want to use the latest SB3 and highway-env versions, you can take a look at the instructions in DLR-RM/stable-baselines3#780
Good afternoon, I was trying to train a policy for the parking-env to test against safety validation methods. When I tried to run the code on Colab as-is, I got an error when creating the environment:

```
AttributeError: 'ParkingEnv' object has no attribute 'np_random'
```

This error could be solved by reinstalling highway-env, or by initially installing older versions of gym and highway-env. After doing this, another error occurs when creating the model, before training:

```
TypeError: __init__() got an unexpected keyword argument 'create_eval_env'
```
It would be much appreciated if you have any insight on how to solve this problem. My research focuses more on the verification side than training or developing a controller to test. I don't have as much experience in training controllers with RL.
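On the second error: the `create_eval_env` keyword was removed from Stable-Baselines3 model constructors in later releases; the replacement pattern is to pass a separate evaluation environment via an `EvalCallback` to `model.learn(callback=...)`. A hedged setup sketch for a modern, gymnasium-based stack (the version pins are assumptions; verify against the compatible versions discussed in DLR-RM/stable-baselines3#780 before copying):

```shell
# Assumed-compatible modern stack; check the linked issue for exact pins.
pip install "stable-baselines3>=2.0" "highway-env>=1.8" gymnasium
```

With that stack, drop `create_eval_env=True` from the model constructor and instead build an evaluation env yourself and wrap it in `stable_baselines3.common.callbacks.EvalCallback`.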