Unable to import mujoco envs #33
Comments
Hi @aravindr93, this is because we haven't fully migrated to the mujoco-py package yet; rllab has its own mujoco setup instructions, available as a bash script here: https://github.com/rllab/rllab/blob/master/scripts/setup_mujoco.sh. Alternatively, you can try changing the import statements.
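For concreteness, a minimal sketch of the two routes, assuming rllab's native env modules under `rllab.envs.mujoco` and a gym-era env id (both the task and the ids are stand-ins):

```python
# Route 1: run rllab's setup_mujoco.sh (URL above) first, then use rllab's
# native mujoco envs (module path per the rllab source tree):
from rllab.envs.mujoco.swimmer_env import SwimmerEnv
env = SwimmerEnv()

# Route 2: use gym's mujoco envs instead, which go through mujoco-py
# ("Swimmer-v1" is an env id of that gym era; adjust to your gym version):
import gym
env = gym.make("Swimmer-v1")
```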
@dementrock Thanks, I'll try it out. I have a related question: how different are the gym environments ported through rllab from the original gym environments? More specifically, I'm trying out a project to test robustness to model errors. For this, I need to perturb the model parameters when required and then test a pre-trained policy on the perturbed model; the env created through gym allows me to do that. Alternatively, what is the easiest way to train a policy on an environment created using gym directly?
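To make the comparison concrete, a sketch of the two objects I mean (Hopper-v1 as a stand-in task; `GymEnv` per `rllab.envs.gym_env`):

```python
import gym
from rllab.envs.gym_env import GymEnv

gym_env = gym.make("Hopper-v1")   # the original gym environment
rllab_env = GymEnv("Hopper-v1")   # the same task ported through rllab's wrapper

# The wrapper exposes an rllab-style spec that a raw gym env doesn't have:
print(rllab_env.spec.observation_space, rllab_env.spec.action_space)
```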
Hi @aravindr93, in general the environments in gym are more polished, and to my knowledge some of the environment parameters were tweaked when they were ported into gym. If gym's environments fit your needs, I recommend using them directly. Can you elaborate on why you want to train a policy without using the wrapper?
OK, I think I didn't explain the issue properly. What I want to do is: learn a policy on an environment; perturb the parameters of the environment; and test the returns in the perturbed environment. To perturb the parameters, I would do something like the sketch below (this requires creating a mutable copy as an intermediate step). This doesn't seem to be possible when I get the model using the GymEnv wrapper.
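Roughly this (a sketch of the mutable-copy pattern; `body_mass` and the wrapper depth are assumptions about the gym/mujoco-py versions in use):

```python
import numpy as np
import gym

env = gym.make("Hopper-v1")
model = env.env.model   # drop the outer wrapper if your gym version adds one;
                        # otherwise env.model directly

# Element-wise assignment on model fields doesn't stick, so make a mutable
# numpy copy, perturb it, then assign the whole array back:
body_mass = np.array(model.body_mass)
body_mass[1] *= 1.5     # e.g., perturb the torso mass by 50%
model.body_mass = body_mass

# Now roll out the pre-trained policy on the perturbed model and compare returns.
```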
When we train a policy (say a Gaussian MLP policy), we usually pass an env.spec that comes from an rllab env object. However, this spec is different from gym's specs. Can you give me a few lines of code to train a policy on a gym environment without the GymEnv() wrapper? Thanks!
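For reference, this is the usual pattern I mean (module paths as in rllab's bundled examples, e.g. examples/trpo_gym.py; the task id is a stand-in):

```python
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

env = normalize(GymEnv("Hopper-v1"))

# Both the policy and the baseline are constructed from the rllab-style
# env.spec, which a raw gym env does not provide:
policy = GaussianMLPPolicy(env_spec=env.spec, hidden_sizes=(32, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)

algo = TRPO(env=env, policy=policy, baseline=baseline,
            batch_size=4000, max_path_length=100, n_itr=40)
algo.train()
```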
I see. Right now this is not achievable using the current interface. If you look at the implementation of GymEnv, it keeps the underlying gym environment as an attribute. To access the model properly in the last example, you can go through that attribute.
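For example (assuming, per the GymEnv source, that the wrapped gym env lives on its `env` attribute):

```python
from rllab.envs.gym_env import GymEnv

wrapped = GymEnv("Hopper-v1")
model = wrapped.env.model   # wrapped.env is the underlying gym env; add another
                            # .env if your gym version wraps it again

# From here, the same copy-perturb-assign pattern as above applies.
```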
Hi. I tried to import a mujoco environment from the master branch, and I get an OS error. I already have mujoco set up and can create the environment from the OpenAI gym bundle of mujoco-py.
The following two ways of creating the environment both work.
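Along these lines (Hopper is just a stand-in for the actual task):

```python
import gym
from gym.envs.mujoco import HopperEnv  # direct class import from gym's mujoco bundle

env1 = gym.make("Hopper-v1")  # via the gym registry
env2 = HopperEnv()            # via the class directly
```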
However, this doesn't work:
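A sketch of the failing route (again with Hopper as a stand-in):

```python
from rllab.envs.mujoco.hopper_env import HopperEnv  # rllab's native mujoco binding
env = HopperEnv()  # fails with an OSError unless rllab's own mujoco setup was run
```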
Error message: an OSError raised during the import (full traceback not reproduced here).
It's not an issue with IPython, since I get the same error outside it as well. Thanks :)