
Unable to import mujoco envs #33

Closed
aravindr93 opened this issue Jul 21, 2016 · 5 comments

@aravindr93

Hi. I tried to import a MuJoCo environment from the master branch, and I get an OSError. I already have MuJoCo set up and can create the environment through the OpenAI Gym bundle of mujoco-py.

The following two ways of creating the environment work:

import gym
gym_env = gym.make('Hopper-v1')
from rllab.envs.gym_env import GymEnv
rllab_env1 = GymEnv("Hopper-v1")

However, this doesn't work:

import rllab.mujoco_py
from rllab.envs.mujoco.hopper_env import HopperEnv

Error message:

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-5-88221ef8d105> in <module>()
----> 1 import rllab.mujoco_py
      2 from rllab.envs.mujoco.hopper_env import HopperEnv

/home/aravind/Programs/rllab/rllab/mujoco_py/__init__.py in <module>()
----> 1 from .mjviewer import MjViewer
      2 from .mjcore import MjModel
      3 from .mjcore import register_license
      4 import os
      5 from mjconstants import *

/home/aravind/Programs/rllab/rllab/mujoco_py/mjviewer.py in <module>()
----> 1 import glfw
      2 from mjlib import mjlib
      3 from ctypes import pointer, byref
      4 import ctypes
      5 import mjcore

/home/aravind/Programs/rllab/rllab/mujoco_py/glfw.py in <module>()
    134 
    135 
--> 136 _glfw = _load_library()
    137 if _glfw is None:
    138     raise ImportError("Failed to load GLFW3 shared library.")

/home/aravind/Programs/rllab/rllab/mujoco_py/glfw.py in _load_library()
     76     else:
     77         raise RuntimeError("unrecognized platform %s"%sys.platform)
---> 78     return ctypes.CDLL(libfile)
     79 
     80 

/home/aravind/anaconda2/envs/rllab/lib/python2.7/ctypes/__init__.pyc in __init__(self, name, mode, handle, use_errno, use_last_error)
    363 
    364         if handle is None:
--> 365             self._handle = _dlopen(self._name, mode)
    366         else:
    367             self._handle = handle

OSError: /home/aravind/Programs/rllab/vendor/mujoco/libglfw.so.3: cannot open shared object file: No such file or directory

It's not an issue with IPython, since I get the same error outside it as well. Thanks :)

@dementrock
Member

Hi @aravindr93, this is because we haven't fully migrated to the mujoco-py package yet; rllab has its own MuJoCo setup instructions, available as a bash script here: https://github.com/rllab/rllab/blob/master/scripts/setup_mujoco.sh

Alternatively, you can try changing the import statements from import rllab.mujoco_py to import mujoco_py. I expect this to mostly work, except for maybe a few interface differences that should be easy to fix.
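For concreteness, the change I mean would look roughly like this (a sketch only; the exact rllab files and imported symbols may differ, so adjust to whatever the modules actually import):

# before: rllab's vendored copy of the bindings
# import rllab.mujoco_py
# from rllab.mujoco_py import MjModel, MjViewer

# after: the standalone mujoco-py package that gym's MuJoCo setup installs
import mujoco_py
from mujoco_py import MjModel, MjViewer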

@aravindr93
Author

aravindr93 commented Jul 21, 2016

@dementrock Thanks. I'll try it out.

I have a related question: how different are the gym environments ported through rllab from the original gym environments? More specifically, between gym_env and rllab_env1 in my previous comment.

I'm trying out a project to test robustness to model errors. For this, I need to perturb the model parameters when required and then test a pre-trained policy on the perturbed model. The env created through gym allows me to do gym_env.model.<data> to change whatever I want. However, I don't get the same functionality with rllab_env1. Is it the same if I import HopperEnv from rllab.envs.mujoco.hopper_env?

Alternatively, what is the easiest way to train a policy on the environment created using gym_env = gym.make('Hopper-v1') without using the GymEnv wrapper? Thanks!

@dementrock
Member

Hi @aravindr93, in general the environments in gym are more polished, and, to my knowledge, some of the environment parameters were tweaked when they were ported into gym. If gym's environments fit your needs, I recommend using them directly.

Can you elaborate on why you want to train a policy without using the wrapper?

@aravindr93
Author

aravindr93 commented Jul 21, 2016

OK, I think I didn't explain the issue properly.

What I want to do is learn a policy on an environment, perturb the parameters of the environment, and test the returns in the perturbed environment. To perturb the parameters, I would do something like this:

import gym
gym_env = gym.make('Hopper-v1')
gym_env.model.body_mass = new_mass
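To spell out the intermediate step: the array comes back read-only, so I make a mutable copy, modify it, and assign it back. Roughly like this (a sketch, assuming the mujoco-py bindings bundled with gym, where body_mass is exposed as a read-only array):

import numpy as np
import gym

gym_env = gym.make('Hopper-v1')

# take a mutable copy, modify it, and assign the whole array back
masses = np.array(gym_env.model.body_mass)
masses[1] *= 1.2  # e.g. perturb the torso mass by 20%
gym_env.model.body_mass = masses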

This doesn't seem to be possible when I get the model through the GymEnv wrapper:

from rllab.envs.gym_env import GymEnv
rllab_env = GymEnv("Hopper-v1")
print rllab_env.model
AttributeError: 'GymEnv' object has no attribute 'model'

When we train a policy (say GaussianMLP), we usually pass an env.spec that comes from an rllab env object. However, this spec is different from gym's spec. Can you give me a few lines of code to train a policy on a gym environment without the GymEnv() wrapper? Thanks!

@dementrock
Member

I see. Right now this is not achievable with the current interface. If you look at the implementation of GymEnv, though, it's just a very thin wrapper on top of environments from gym. You could write a similar class that takes an already constructed gym environment as input, rather than just the ID of the environment. A rough sketch is below.
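Something along these lines should work (a sketch only, not tested; it assumes convert_gym_space from rllab.envs.gym_env and Step from rllab.envs.base behave the way GymEnv uses them, and that the base Env class builds env.spec from the two spaces):

import gym

from rllab.envs.base import Env, Step
from rllab.envs.gym_env import convert_gym_space


class WrappedGymEnv(Env):
    """Thin rllab wrapper around an already-constructed gym environment."""

    def __init__(self, gym_env):
        # keep the underlying gym env accessible, e.g. wrapped.env.model for perturbations
        self.env = gym_env
        self._observation_space = convert_gym_space(gym_env.observation_space)
        self._action_space = convert_gym_space(gym_env.action_space)

    @property
    def observation_space(self):
        return self._observation_space

    @property
    def action_space(self):
        return self._action_space

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return Step(observation=obs, reward=reward, done=done, **info)

    def render(self):
        self.env.render()

You can then train on the pre-constructed gym env roughly the same way as in the rllab examples (again, just a sketch):

from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

gym_env = gym.make('Hopper-v1')
env = WrappedGymEnv(gym_env)

policy = GaussianMLPPolicy(env_spec=env.spec)
baseline = LinearFeatureBaseline(env_spec=env.spec)
algo = TRPO(env=env, policy=policy, baseline=baseline,
            batch_size=4000, max_path_length=1000, n_itr=40)
algo.train()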

To access the model properly in your last example, you can do print rllab_env.env.model.
