
How to use uniform control policy? #49

Open

lchenat opened this issue Oct 18, 2016 · 8 comments

Comments

@lchenat (Contributor) commented Oct 18, 2016

I want to run my new task with random actions using uniform_control_policy to get a reference baseline, but I could not figure out which algo I should use. I tried rewriting batch_plot but got many errors. Is there an elegant way to run my task with a random policy?
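For context, a uniform control policy just ignores the observation and samples each action dimension uniformly from the action bounds. A minimal self-contained sketch of that idea (illustrative names only, not rllab's actual class or signatures):

```python
import random

class ToyUniformControlPolicy(object):
    """Toy stand-in for a uniform control policy: ignores the
    observation and samples each action dimension uniformly from
    the given action-space bounds."""

    def __init__(self, action_low, action_high):
        self.action_low = action_low
        self.action_high = action_high

    def get_action(self, observation):
        # The observation is ignored: this is pure random exploration.
        action = [random.uniform(lo, hi)
                  for lo, hi in zip(self.action_low, self.action_high)]
        return action, {}  # (action, agent_info), mirroring rllab's convention

    def get_param_values(self):
        return []  # no trainable parameters

policy = ToyUniformControlPolicy([-1.0, -1.0], [1.0, 1.0])
action, info = policy.get_action(observation=None)
```

Because such a policy has no parameters and no distribution, any "algorithm" driving it only needs to collect rollouts, which is why a no-op algorithm is the natural pairing.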

@dementrock (Member)

@lchenat (Contributor, Author) commented Oct 18, 2016

I tried NOP and got the following error:

Traceback (most recent call last):
  File "/home/data/lchenat/rllab-master/scripts/run_experiment_lite.py", line 115, in <module>
    run_experiment(sys.argv)
  File "/home/data/lchenat/rllab-master/scripts/run_experiment_lite.py", line 102, in run_experiment
    maybe_iter = concretize(data)
  File "/home/data/lchenat/rllab-master/rllab/misc/instrument.py", line 1018, in concretize
    return method(*args, **kwargs)
  File "/home/data/lchenat/rllab-master/rllab/algos/batch_polopt.py", line 250, in train
    paths = self.sampler.obtain_samples(itr)
  File "/home/data/lchenat/rllab-master/rllab/algos/batch_polopt.py", line 27, in obtain_samples
    cur_params = self.algo.policy.get_param_values()
  File "/home/data/lchenat/rllab-master/rllab/core/parameterized.py", line 57, in get_param_values
    for param in self.get_params(**tags)]
  File "/home/data/lchenat/rllab-master/rllab/core/parameterized.py", line 34, in get_params
    if tag_tuple not in self._cached_params:
AttributeError: 'UniformControlPolicy' object has no attribute '_cached_params'

I am a bit confused about this, because in parameterized.py, _cached_params is actually set:

class Parameterized(object):

    def __init__(self):
        self._cached_params = {}
        self._cached_param_dtypes = {}
        self._cached_param_shapes = {}
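The attribute is set in the base class, but it only exists on an instance if that base __init__ actually runs. A minimal reproduction of the failure mode (illustrative class names, not the actual rllab hierarchy):

```python
class Parameterized(object):
    def __init__(self):
        self._cached_params = {}

class BrokenPolicy(Parameterized):
    def __init__(self):
        # Base __init__ is never invoked, so _cached_params is never set.
        pass

class FixedPolicy(Parameterized):
    def __init__(self):
        Parameterized.__init__(self)  # explicitly initialize the base class

broken_has_cache = hasattr(BrokenPolicy(), '_cached_params')
fixed_has_cache = hasattr(FixedPolicy(), '_cached_params')
```

So if any class in the policy's inheritance chain defines its own __init__ without chaining up to Parameterized, the AttributeError above is exactly what you see on the first get_params call.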

@dementrock (Member)

Hmm, weird. I cannot reproduce the error on my end. What if, in the initializer of UniformControlPolicy, you explicitly invoke Parameterized.__init__(self)?

@lchenat (Contributor, Author) commented Oct 18, 2016

I invoked Parameterized.__init__(self) in Policy; it seems that I have an old version of rllab. I then got this error:

Traceback (most recent call last):
  File "/home/data/lchenat/rllab-master/scripts/run_experiment_lite.py", line 115, in <module>
    run_experiment(sys.argv)
  File "/home/data/lchenat/rllab-master/scripts/run_experiment_lite.py", line 102, in run_experiment
    maybe_iter = concretize(data)
  File "/home/data/lchenat/rllab-master/rllab/misc/instrument.py", line 1018, in concretize
    return method(*args, **kwargs)
  File "/home/data/lchenat/rllab-master/rllab/algos/batch_polopt.py", line 251, in train
    samples_data = self.sampler.process_samples(itr, paths)
  File "/home/data/lchenat/rllab-master/rllab/algos/batch_polopt.py", line 73, in process_samples
    ent = np.mean(self.algo.policy.distribution.entropy(agent_infos))
AttributeError: 'UniformControlPolicy' object has no attribute 'distribution'
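This second error arises because process_samples assumes every policy exposes a distribution attribute, which a parameter-free uniform policy does not. One workaround (a sketch only; the real rllab process_samples accesses policy.distribution unconditionally, and the helper name here is hypothetical) is to guard the entropy computation:

```python
def mean_entropy(policy, agent_infos):
    """Return the mean policy entropy, or 0.0 for policies (like a
    uniform control policy) that expose no `distribution` attribute."""
    distribution = getattr(policy, 'distribution', None)
    if distribution is None:
        return 0.0  # entropy is not meaningful / not computable here
    entropies = distribution.entropy(agent_infos)
    return sum(entropies) / float(len(entropies))

class NoDistributionPolicy(object):
    """Stand-in for a policy without a distribution attribute."""
    pass

entropy = mean_entropy(NoDistributionPolicy(), agent_infos=[])
```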

@lchenat (Contributor, Author) commented Oct 18, 2016

I am using Python 2.7, and I got the same problem after downloading rllab-py2 again.

@lchenat (Contributor, Author) commented Oct 18, 2016

By the way, when I run multiple Python scripts for different algorithms, I get the following errors:

using seed 8
using seed 8
Could not import matplotlib.pyplot, therefore cma.plot() etc. is not available
Error while instantiating <class 'rllab.policies.gaussian_mlp_policy.GaussianMLPPolicy'>
Traceback (most recent call last):
  File "/home/data/lchenat/rllab-master/rllab/misc/instrument.py", line 1032, in concretize
    *args, **kwargs)
  File "/home/data/lchenat/rllab-master/rllab/policies/gaussian_mlp_policy.py", line 110, in __init__
    super(GaussianMLPPolicy, self).__init__(env_spec)
  File "/home/data/lchenat/rllab-master/rllab/policies/base.py", line 7, in __init__
    super(Policy, self).__init__()
TypeError: __init__() takes exactly 2 arguments (1 given)
Traceback (most recent call last):
  File "/home/data/lchenat/rllab-master/scripts/run_experiment_lite.py", line 115, in <module>
    run_experiment(sys.argv)
  File "/home/data/lchenat/rllab-master/scripts/run_experiment_lite.py", line 102, in run_experiment
    maybe_iter = concretize(data)
  File "/home/data/lchenat/rllab-master/rllab/misc/instrument.py", line 1014, in concretize
    obj = concretize(maybe_stub.obj)
  File "/home/data/lchenat/rllab-master/rllab/misc/instrument.py", line 1029, in concretize
    kwargs = concretize(maybe_stub.kwargs)
  File "/home/data/lchenat/rllab-master/rllab/misc/instrument.py", line 1044, in concretize
    ret[concretize(k)] = concretize(v)
  File "/home/data/lchenat/rllab-master/rllab/misc/instrument.py", line 1038, in concretize
    ret = maybe_stub.stub_cache
  File "/home/data/lchenat/rllab-master/rllab/misc/instrument.py", line 155, in __getattr__
    raise AttributeError('Cannot get attribute %s from %s' % (item, self.proxy_class))
AttributeError: Cannot get attribute __stub_cache from <class 'rllab.policies.gaussian_mlp_policy.GaussianMLPPolicy'>

Is it OK to run several algorithms in different scripts at the same time?

edit: oh, it is not a problem with running several scripts; it is actually caused by adding super(Policy, self).__init__() in __init__ of the Policy base class. I added this because it seemed to solve the uniform control policy problem.
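The TypeError above is a classic cooperative-super pitfall: a class in the middle of the inheritance chain calls super().__init__() with no arguments, but the next __init__ in the MRO requires one. A minimal self-contained reproduction (illustrative names, not the rllab classes):

```python
class Spec(object):
    """Stand-in for something like env_spec."""
    pass

class LayeredBase(object):
    def __init__(self, spec):          # the next class in the MRO needs `spec`
        self.spec = spec

class BrokenMiddle(LayeredBase):
    def __init__(self, spec):
        # Bug: forgets to forward `spec` up the chain -> TypeError.
        super(BrokenMiddle, self).__init__()

class FixedMiddle(LayeredBase):
    def __init__(self, spec):
        # Fix: forward the required argument.
        super(FixedMiddle, self).__init__(spec)

try:
    BrokenMiddle(Spec())
    broken_raised = False
except TypeError:
    broken_raised = True

fixed = FixedMiddle(Spec())
```

This matches the traceback: Policy's bare super(Policy, self).__init__() reached an __init__ that expected an argument it never received.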

@dementrock (Member)

I just pushed a fix (55b7df1) for the attribute error.

I recommend upgrading to the master branch, which uses Python 3. Also, it is recommended to call the superclass constructor explicitly instead of using super.

@lchenat (Contributor, Author) commented Oct 19, 2016

Done! Thanks for the update.

jonashen pushed a commit to jonashen/rllab that referenced this issue May 29, 2018
Design the ros environment support for rllab. Add sawyer simulation support and gazebo environment support.

Refer to: rll#49