
Bullet support #5

Closed
ryanjulian opened this issue Feb 28, 2018 · 8 comments

@ryanjulian
Owner

No description provided.

@ryanjulian
Owner Author

ryanjulian commented Mar 19, 2018

We would like to add support for the Bullet physics engine to rllab. Thankfully, the Bullet team has recently provided Python bindings in the form of pybullet, and even provides examples of how to implement the gym.Env interface (from OpenAI Gym) using pybullet.

This task is to add pybullet to the rllab conda environment, and implement a class (similar to GymEnv, e.g. BulletEnv) which allows any rllab algorithm to learn against pybullet environments. You will also need to implement the plot interface, if pybullet does not already provide one, which shows the user a 3D animation of the environment. Essentially, you should duplicate the experience of running one of the MuJoCo-based examples (e.g. trpo_swimmer.py), but using a Bullet environment instead. You should include examples (in examples/ and sandbox/rocky/tf/launchers/) of launcher scripts which use an algorithm (suggestion: TRPO) to train the KukaGymEnv environment.
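The wrapper described above might look something like the following sketch. BulletEnv is the class name suggested in this issue, but the method signatures here are simplified stand-ins for rllab's actual Env interface, and the delegation shown is an assumption about the design, not the real implementation:

```python
class BulletEnv:
    """Hypothetical wrapper that adapts any object implementing the
    gym.Env interface (e.g. a pybullet environment) for use by rllab
    algorithms. Method names are simplified for illustration."""

    def __init__(self, bullet_env):
        # e.g. bullet_env = KukaGymEnv() from pybullet's gym examples
        self._env = bullet_env

    def reset(self):
        # Delegate to the wrapped environment's gym-style reset.
        return self._env.reset()

    def step(self, action):
        # gym.Env.step returns (observation, reward, done, info).
        return self._env.step(action)

    def render(self, mode="human"):
        # The "plot" interface from the issue would map to rendering
        # the 3D view; delegate to the wrapped environment.
        return self._env.render(mode=mode)
```

Because pybullet environments already implement the Gym interface, the wrapper is mostly pass-through; the interesting work is in plot support and rllab spec plumbing, which this sketch omits.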

This is conceptually the same as GymEnv, which allows rllab users to import any OpenAI Gym environment and learn against it. In fact, pybullet environments implement the Gym interface, so in theory we should be done as soon as we can import pybullet. In practice, our constructor for Gym environments only takes the string name (e.g. "Humanoid-v1") of a Gym environment, not the class of a Gym environment. The pybullet environments do not have string shortcuts because they are not part of the official Gym repository. Furthermore, we'd like to use other unofficial Gym environments in rllab, but it is currently difficult for the same reason.
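One way to resolve the string-vs-class constructor problem is to dispatch on the argument type, keeping the string path backwards-compatible. This is a hypothetical sketch, not rllab's actual GymEnv; `_make_from_id` is an illustrative stand-in for the gym.make lookup:

```python
class GymEnv:
    """Simplified stand-in for rllab's GymEnv; names are illustrative."""

    def __init__(self, env):
        if isinstance(env, str):
            # Existing behavior: resolve a registered id like "Humanoid-v1".
            # The real implementation would call gym.make(env).
            self._env = self._make_from_id(env)
        else:
            # New behavior: accept any object implementing the gym.Env
            # interface directly, e.g. an unofficial pybullet environment.
            self._env = env

    def _make_from_id(self, env_id):
        # Stubbed here; real code would use the Gym registry.
        raise NotImplementedError("real code would use gym.make")

    def reset(self):
        return self._env.reset()

    def step(self, action):
        return self._env.step(action)
```

Existing callers that pass a string are untouched, which is the backwards-compatibility requirement stated later in this issue.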

So you might structure this task as two pull requests: (1) adding pybullet to the conda environment, and (2) modifying GymEnv to support arbitrary environments which implement the gym.Env interface (attempted in #12).

Consider this a professional software engineering task, and provide a high-quality solution which does not break existing users, minimizes change, and is stable. Please always use PEP8 style in your code, and format it using YAPF (with the PEP8 setting). Submit your pull request against the integration branch of this repository.

Some notes:

  • You can find examples of how to launch rllab in examples and sandbox/rocky/tf/launchers. Note that everything must run using the run_experiment_lite wrapper.
  • rllab currently has two parallel implementations of the neural network portions of the library. The original is written in Theano and is found in rllab/. The tree sandbox/rocky/tf re-implements classes from the original tree using TensorFlow, and is backwards-compatible with the Theano tree. We are working towards using only one NN library soon, but for now your implementation needs to work in both trees.
  • rllab is an upstream dependency to many projects, so it is important we do not break the existing APIs. Adding to APIs is fine as long as there is a good reason.

@Anirudhkashi

So, if I understand this correctly:
PR 1) Add pybullet to environment.yml, create a BulletEnv class similar to GymEnv, and add examples of existing environments to the examples folder.
PR 2) Modify GymEnv to accept an env class object instead of a string.

@ryanjulian
Owner Author

You decide. The goal is to enable Bullet support. Try it out and see what works and makes sense.

As I noted above, since Bullet implements the Gym interface, a lot of the GymEnv code is probably reusable to achieve this, but it might require some modification. If you modify GymEnv, you still need to keep your changes backwards-compatible with the rest of rllab (and test the backwards compatibility).

If you make changes to existing code to support a new feature, a good practice is to split the change into two parts: one for the underlying changes (which enable the new feature) and one for the new feature.

Anirudhkashi added a commit to Anirudhkashi/rllab that referenced this issue Mar 24, 2018
Example added to example folder
ryanjulian#5
@Anirudhkashi

Here are the changes I have made as of now: Diff.

  1. I have added pybullet to the conda env and increased the OpenAI Gym version.
  2. I have provided a way to register any class as a pybullet env: BulletEnv
  3. I have added an example, trpo_bullet_kuka.py, to the examples folder.
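Point 2 (registering arbitrary classes under string ids) could be as simple as a string-to-class registry mirroring Gym's own, since pybullet environments lack official string shortcuts. The names below (register_env, make_env, _ENV_REGISTRY) are hypothetical illustrations, not the actual PR code:

```python
# Minimal sketch of a string-id registry for environment classes,
# mimicking Gym's registry for environments that lack official ids.
_ENV_REGISTRY = {}

def register_env(env_id, env_cls):
    """Map a string id to an environment class."""
    _ENV_REGISTRY[env_id] = env_cls

def make_env(env_id, *args, **kwargs):
    """Instantiate a previously registered environment class by id."""
    if env_id not in _ENV_REGISTRY:
        raise KeyError("unknown environment id: %s" % env_id)
    return _ENV_REGISTRY[env_id](*args, **kwargs)
```

With something like this, a string-only GymEnv constructor could keep working unchanged while unofficial environments gain string shortcuts.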

I have not added the tf version yet. I am stuck on an issue where the plot window is duplicated.
Can you please check whether this is the correct way?

@ryanjulian
Owner Author

ryanjulian commented Mar 25, 2018 via email

@ryanjulian
Owner Author

ryanjulian commented Mar 25, 2018 via email

@Anirudhkashi

Okay, I will send the PR after I make these changes. I have added a few questions in the comments asking for clarification.

Anirudhkashi added a commit to Anirudhkashi/rllab that referenced this issue Mar 27, 2018
Example added to example folder
ryanjulian#5
Anirudhkashi added a commit to Anirudhkashi/rllab that referenced this issue Mar 31, 2018
Example added to example folder
ryanjulian#5
Anirudhkashi added a commit to Anirudhkashi/rllab that referenced this issue Apr 1, 2018
Example added to example folder
ryanjulian#5
@ryanjulian ryanjulian self-assigned this Apr 6, 2018
@ryanjulian ryanjulian removed their assignment May 15, 2018
@ryanjulian ryanjulian added this to the Week of May 21 milestone May 15, 2018
@ryanjulian ryanjulian assigned ghost May 21, 2018
@ryanjulian ryanjulian removed this from the Week of May 21 milestone May 29, 2018
@ryanjulian ryanjulian unassigned ghost May 29, 2018
@ryanjulian
Owner Author

See rlworkgroup/garage#46
