# Locomotion task library

This package contains reusable components for defining control tasks related to locomotion. New users are encouraged to start by browsing the `examples/` subdirectory, which contains preconfigured RL environments associated with various research papers. These examples can serve as starting points, or can be customized to design new environments using the components available in this library.

## Terminology

This library facilitates the creation of environments that require walkers to perform a task in an arena.

- **Walkers** are detached bodies that can move around in the environment.

- **Arenas** are the surroundings in which walkers and possibly other objects exist.

- **Tasks** define the observations and rewards passed from the "environment" to the "agent", along with runtime details such as initialization and termination logic.
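
The sketch below shows how these pieces compose into an RL environment, using the CMU humanoid walker and the corridor arena and task from this library. It is a minimal illustration, assuming the default constructor arguments of `EmptyCorridor` and `RunThroughCorridor` are acceptable; the preconfigured environments in `examples/` pass many more options (spawn positions, corridor geometry, timesteps).

```python
from dm_control import composer
from dm_control.locomotion.arenas import corridors as corridor_arenas
from dm_control.locomotion.tasks import corridors as corridor_tasks
from dm_control.locomotion.walkers import cmu_humanoid

# A walker: a detached body that the agent controls.
walker = cmu_humanoid.CMUHumanoid()

# An arena: the surroundings in which the walker exists.
arena = corridor_arenas.EmptyCorridor()

# A task: wires the walker into the arena and defines observations,
# rewards, initialization and termination logic.
task = corridor_tasks.RunThroughCorridor(walker=walker, arena=arena)

# Wrapping the task in a `composer.Environment` yields an RL environment.
env = composer.Environment(task=task, time_limit=30)
```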

## Installation and requirements

See the documentation for `dm_control`.

## Quickstart

```python
from dm_control import composer
from dm_control.locomotion.examples import basic_cmu_2019
import numpy as np

# Build an example environment.
env = basic_cmu_2019.cmu_humanoid_run_walls()

# Get the `action_spec` describing the control inputs.
action_spec = env.action_spec()

# Step through the environment for one episode with random actions.
time_step = env.reset()
while not time_step.last():
  action = np.random.uniform(action_spec.minimum, action_spec.maximum,
                             size=action_spec.shape)
  time_step = env.step(action)
  print("reward = {}, discount = {}, observations = {}.".format(
      time_step.reward, time_step.discount, time_step.observation))
```
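
Because the environment implements the `dm_env` interface, the observations can be inspected in the same way as the actions. A small sketch, assuming the specs returned by `observation_spec()` expose `shape` and `dtype` as `dm_env` array specs do:

```python
# List the names, shapes and dtypes of this environment's observations.
for name, spec in env.observation_spec().items():
  print(name, spec.shape, spec.dtype)
```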

`dm_control.viewer` can also be used to visualize and interact with the environment, e.g.:

```python
from dm_control import viewer

viewer.launch(environment_loader=basic_cmu_2019.cmu_humanoid_run_walls)
```
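
`viewer.launch` also accepts a `policy` argument: a callable mapping the current `TimeStep` to an action. As a sketch, the random-action loop from the Quickstart can be repackaged as a policy; the environment constructed below is used only to read the action spec:

```python
import numpy as np
from dm_control import viewer
from dm_control.locomotion.examples import basic_cmu_2019

action_spec = basic_cmu_2019.cmu_humanoid_run_walls().action_spec()

def random_policy(time_step):
  del time_step  # Unused: actions are sampled independently of observations.
  return np.random.uniform(action_spec.minimum, action_spec.maximum,
                           size=action_spec.shape)

viewer.launch(environment_loader=basic_cmu_2019.cmu_humanoid_run_walls,
              policy=random_policy)
```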

## Publications

This library contains environments that were adapted from several research papers. Relevant references include:
