Manual Control of Robot Arm By Human/User #8
Comments
Hi @Vidharth, yes, you can manually control the robot by passing your own action into the `step()` method of the environment:

```python
import gym
import panda_gym
env = gym.make('PandaPickAndPlace-v1')
env.reset()
env.step([1, 0, 0, 0]) # Go FORWARD
env.step([-1, 0, 0, 0]) # Go BACKWARD
env.step([0, 1, 0, 0]) # Go LEFT
env.step([0, -1, 0, 0]) # Go RIGHT
env.step([0, 0, 1, 0]) # Go UP
env.step([0, 0, -1, 0]) # Go DOWN
env.step([0, 0, 0, 1]) # OPEN fingers
env.step([0, 0, 0, -1]) # CLOSE fingers
```

Another example, where the action depends on the observation:

```python
import gym
import panda_gym
env = gym.make('PandaPush-v1')
obs = env.reset()
# Get specific observations
ee_position = obs['observation'][0:3]
object_position = obs['observation'][7:10]
# Choose action according to object position and end-effector position
action = object_position - ee_position # move in the direction of the object
# Step
obs, reward, done, info = env.step(action)
```

Each environment has its own observation and action space; read the documentation for more details. You should be able to adapt this code to do keyboard control, if that's what you want to do.
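For example, here is a minimal sketch of keyboard-driven control that also records the (observation, action) pairs of a demonstration. The key-to-action mapping and the use of Python's built-in `input()` are my own choices for illustration, not part of panda-gym:

```python
import gym
import panda_gym

# Hypothetical key-to-action mapping for PandaPickAndPlace-v1 (4-dim action):
# w/s: forward/backward, a/d: left/right, q/e: up/down, o/c: open/close fingers
KEY_TO_ACTION = {
    'w': [1, 0, 0, 0],
    's': [-1, 0, 0, 0],
    'a': [0, 1, 0, 0],
    'd': [0, -1, 0, 0],
    'q': [0, 0, 1, 0],
    'e': [0, 0, -1, 0],
    'o': [0, 0, 0, 1],
    'c': [0, 0, 0, -1],
}

env = gym.make('PandaPickAndPlace-v1', render=True)
obs = env.reset()
demonstration = []  # recorded (observation, action) pairs

done = False
while not done:
    key = input("Action key (w/s/a/d/q/e/o/c, anything else to stop): ")
    if key not in KEY_TO_ACTION:
        break
    action = KEY_TO_ACTION[key]
    demonstration.append((obs, action))
    obs, reward, done, info = env.step(action)

env.close()
print(f"Recorded {len(demonstration)} steps")
```

Reading one key per step with `input()` is crude (the simulation pauses while waiting for input), but it keeps the example dependency-free; a real teleoperation setup would poll keyboard events instead.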
Hey @qgallouedec, if the end-effector position is at indices [0:3] of the observation and the object position at [7:10] for the PandaPush and PandaSlide tasks, where are the end-effector position, the object position, and the fingers width in the PandaPickAndPlace and PandaStack tasks?
You can infer it the same way as in my previous comment.
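If it helps, here is a rough sketch of how you could inspect the observation yourself to locate those indices. The exact layout depends on the environment and version, so treat any slicing you derive from it as something to verify rather than a documented guarantee:

```python
import gym
import panda_gym

env = gym.make('PandaPickAndPlace-v1')
obs = env.reset()

# Print the full observation vector and its length so you can locate
# the end-effector position, fingers width and object position by eye.
print(obs['observation'].shape)
print(obs['observation'])

# Apply a pure "open fingers" action and compare: the entry that grows
# is the fingers width, and the first three entries that change with
# translation actions are the end-effector position.
obs, reward, done, info = env.step([0, 0, 0, 1])
print(obs['observation'])
```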
Is it possible, or is there some way, to control the robot manually in the panda-gym environment in order to capture recordings of demonstrations?
By manually I mean: instead of an agent predicting actions at every step, is there some way for a human/user to control the robot using keyboard mappings or something like that?