
"Could you please explain what each dimension of the state vector inside obs in the dataset represents?" #107

Closed
lijinming2018 opened this issue Jul 12, 2023 · 7 comments

Comments

@lijinming2018

lijinming2018 commented Jul 12, 2023

Could you please explain what each dimension of the state vector inside obs in the dataset represents? For example, what does each of the 38 dimensions in the state of the PickCube environment correspond to?

@xuanlinli17
Collaborator

xuanlinli17 commented Jul 12, 2023

Please see the get_obs function in the parent env (https://github.com/haosulab/ManiSkill2/blob/fc08823bf96791946591a508f336d724fa7cad26/mani_skill2/envs/sapien_env.py#L255) and in the child env for more details. The state contains 2 parts: get_obs_agent() (agent proprioceptive states) and get_obs_extra() (other environment-specific states). Different environments have different state dimensions. The state observation mode has more state dimensions than the visual (rgbd/pointcloud) observation modes, since the former contains ground truth object pose information in get_obs_extra(), while the latter does not.

If you are not using a wrapper (e.g., ManiSkill2-Learn), then under a visual observation mode (rgbd or pointcloud) you can print out env.get_obs(), and the dictionary keys will tell you what each entry means.

If you are using the state observation mode, then in particular, for PickCube-v0, the state dimension is 51, which contains:

(1) From get_obs_agent() (see https://github.com/haosulab/ManiSkill2/blob/fc08823bf96791946591a508f336d724fa7cad26/mani_skill2/agents/base_agent.py#L170); these dimensions are robot-specific:
- agent qpos: state[:9]
- agent qvel: state[9:18]
- agent base pose: state[18:25]
- controller state: (empty)

(2) From get_obs_extra():
- agent tcp pose: state[25:32]
- goal pos: state[32:35]
- tcp-to-goal pos: state[35:38]
- cube pose: state[38:45]
- tcp-to-obj pos: state[45:48]
- obj-to-goal pos: state[48:51]

If you are using the ManiSkill2-Learn wrapper with any of the visual observation modes (which I guess is your case), see https://github.com/haosulab/ManiSkill2-Learn/blob/83dfe26c73b6ce6b0388a0fa07493f340e36dd44/maniskill2_learn/env/wrappers.py#L235 for the environment wrapper. The 38-dimensional "state" output from the wrapper contains:
- agent qpos: state[:9]
- agent qvel: state[9:18]
- agent base_pose: state[18:25]
- agent tcp_pose: state[25:32]
- goal pos: state[32:35]
- tcp_to_goal_pos: state[35:38]
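To make the breakdowns above concrete, here is a small sketch (not part of ManiSkill2 itself) of a helper, hypothetically named `unpack_pickcube_state`, that slices the flat state vector into named components. The slice boundaries assume the Panda robot in PickCube-v0 as described above; since the first 38 entries of the 51-dim state coincide with the wrapper's 38-dim output, the same function handles both.

```python
import numpy as np

def unpack_pickcube_state(state):
    """Slice a PickCube-v0 flat state vector into named components.

    Accepts either the 51-dim "state" observation or the 38-dim
    ManiSkill2-Learn wrapper output (their first 38 entries coincide).
    """
    state = np.asarray(state)
    parts = {
        "qpos":            state[:9],     # 7 arm joints + 2 gripper fingers
        "qvel":            state[9:18],   # corresponding joint velocities
        "base_pose":       state[18:25],  # xyz position + quaternion
        "tcp_pose":        state[25:32],  # xyz position + quaternion
        "goal_pos":        state[32:35],
        "tcp_to_goal_pos": state[35:38],
    }
    if state.shape[0] == 51:
        # Full state observation mode adds ground-truth object info
        parts.update({
            "obj_pose":        state[38:45],  # xyz position + quaternion
            "tcp_to_obj_pos":  state[45:48],
            "obj_to_goal_pos": state[48:51],
        })
    return parts
```

Other environments (and other robots) lay out their states differently, so treat this as a template rather than a general-purpose utility.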

@lijinming2018
Author

Why does "agent qpos" have 9 dimensions, and what does each dimension represent?

@xuanlinli17
Collaborator

The first 7 dimensions are the Panda arm joint positions; the last 2 dimensions are the Panda gripper finger positions.

@lijinming2018
Author

What is the meaning of base_pose and tcp_pose?

@xuanlinli17
Collaborator

xuanlinli17 commented Jul 24, 2023

Pose = concat(xyz position, rotation); rotation is represented as a quaternion in our state space.
base_pose = pose of the robot base
tcp_pose = pose of the robot tool center point (the midpoint between the two gripper fingers)
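A minimal sketch of splitting a 7-dim pose (as it appears in the state vector) into its position and quaternion parts. The helper name `split_pose` is hypothetical; the (w, x, y, z) quaternion ordering is an assumption based on SAPIEN's convention, which you should verify for your version.

```python
import numpy as np

def split_pose(pose7):
    """Split a 7-dim pose into xyz position and quaternion.

    Assumption: the quaternion is stored in (w, x, y, z) order,
    following SAPIEN's convention.
    """
    pose7 = np.asarray(pose7)
    xyz = pose7[:3]   # position
    quat = pose7[3:7]  # rotation as a quaternion
    return xyz, quat

# Example: identity rotation at 0.1 m above the origin
xyz, quat = split_pose([0.0, 0.0, 0.1, 1.0, 0.0, 0.0, 0.0])
```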

@MDMLab223

May I know the respective value ranges for these 38 dimensions?

@xuanlinli17
Collaborator

States corresponding to robot qpos are bounded by joint angle ranges. Other dims are unbounded.
