
Body tracking - Joint information [question] #1414

Open
AswinkarthikeyenAK opened this issue Nov 12, 2020 · 14 comments

@AswinkarthikeyenAK

Hi,
I am using Ubuntu 18.04.
The joint frames shown in the Kinect k4abt_simple_3d_viewer seem to be different from the joint data shown in the documentation.
For instance, in the picture below, the frames of the right hand can be seen:
[image: screenshot from k4abt_simple_3d_viewer showing the right-hand joint axes]

In the body tracking joint information on the Microsoft website, the frames of the right hand are different from the data shown by k4abt_simple_3d_viewer:
[image: joint coordinate diagram from the documentation]
[image: joint coordinate diagram from the documentation]

If you compare the figures, looking specifically at the right hand: in the documentation Y points forward and Z points down, but in the data observed from k4abt_simple_3d_viewer for the right hand, Y and Z are different.

I am not sure if I understand this correctly. Can you please provide me with more information that would help me understand it better?

Thanks

@qm13 qm13 added the Triage Needed The Issue still needs to be reviewed by Azure Kinect team members. label Nov 12, 2020
@qm13 qm13 self-assigned this Nov 12, 2020
@qm13 qm13 added Ask a Question A question for the community Body Tracking Issue related to the Body Tracking SDK and removed Triage Needed The Issue still needs to be reviewed by Azure Kinect team members. labels Nov 12, 2020
@Chris45215

Seeing as someone has started the topic, it would be very helpful if the orientation image in the documentation could be updated so that it is more clear. Any axis that happens to be along the black line of the bone is hidden by the black line, which makes it a bit difficult to tell which joint that axis line is for. For example, in the torso, the X axes are covered by that black line, so it takes a moment to figure out that the X is pointing up, rather than downwards. This isn't too bad on the torso, but it is bad around the left shoulder because there are two "X"s between the figure's left clavicle and left shoulder, but I can't figure out what joint they are for.

I suggest fixing it by making the orientation markers bolder, so they can be seen even if they are 'behind' the bone.

@qm13 qm13 added Documentation This issue affects the documentation and removed Ask a Question A question for the community labels Nov 30, 2020
@qm13 qm13 added the Triage Approved The Issue has been approved by an Azure Kinect team member. label Mar 4, 2021
@qm13 qm13 added Investigating Dev team needs to Investigate and removed Triage Approved The Issue has been approved by an Azure Kinect team member. labels Mar 24, 2021
@diablodale

diablodale commented Jun 28, 2021

Hi @qm13. I'm blocked 🧱 on the lack of accurate joint orientation documentation. This issue and numerous others all highlight the lack of detail and correctness of the existing joint orientation doc at https://docs.microsoft.com/en-us/azure/kinect-dk/body-joints.

Here is a short list of errors and gaps; there may be more:

  • No clear definition of "axis orientation". Microsoft (an uber 1st party) is referring to an indistinct 3rd party's definition/docs. Using a Google search, I can find no "axis orientation" that applies to this topic. I can find the unrelated bind orientation, which depends on a puppet's relaxed rigging (an elephant is different from a human, which is different from a stork).
  • I and many other customers are not using Unity, Unreal, Maya, etc. Instead, we are building robots, retail systems, museum installations, etc. We have no knowledge of such 3rd party puppet tools, do not need such knowledge, and do not use them. We need definitions of coordinate systems independent of such puppet tools. And given those tools can disappear, we need to be able to read and use Microsoft's documentation, not a transitory 3rd party tool's docs.
  • All the sample code I've found is for these puppet tools. Therefore, customers not using puppets cannot learn from it.
  • The doc URL above conflicts with itself, first writing "all joint coordinate systems are absolute...relative to the depth...coordinate system", yet the next paragraph writes "joint coordinates are in axis orientation". Which is it? Is it always absolute relative to the sensor? Or is it the mystery "axis orientation" relative to...whatever axis orientation is?
  • No definition of the reference coordinate system. My goal is to have the same coordinate system as Kinect v1 and v2. However, I am not able to achieve that without clearly knowing the reference coordinate system, handedness, etc.
  • The picture in the doc is drawn unclearly, with axes overlapping each other, so it is not possible to discern which X label goes with which joint.
  • The picture in the doc is incorrect. There are joints in the picture that do not exist in the 32 joints returned by k4abt. For example, hip->foot is only 4 joints in k4abt; in this picture there are 6. There may be more joint errors.
  • The picture in the doc doesn't match the output of k4abt_simple_3d_viewer.exe. For example, the axes drawn on the various spinal column joints don't match between the picture and the app; often they are 180 degrees rotated.
  • Undefined left/right-handed coordinate system. The (unclear) diagram suggests the coordinate system is right-handed. However, 3rd party tools like Unity and Maya are left-handed coordinate systems. That means either the "Kinect is like the 3rd party system" claim is incorrect, or the diagram is incorrect. Handedness must match.

In #687 you wrote "You can implement a simple translation to covert to the legacy Kinect for Windows". This isn't true. No translation will transform an orientation from Azure Kinect -> Kinect v2. There might be a transform or a rotation, but never a translation. I request the sample C code to do such a transform. It should be simple, as you wrote. 😉

k4a_quaternion_t k4abt_orient_to_kfw2(k4a_quaternion_t k4a_quat) {
    // simple code
}

kfw2_pelvis = k4abt_orient_to_kfw2(skeleton.joints[K4ABT_JOINT_PELVIS].orientation);

The output of this function for the pelvis should be an almost identical quaternion to the one I would get if I used a Kinect for Windows (v2) and read the same root node, the pelvis. And this quaternion is in the same Kv2 camera space: https://docs.microsoft.com/en-us/previous-versions/windows/kinect/dn785530(v=ieb.10)#camera-space

I tried 13 permutations (out of the 432 possible) of wxyz in both +/-. There are too many permutations to try by brute force, and uncountable numbers of composite quat rotations. Sample code doesn't help, as it's all for puppet tools. And as noted above, the doc is unclear and in some cases wrong. 😥
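
For reference (not a solution): whatever the correct per-joint correction turns out to be, applying it reduces to a quaternion product. Below is a minimal sketch of that plumbing using the body tracking SDK's k4a_quaternion_t type; the correction value is a placeholder identity, NOT the actual k4abt -> KfW2 mapping, which is exactly the missing piece this issue asks Microsoft to document.

#include <k4abttypes.h>

// Hamilton product r = a * b for the SDK's quaternion type.
static k4a_quaternion_t quat_multiply(k4a_quaternion_t a, k4a_quaternion_t b) {
    k4a_quaternion_t r;
    r.wxyz.w = a.wxyz.w*b.wxyz.w - a.wxyz.x*b.wxyz.x - a.wxyz.y*b.wxyz.y - a.wxyz.z*b.wxyz.z;
    r.wxyz.x = a.wxyz.w*b.wxyz.x + a.wxyz.x*b.wxyz.w + a.wxyz.y*b.wxyz.z - a.wxyz.z*b.wxyz.y;
    r.wxyz.y = a.wxyz.w*b.wxyz.y - a.wxyz.x*b.wxyz.z + a.wxyz.y*b.wxyz.w + a.wxyz.z*b.wxyz.x;
    r.wxyz.z = a.wxyz.w*b.wxyz.z + a.wxyz.x*b.wxyz.y - a.wxyz.y*b.wxyz.x + a.wxyz.z*b.wxyz.w;
    return r;
}

// Placeholder correction (identity, w = 1). The real value -- possibly one
// per joint -- is the undocumented part.
static const k4a_quaternion_t g_corr = { { 1.f, 0.f, 0.f, 0.f } };  // w, x, y, z

k4a_quaternion_t k4abt_orient_to_kfw2(k4a_quaternion_t k4a_quat) {
    return quat_multiply(g_corr, k4a_quat);
}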

@knat

knat commented Jul 1, 2021

@diablodale, you can rotate the Kinect quaternion to get what you want. To rotate a quaternion, multiply it by another quaternion.

Demo using DirectXMath:

//XMQuaternionRotationRollPitchYaw(XMConvertToRadians(-90/*90, 0, -90, try :)*/), XMConvertToRadians(90/*90, 0, -90, try :)*/), XMConvertToRadians(0/*90, 0, -90, try :)*/)) = { -0.5f, 0.5f, 0.5f, 0.5f };
static const XMVECTORF32 g_rotSpine = { -0.5f, 0.5f, 0.5f, 0.5f };//to get mirrored(think kinect as a mirror), invert the sign of w and one of x, y or z: { -0.5f, 0.5f, -0.5f, -0.5f }

static const XMVECTORF32 g_rotLeftArm ={/*try :)*/};
static const XMVECTORF32 g_rotLeftHand ={/*try :)*/};
static const XMVECTORF32 g_rotLeftHip = {/*try :)*/};
static const XMVECTORF32 g_rotLeftKnee = {/*try :)*/};
static const XMVECTORF32 g_rotLeftAnkle = {/*try :)*/};
static const XMVECTORF32 g_rotLeftFoot = {/*try :)*/};

static const XMVECTORF32 g_rotRightArm ={/*try :)*/};
...

inline static XMVECTOR XM_CALLCONV GetQuaternion(k4abt_joint_t const& joint) {
    auto const& qua = joint.orientation;
    return XMVectorSet(qua.wxyz.x, qua.wxyz.y, qua.wxyz.z, qua.wxyz.w);
}

XMVECTOR quaPELVIS = XMQuaternionMultiply(g_rotSpine, GetQuaternion(skeleton.joints[K4ABT_JOINT_PELVIS]));
XMVECTOR quaSPINE_NAVEL = XMQuaternionMultiply(g_rotSpine, GetQuaternion(skeleton.joints[K4ABT_JOINT_SPINE_NAVEL]));
...
XMVECTOR quaCLAVICLE_LEFT = XMQuaternionMultiply(g_rotLeftArm, GetQuaternion(skeleton.joints[K4ABT_JOINT_CLAVICLE_LEFT]));
XMVECTOR quaSHOULDER_LEFT = XMQuaternionMultiply(g_rotLeftArm, GetQuaternion(skeleton.joints[K4ABT_JOINT_SHOULDER_LEFT]));
...

@diablodale

diablodale commented Jul 1, 2021

Hi. Unfortunately, rotating the quat is not enough. I've brute-forced one joint...the waist. I had to permute the quat's 3 imaginary components and rotate it.

Rotation only moves the axes so they align. What is missing is the rotations around those axes. For example, the waist joint on the Azure Kinect rotates around the Z axis and X follows the bone. Kinect v1 and v2 rotate around the X axis and Y follows the bone. This makes it necessary to both permute and rotate.

I have to understand the global coordinate system (the well-known Kinect v1 and v2) and the local coordinate system (the mystery Azure Kinect). And from what I have discovered so far, the Azure Kinect local coordinate system changes for the spine, each arm, and each leg. I can already see that the permutation of quat imaginaries and the rotation change between the waist, left hip, and right hip. I can't reuse my waist code. I have to brute force each joint again and have a giant switch() so I can permute the imaginaries.
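
To illustrate the shape of that (not the actual values), something like the following per-joint switch(), using DirectXMath like the demo above. The swizzle and the fixed rotation in the one filled-in case are placeholders, not the values actually derived for any joint.

#include <DirectXMath.h>
#include <k4abttypes.h>
using namespace DirectX;

// Per-joint remap: permute/sign-flip the imaginary components, then apply a
// fixed rotation. The PELVIS case below is a PLACEHOLDER example only.
static XMVECTOR XM_CALLCONV RemapJoint(k4abt_joint_id_t id, k4a_quaternion_t const& q) {
    XMVECTOR swizzled, corr;
    switch (id) {
    case K4ABT_JOINT_PELVIS:
        // placeholder permutation of (x, y, z) -> (z, -x, y), w unchanged
        swizzled = XMVectorSet(q.wxyz.z, -q.wxyz.x, q.wxyz.y, q.wxyz.w);
        // placeholder fixed rotation: 90 degrees about X
        corr = XMQuaternionRotationRollPitchYaw(XMConvertToRadians(90.f), 0.f, 0.f);
        break;
    // ... a case per joint: both the permutation and the rotation differ per joint
    default:
        swizzled = XMVectorSet(q.wxyz.x, q.wxyz.y, q.wxyz.z, q.wxyz.w);
        corr = XMQuaternionIdentity();
        break;
    }
    return XMQuaternionMultiply(corr, swizzled);  // same call pattern as the demo above
}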

I've been coding Kinect solutions for 8 years. I can do it when I have enough documentation. Otherwise, I'm reverse-engineering and brute-forcing it. 😢

The Azure Kinect team may not know how to do this. They used synthetic data to train their neural net. Puppet data in -> puppet data out, so they never needed to analyze it and may treat it as a black box. This is very typical of neural net solutions. I could be wrong here, but it matches a few related posts and the years-long delay in documenting it.

@knat

knat commented Jul 2, 2021

Hi. We can get the following result after rotating the Kinect quaternion: if the body is in the rest pose (T-pose), EVERY bone orientation is +Y:
[image: T-pose skeleton, every bone orientation along +Y]
Call it the rest-quaternion.

If the body is not in the rest pose, the bone orientation is:
[image: skeleton out of the rest pose]
Call it the active-quaternion.

You can convert quaternion to Euler angles: https://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles
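
For completeness, a minimal sketch of that conversion (the standard formulas from the page linked above; adapt the axis convention to your needs):

#include <cmath>

struct EulerAngles { float roll, pitch, yaw; };  // radians

// Quaternion (w, x, y, z) -> Euler angles, per the Wikipedia article above.
static EulerAngles QuatToEuler(float w, float x, float y, float z) {
    EulerAngles e;
    // roll (rotation about X)
    e.roll = std::atan2(2.f * (w * x + y * z), 1.f - 2.f * (x * x + y * y));
    // pitch (rotation about Y), clamped to +/-90 degrees at the poles
    float sinp = 2.f * (w * y - z * x);
    e.pitch = std::fabs(sinp) >= 1.f ? std::copysign(1.57079633f, sinp) : std::asin(sinp);
    // yaw (rotation about Z)
    e.yaw = std::atan2(2.f * (w * z + x * y), 1.f - 2.f * (y * y + z * z));
    return e;
}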

@diablodale

Thanks for your post. I can see you want to assist, but I'm unclear what you are communicating in your last post. 🤷‍♀️🙂 Kinect v1 and v2 "TPose" rotate joints on different axes than Kinect v3 (aka Azure Kinect).
Kinect v1 and v2 are as you drew immediately above. Bone on Y. Rotation around X.
Kinect v3 is probably bone on X. Probably rotation around Z. https://docs.microsoft.com/en-us/azure/kinect-dk/body-joints

My approach is brute force. I have 96 possibilities of quat coefficient permutations and sign changes. I prioritize sign changes using the Levi-Civita symbol first, then fall back to the remaining possibilities. I have no need to convert to Euler angles. That would add computation for no value, and it introduces rotation-order dependence and potentially gimbal lock. I think I can stay within quats and avoid changing rotation methods just to reverse-engineer. 🤞
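
For anyone following along, one way to enumerate those candidates is sketched below: permuting the three imaginary components and flipping their signs gives 6 * 8 = 48 remappings, and also allowing a sign flip on w gives 96. Each candidate still has to be rendered and checked visually, which is the grueling part.

#include <algorithm>
#include <array>
#include <vector>

struct Quat { float w, x, y, z; };

// Generate the 96 candidate remappings of a quaternion's components.
static std::vector<Quat> Candidates(const Quat& q) {
    std::vector<Quat> out;
    std::array<int, 3> idx = { 0, 1, 2 };          // which source imaginary feeds x, y, z
    const float im[3] = { q.x, q.y, q.z };
    do {
        for (int signs = 0; signs < 8; ++signs) {      // sign pattern for x, y, z
            for (int wsign = 0; wsign < 2; ++wsign) {  // optional sign flip on w
                Quat c;
                c.w = wsign ? -q.w : q.w;
                c.x = (signs & 1 ? -1.f : 1.f) * im[idx[0]];
                c.y = (signs & 2 ? -1.f : 1.f) * im[idx[1]];
                c.z = (signs & 4 ? -1.f : 1.f) * im[idx[2]];
                out.push_back(c);
            }
        }
    } while (std::next_permutation(idx.begin(), idx.end()));
    return out;  // 6 permutations * 8 sign patterns * 2 w signs = 96 candidates
}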

@diablodale

diablodale commented Jul 2, 2021

Other inconsistencies I've discovered:

  • Kinect Azure swaps the parent-child relationship for joint orientation. Kinect v1 and v2 rotations describe the joint->parent bone. Kinect v3 (Azure) describes the joint->child bone.
  • The Kinect Azure left and right hip joints do not follow the otherwise consistent rule that the bone from a joint to its parent joint lies on the X axis. Instead, the left and right hip joints have this bone on the Z axis. This is seen both in the current doc's diagram and in real-world data I'm rendering, and it relates to the bullet above.
  • Azure Kinect provides no rotation data to describe the bones between the pelvis and the two hips. Those two rotations and bones need to be given a fixed value by the app itself (see the sketch after this list).
  • The Azure Kinect ankle and foot are actually orienting towards a missing "heel" ("ankle orientation seems to actually be a missing 'heel' joint" #1637) -- not along the bone between the ankle and foot joints.

It is these types of technical details that are needed in the documentation 😟🤹‍♀️
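
As a trivial illustration of the third bullet, the app ends up hard-coding something like the following for the pelvis -> hip bones. The identity values here are placeholders, not a recommendation; pick whatever matches your rig.

#include <k4abttypes.h>

// k4abt supplies no orientation for the pelvis -> hip bones, so the app must
// choose fixed values itself.
static const k4a_quaternion_t g_fixedPelvisToHipLeft  = { { 1.f, 0.f, 0.f, 0.f } };  // w, x, y, z
static const k4a_quaternion_t g_fixedPelvisToHipRight = { { 1.f, 0.f, 0.f, 0.f } };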

@qm13 qm13 removed the Investigating Dev team needs to Investigate label Aug 16, 2021
@danzeeeman

[quotes @diablodale's earlier comment, "Hi. Unfortunately, rotating the quat is not enough...", in full]

Did you ever solve this? I'm running into the same issues at the moment.

@diablodale

Yes. I brute-forced it over almost 3 weeks. It was tedious, grueling grunt work caused by Microsoft's ongoing lack of documentation and API specs. 😕

A visual test harness, joint by joint: establishing the base coordinate system for each joint (it changes), visually testing all quat permutation rotations for each joint, moving my own body for each permutation and looking at the drawn effect to see the results, and iterating more than once to get everything correct over the range of possible rotations. Sometimes pain in my left shoulder, as this kind of repetitive movement is unnatural.

If anyone is interested, given the significant time and physical effort involved, I will gladly share my results for a fee + nda/license to not share the results further. Price will scale exponentially based on if the interested party makes depth sensors and/or operating systems. 😊

@danzeeeman

[quotes @diablodale's previous comment in full]

can I buy you a beer?

@diablodale

I do like IPA 😉. It would be 3 weeks * 8 hours/day * my hourly rate of IPAs 🍻 More if you make sensors/os.
I understand your interest. If you want to discuss specific terms, hit me up via email as listed in my profile.
Naturally, Microsoft could generate this needed info dramatically more cheaply, given they own the spec and the code.

@danzeeeman

Can someone at Microsoft address this so we don't have to pay highway robbery to some disgruntled German dev....

@Chris45215

Chris45215 commented Sep 15, 2021 via email

@danzeeeman

Right now, when I compute the relative joint orientations and apply them to parented nodes, things are just wrong...

I have to flip the pelvis 90° about X and 90° about Z to get it upright.
I have to flip the hips 180° on an axis.
I'm stuck on the clavicles, trying to get the shoulders in the right place. When I ignore the nesting and just do everything in world position/orientation, everything works fine, but I'm trying to use the skeleton data as a parented skeleton...
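
For what it's worth, since k4abt reports orientations in camera (global) space, the usual way to get a parent-relative orientation for a parented node is q_local = inverse(q_parent) * q_child. A minimal DirectXMath sketch for the left shoulder (whose parent in the documented joint hierarchy is the left clavicle), assuming the usual parent-then-local composition:

#include <DirectXMath.h>
#include <k4abttypes.h>
using namespace DirectX;

inline XMVECTOR XM_CALLCONV ToXM(k4a_quaternion_t const& q) {
    return XMVectorSet(q.wxyz.x, q.wxyz.y, q.wxyz.z, q.wxyz.w);
}

// q_local = q_parent^-1 * q_child. Note XMQuaternionMultiply(Q1, Q2) returns
// Q2 * Q1 (rotation Q1 applied first), so the child goes in the first argument.
XMVECTOR XM_CALLCONV LocalShoulderLeft(k4abt_skeleton_t const& s) {
    XMVECTOR parent = ToXM(s.joints[K4ABT_JOINT_CLAVICLE_LEFT].orientation);
    XMVECTOR child  = ToXM(s.joints[K4ABT_JOINT_SHOULDER_LEFT].orientation);
    return XMQuaternionMultiply(child, XMQuaternionInverse(parent));
}

Whether that alone fixes the pelvis/hip flips described above is a separate question; it only removes the global-vs-local mismatch.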
