
Questions about dataset setup #1

Closed
MathLens opened this issue Feb 25, 2021 · 10 comments
@MathLens

Hello, thanks for publishing the dataset!

I wonder if the ground truth trajectory is gravity-aligned (i.e., the gravity direction is -z).
And is there significant motion along the gravity direction? I saw about 1 m of change in z in the sequence building1/train/0.feather.

@scottsun95
Collaborator

The ground truth provided is relative to the initialization, which is consistent for trajectories in a given building. We tried to keep the initialization as close as possible to a gravity-aligned frame, but there is sometimes a small rotational offset. These offsets can be calibrated out to a gravity-aligned frame, but they are also small enough that they don't contribute appreciable differences to the x-y trajectory positions, which is what we were mainly concerned about.

There is an initial jump in the z position because we place the data collection rig close to ground level at the start for alignment purposes and then pick it up. There is no other significant motion beyond how the subjects carried the rig.
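A quick way to see this jump is to plot the z component of the ground truth over time. Below is a minimal sketch, assuming the ground-truth position is exposed as processedPosX/Y/Z columns (the processedPos naming follows a later comment in this thread; check df.columns for your file):

import pandas as pd
import matplotlib.pyplot as plt

# Load one sequence; the ground-truth z column name is an assumption.
df = pd.read_feather("building1/train/0.feather")

# The jump should appear near the start, when the rig is picked up
# from ground level.
plt.plot(df["processedPosZ"].to_numpy())
plt.xlabel("sample index")
plt.ylabel("z position (m)")
plt.show()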

@MathLens
Author

I see. Thanks!

@MathLens
Author

MathLens commented Mar 3, 2021

@scottsun95 Sorry, I have more questions.

  1. Following my previous question, I think the global frame of the ground truth depends on the LiDAR-Visual-Inertial SLAM system you are using. Do you know how it defines its global frame? Does it initialize the global frame to be gravity-aligned, or does it just set the first orientation to the identity? If the former, then I believe the ground truth is always gravity-aligned no matter what the trajectory is.

  2. Are the gyro and accelerometer biases of the IMU available? Or have the biases already been subtracted from the IMU data?

  3. There is a transformation between the phone's frame and the rig's frame. Do you have the relative rotation between them, or do users need to calibrate the extrinsics themselves?

MathLens reopened this Mar 3, 2021
@scottsun95
Collaborator

  1. We use a system called the Kaarta Stencil, which defines the initialization pose as the global frame. Unfortunately, it does not perform gravity alignment for us.

  2. Most trajectories contain a period in the beginning where the rig is static, which can be used to estimate the gyro biases. The data, as presented, are the raw sensor readings with no additional compensation or processing. We didn't find that applying an initial correction for these bias terms made much difference to our model (a short bias-estimation sketch follows this list).

  3. Regarding the transformation between the frames, the user would need to calibrate the extrinsics. We ran Kalibr on our data to obtain this rotation matrix for converting from the phone frame to the ground truth frame:

[[-0.999870,  0.006520, -0.014725],
 [-0.006843, -0.999734,  0.022022],
 [-0.014578,  0.022120,  0.999649]]

Approximately, this alignment can be visualized as phone x -> global -x, phone y -> global -y, phone z -> global z.
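To make points 2 and 3 concrete, here is a minimal sketch that estimates the gyro bias over an assumed static window at the start of a sequence and applies the rotation above to bring phone-frame gyro readings into the rig frame. The iphoneGyroX/Y/Z column names and the 1000-sample window length are assumptions (only the Acc column names are confirmed elsewhere in this thread):

import numpy as np
import pandas as pd

# Kalibr extrinsic rotation from the comment above (phone -> rig frame).
R_PHONE_TO_RIG = np.array([
    [-0.999870,  0.006520, -0.014725],
    [-0.006843, -0.999734,  0.022022],
    [-0.014578,  0.022120,  0.999649],
])

df = pd.read_feather("building1/train/0.feather")

# Estimate the gyro bias as the mean over an initial window where the
# rig is assumed static; the window length here is a guess.
static = df.iloc[:1000]
gyro_bias = static[["iphoneGyroX", "iphoneGyroY", "iphoneGyroZ"]].mean().to_numpy()

# Subtract the bias, then rotate each sample into the rig frame.
phone_gyro = df[["iphoneGyroX", "iphoneGyroY", "iphoneGyroZ"]].to_numpy() - gyro_bias
rig_gyro = phone_gyro @ R_PHONE_TO_RIG.T  # rows are vectors, so multiply by R^T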

@MathLens
Author

MathLens commented Mar 9, 2021

For 3, I guess you mean this is the rotation matrix that converts a vector from the phone's frame to the rig's frame? (In this question I was not asking about the global frame defined by the SLAM ground truth.)

And do you have the position extrinsics between the phone and the rig? You mentioned in the dataset description that the offset is not large, but I found some data where the phone's accelerometer readings differ a lot from the rig's.

For building1/unknown/8.feather, I ran

import pandas as pd

df = pd.read_feather("8.feather")
df[['stencilAccX', 'stencilAccY', 'stencilAccZ', 'iphoneAccX', 'iphoneAccY', 'iphoneAccZ']]

and got

       stencilAccX  stencilAccY  stencilAccZ  iphoneAccX  iphoneAccY  iphoneAccZ
0        -2.636429     0.372649     8.562832   -0.265137    0.013412   -0.916641
1        -2.505764     0.346467     8.596544   -0.265136    0.013412   -0.916641
2        -2.304206     0.433652     8.656816   -0.247406    0.010956   -0.915405
3        -2.021069     0.539020     8.820724   -0.247406    0.010956   -0.915406
4        -1.855819     0.592058     8.881533   -0.242020    0.018982   -0.937683
...            ...          ...          ...         ...         ...         ...
56837    -1.044562     0.711571     8.163866   -0.175450    0.020926   -0.873350
56838    -0.933573     0.596143     8.093647   -0.180524    0.007109   -0.851277
56839    -0.912070     0.467795     8.179072   -0.186567   -0.006838   -0.843655
56840    -0.870791     0.452930     8.375129   -0.189418   -0.015101   -0.859506
56841    -0.809090     0.465006     8.621552   -0.190303   -0.015969   -0.893403

Do you have any idea why the norm of the phone's accelerations is small while the rig's is about 8-10?

@DennisMelamed
Collaborator

For the phone vs. stencil acceleration: the phone reports acceleration in Gs (1 G ≈ 9.8 m/s^2), while the stencil reports values in m/s^2. Multiplying the phone accelerations by 9.8 should align the norms.
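For reference, a minimal sketch of the conversion (column names taken from the snippet above; 9.80665 is standard gravity, and the phone's internal constant may differ slightly):

import numpy as np
import pandas as pd

G = 9.80665  # m/s^2 per G; the phone may use a slightly different constant

df = pd.read_feather("8.feather")
phone_acc = df[["iphoneAccX", "iphoneAccY", "iphoneAccZ"]].to_numpy() * G
stencil_acc = df[["stencilAccX", "stencilAccY", "stencilAccZ"]].to_numpy()

# With mostly-walking motion, both norms should hover near 9.8 m/s^2.
print(np.linalg.norm(phone_acc, axis=1).mean())
print(np.linalg.norm(stencil_acc, axis=1).mean())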

@MathLens
Author

MathLens commented Mar 9, 2021

@DennisMelamed Thanks for the explanation of the iPhone's scaling!
@scottsun95 I still have some concerns about the gravity alignment. For building1/unknown/8.feather, I plotted the trajectory, and it seems that:

  1. The global frame is far from gravity-aligned, which makes the x-y position displacements not perpendicular to gravity.
  2. It also seems that the person was walking the same route for several loops, but the trajectories of the loops do not align well. Does this suggest that the SLAM has large errors here, or was the person actually moving along such a trajectory?
[image: plot of the trajectory for building1/unknown/8.feather]

@scottsun95
Collaborator

Yes, there appears to be some drift in the z axis for this particular trajectory as a result of the SLAM rig. The scale on this plot seems to greatly exaggerate the drift, though: it looks like there is about 1 m of vertical drift for a trajectory that spans over 20 m horizontally. The resulting angle from taking the arctan is only ~3 degrees.
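As a quick check of that arithmetic:

import math

# ~1 m of vertical drift over a ~20 m horizontal span
print(math.degrees(math.atan2(1.0, 20.0)))  # ~2.9 degrees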

@MathLens
Author

Makes sense. Thanks!

@dream-will

dream-will commented Aug 9, 2021

> 1. We use a system called the Kaarta Stencil, which defines the initialization pose as the global frame. Unfortunately, it does not perform gravity alignment for us.
> 2. Most trajectories contain a period in the beginning where the rig is static, which can be used to estimate the gyro biases. The data, as presented, are the raw sensor readings with no additional compensation or processing. We didn't find that applying an initial correction for these bias terms made much difference to our model.
> 3. Regarding the transformation between the frames, the user would need to calibrate the extrinsics. We ran Kalibr on our data to obtain this rotation matrix for converting from the phone frame to the ground truth frame:
>
> [[-0.999870,  0.006520, -0.014725],
>  [-0.006843, -0.999734,  0.022022],
>  [-0.014578,  0.022120,  0.999649]]
>
> Approximately, this alignment can be visualized as phone x -> global -x, phone y -> global -y, phone z -> global z.

@scottsun95 Hello, I find that this rotation matrix does transform phoneGyro -> stencilGyro, but it does not transform phoneAcc -> stencilAcc, so I'm confused about the rotation matrix between the phone and the stencil. Also, can the stencil orient[W,X,Y,Z] and processedPos[X,Y,Z] be used directly as the phone's global orientation and position, without any transform?
