
# Sensor Calibration Tools

Sensor calibration can be split into two categories: intrinsic sensor calibration and extrinsic sensor calibration. In our calibration tools, we implement different methods for both categories.

## Extrinsic Calibration

Extrinsic calibration refers to estimating the relative poses among sensors, which are provided to ROS 2 nodes via the TF interface. Our calibration tools assume a TF diagram like the one presented in Figure 1.

Figure 1. Extrinsic calibration diagram

In our design, base_link corresponds to the projection of the vehicle's rear axle onto the ground. Each vehicle may possess one or more sensor kits, a sensor kit being a physical location on the vehicle where sensors are mounted. For example, a normal car would possess one sensor kit mounted on its top, whereas larger vehicles (e.g., a bus) would have several sensor kits distributed along the vehicle.

Although in the diagram presented in Figure 1 the TFs from the base to the sensor kits and from each kit to its sensors are the ones provided, these are not what we calibrate directly. Instead, we calibrate from the base_link to a particular lidar, then from that lidar to the rest of the lidars, and finally from lidars to cameras. To comply with the diagram from Figure 1, the final output of the calibration process becomes:

$T(\text{sensor\_kit\_base\_link}, \text{lidar0\_base\_link}) = T(\text{base\_link}, \text{sensor\_kit\_base\_link})^{-1} \times T(\text{base\_link}, \text{lidar0\_base\_link})$

$T(\text{sensor\_kit\_base\_link}, \text{lidar1\_base\_link}) = T(\text{sensor\_kit\_base\_link}, \text{lidar0\_base\_link}) \times T(\text{lidar0\_base\_link}, \text{lidar1\_base\_link})$

$T(\text{sensor\_kit\_base\_link}, \text{camera0/camera\_link}) = T(\text{sensor\_kit\_base\_link}, \text{lidar0\_base\_link}) \times T(\text{lidar0\_base\_link}, \text{camera0/camera\_link})$

where $T(\text{base\_link}, \text{sensor\_kit\_base\_link})$ is usually provided by a CAD model or can simply be approximated, since it is a convenience frame and does not affect other computations.
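
In practice, the composition above amounts to chaining 4x4 homogeneous transforms. The following is a minimal sketch, assuming the calibration results are available as numpy matrices; all variable names and the placeholder identity values are illustrative and not part of the tools:

```python
# Sketch: composing calibration results into the sensor_kit frame,
# following the equations above. Placeholder values only.
import numpy as np

def invert(T: np.ndarray) -> np.ndarray:
    """Invert a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ t
    return T_inv

# Calibration outputs (placeholders)
T_base_to_kit = np.eye(4)        # base_link -> sensor_kit_base_link, from CAD or approximated
T_base_to_lidar0 = np.eye(4)     # base-lidar calibration result
T_lidar0_to_lidar1 = np.eye(4)   # lidar-lidar calibration result
T_lidar0_to_camera0 = np.eye(4)  # camera-lidar calibration result

# Final TFs published under the sensor kit, as in the equations above
T_kit_to_lidar0 = invert(T_base_to_kit) @ T_base_to_lidar0
T_kit_to_lidar1 = T_kit_to_lidar0 @ T_lidar0_to_lidar1
T_kit_to_camera0 = T_kit_to_lidar0 @ T_lidar0_to_camera0
```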

Looking at the diagram in Figure 1, we could also calibrate all the sensors directly with respect to the base_link. However, we believe that sensor-to-sensor calibration provides more accurate and consistent results, so we perform only one base_link-to-sensor calibration, and from there all other calibrations are carried out between pairs of sensors.

### Generic calibration

Intended as a proof of concept of our calibration API and as a baseline against which to compare automatic calibration tools, this method allows us to directly modify the values of the TF tree, with an RViz view to evaluate the TFs and the resulting calibration.

Figure 2. Manual calibration

### Base-lidar calibration

This calibration method assumes the floor around the vehicle forms a plane and adjusts the calibration TF so that the ground points of the point cloud lie on the XY plane of the base_link. As such, only the z, roll, and pitch values of the TF are calibrated; the remaining x, y, and yaw values need to be calibrated via other methods, such as manual calibration.

Figure 3. Ground-plane base-lidar calibration (before and after calibration)
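
The core of this idea can be sketched as a plane fit followed by extracting the height and the leveling angles. The sketch below assumes the ground points have already been segmented; the segmentation and the exact angle conventions used by the actual tool are not shown:

```python
# Illustrative sketch only: estimate z, roll, and pitch from a ground-plane fit.
import numpy as np

def calibrate_z_roll_pitch(ground_points: np.ndarray):
    """ground_points: (N, 3) lidar points assumed to lie on the ground."""
    centroid = ground_points.mean(axis=0)
    # Plane normal = direction of least variance of the ground points
    _, _, vh = np.linalg.svd(ground_points - centroid)
    normal = vh[-1]
    if normal[2] < 0:  # make the normal point "up" in the lidar frame
        normal = -normal
    # Roll and pitch that rotate the plane normal onto the z axis
    roll = np.arctan2(normal[1], normal[2])
    pitch = -np.arcsin(normal[0])
    # Height of the lidar origin above the fitted ground plane
    z = -float(np.dot(normal, centroid))
    return z, roll, pitch
```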

### Lidar-lidar calibration

To calibrate pairs of lidars, this method uses standard point cloud registration. However, due to the sparsity of traditional lidars, direct registration proves difficult. To address this issue, this method uses an additional point cloud map and a localization system to overcome this limitation during registration.

Figure 4. Map-based calibration

Similar to the map-based method, the mapping-based calibration uses point cloud registration to find the relative poses between lidars. However, instead of relying on an existing map to circumvent the sparsity and limited field of view of the lidars, this method includes a mapping step that generates a dense local map representation, which is then used for registration.

Figure 5. Mapping-based calibration
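
As an illustration of the registration step shared by both methods, the sketch below uses Open3D's point-to-plane ICP as a stand-in for the registration back-end; the actual tools register against an existing map or the dense local map built during mapping, and the function names and parameter values here are assumptions:

```python
# Sketch: refine a lidar pose against a dense map via point-to-plane ICP.
import numpy as np
import open3d as o3d

def register_lidar_to_map(source_scan: o3d.geometry.PointCloud,
                          dense_map: o3d.geometry.PointCloud,
                          initial_guess: np.ndarray) -> np.ndarray:
    """Return the refined 4x4 transform mapping the source lidar into the map frame."""
    dense_map.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        source_scan, dense_map,
        max_correspondence_distance=0.5,
        init=initial_guess,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation

# Once each lidar is registered to the map frame, the relative pose follows:
# T_lidar0_to_lidar1 = np.linalg.inv(T_map_from_lidar0) @ T_map_from_lidar1
```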

### Camera-lidar calibration

To calibrate camera-lidar sensor pairs, a common practice is to collect corresponding pairs of points and then minimize their reprojection error. This calibration method implements a UI to assist in selecting these pairs and to evaluate their performance interactively.

Fig 6. Interactive camera-lidar calibration UI
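
The underlying optimization can be illustrated with OpenCV's PnP solver: given the selected 3D lidar points and their 2D image correspondences, solve for the camera-lidar extrinsics and report the reprojection error. This is a hedged sketch under those assumptions, not the tool's actual code:

```python
# Sketch: camera-lidar extrinsics from point correspondences via PnP.
import cv2
import numpy as np

def calibrate_camera_lidar(lidar_points: np.ndarray,   # (N, 3) points in the lidar frame
                           image_points: np.ndarray,   # (N, 2) pixel coordinates
                           camera_matrix: np.ndarray,  # 3x3 camera matrix
                           dist_coeffs: np.ndarray):
    ok, rvec, tvec = cv2.solvePnP(
        lidar_points.astype(np.float64),
        image_points.astype(np.float64),
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)
    assert ok, "PnP failed"
    # Mean reprojection error, used to evaluate the calibration interactively
    projected, _ = cv2.projectPoints(lidar_points, rvec, tvec,
                                     camera_matrix, dist_coeffs)
    error = np.linalg.norm(projected.reshape(-1, 2) - image_points, axis=1).mean()
    R, _ = cv2.Rodrigues(rvec)  # camera <- lidar rotation
    return R, tvec, error
```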

The tag-based calibration method extends the interactive calibration by acquiring the corresponding points automatically, by moving a known tag (lidartag) through the shared field of view.

Fig 7. Tag-based camera-lidar calibration

## Intrinsic Calibration

Intrinsic calibration is the process of obtaining the parameters that allow us to transform raw sensor information into a coordinate system. In the case of cameras, it refers to the camera matrix and the distortion parameters, whereas in lidars it can refer to the offset and spacing of beams. In our repository, we focus only on camera calibration, since lidar intrinsic calibration is usually vendor-specific (and the vendors usually provide either the parameters directly or instructions to obtain them).
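
For reference, the camera matrix $K$ enters the standard pinhole projection model (a general formulation, not specific to this repository):

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
$$

where $(X_c, Y_c, Z_c)$ is a point in the camera frame, $(u, v)$ is its pixel projection, and the distortion parameters act on the normalized coordinates $(X_c/Z_c, Y_c/Z_c)$ before this projection.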

### Intrinsic camera calibration

We implement an original intrinsic camera calibrator based on the ROS implementation, adding support for new boards, an improved data collection process, new visualizations, and statistics used to evaluate the obtained parameters.

Traditionally, camera calibration is performed by detecting several views of planar boards in images; since the board's dimensions are known, the camera matrix and distortion parameters can be computed. However, this same process can also be performed with generic camera-object point pairs, like the ones obtained during camera-lidar calibration. By reusing the calibration points from camera-lidar calibration, we can perform camera intrinsic calibration and camera-lidar extrinsic calibration simultaneously.
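
A minimal sketch of the board-based calibration step using OpenCV is shown below; the repository's calibrator adds board types, data-collection aids, visualizations, and statistics on top of this core step, and the names here are illustrative:

```python
# Sketch: intrinsic calibration from detected board corners across several views.
import cv2
import numpy as np

def calibrate_intrinsics(object_points: list,  # per-view (M, 3) board corners in the board frame
                         image_points: list,   # per-view (M, 2) detected corners in pixels
                         image_size: tuple):   # (width, height)
    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        [p.astype(np.float32) for p in object_points],
        [p.astype(np.float32) for p in image_points],
        image_size, None, None)
    # rms is the overall reprojection error, one of the statistics used to
    # judge the quality of the obtained parameters
    return camera_matrix, dist_coeffs, rms
```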