docs/validating-cuvslam-setup.md (7 changes: 4 additions & 3 deletions)

@@ -185,14 +185,15 @@ The purpose of this test is to validate that the system is able to appropriately

> **Note**: To maximize the accuracy of this test, ensure that dynamic obstacles are removed from the robot's field of view. If a nontrivial dynamic obstacle is introduced while the robot is performing the loop, please restart this test.

-4. As the camera nears the completion of its first lap, place the camera at rest, taking care to precisely align it with the floor marker indicating the starting pose.
-5. On the standalone computer and within the RViz window, ensure that a nontrivial number of visual features now appear red. Additionally, ensure that the camera frame appears nearly coincident with the `/map` frame.
+4. As the camera nears the completion of its first lap, move the camera in a small circle of radius ~0.5 m around the origin of the starting pose. This step helps ensure that previously seen visual features are recognized and matched.
+5. Place the camera at rest, taking care to precisely align it with the floor marker indicating the starting pose.
+6. On the standalone computer and within the RViz window, ensure that a nontrivial number of visual features now appear red. Additionally, ensure that the camera frame appears nearly coincident with the `/map` frame.

> **Note**: It is acceptable for the `/odom` frame to jump around significantly with respect to the `/map` frame. For the purposes of this test, only the transform between the `/map` and camera frames is relevant. For additional information about the design intent behind these frames, please see [REP-0105](https://www.ros.org/reps/rep-0105.html).

<div align="center"><img src="../resources/validation_test2_rviz.png" width="600px"/></div>

-6. On the standalone computer and within the `tf2_echo` terminal, note the latest transform reported between `/map` and the appropriate camera frame.
+7. On the standalone computer and within the `tf2_echo` terminal, note the latest transform reported between `/map` and the appropriate camera frame.
<div align="center"><img src="../resources/validation_test2_tf2.png" width="600px"/></div>
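> **Note**: If the `tf2_echo` terminal is no longer running, an equivalent reading can be obtained with `ros2 run tf2_ros tf2_echo map <camera_frame>`, where `<camera_frame>` is a placeholder for the camera frame used by your setup.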

1. Validate Orientation:
@@ -139,15 +139,25 @@ struct VisualSlamNode::VisualSlamImpl
std::array<float, kMaxNumCameraParameters> left_intrinsics;
std::array<float, kMaxNumCameraParameters> right_intrinsics;
// Transformation converting from
// ROS Frame (x-forward, y-left, z-up) to
// cuVSLAM Frame (x-right, y-up, z-backward)
const tf2::Transform cuvslam_pose_base_link;

// Transformation converting from
// cuVSLAM Frame (x-right, y-up, z-backward) to
// ROS Frame (x-forward, y-left, z-up)
const tf2::Transform base_link_pose_cuvslam;

// Transformation converting from
// Optical Frame (x-right, y-down, z-forward) to
// cuVSLAM Frame (x-right, y-up, z-backward)
const tf2::Transform cuvslam_pose_optical;

// Transformation converting from
// cuVSLAM Frame (x-right, y-up, z-backward) to
// Optical Frame (x-right, y-down, z-forward)
const tf2::Transform optical_pose_cuvslam;
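// Naming convention (an editorial gloss, consistent with the members above):
// a_pose_b is the pose of frame b expressed in frame a, so
// a_pose_b * b_pose_c composes to a_pose_c, and b_pose_a is its inverse.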

// Start pose of visual odometry
tf2::Transform start_odom_pose = tf2::Transform::getIdentity();

isaac_ros_visual_slam/src/impl/visual_slam_impl.cpp (30 changes: 20 additions & 10 deletions)

@@ -92,16 +92,27 @@ VisualSlamNode::VisualSlamImpl::VisualSlamImpl(VisualSlamNode & node)
: logger_name(std::string(node.get_logger().get_name())),
node_clock(node.get_clock()),
// Set cuvslam_pose_base_link for converting ROS coordinate to cuVSLAM coordinate
// ROS -> cuVSLAM
// x -> -z
// y -> -x
// z -> y
cuvslam_pose_base_link(tf2::Matrix3x3(
0, -1, 0,
0, 0, 1,
-1, 0, 0
)),
base_link_pose_cuvslam(cuvslam_pose_base_link.inverse()),
// Set cuvslam_pose_optical for converting Optical coordinate to cuVSLAM coordinate
// Optical -> cuVSLAM
// x -> x
// y -> -y
// z -> -z
cuvslam_pose_optical(tf2::Matrix3x3(
1, 0, 0,
0, -1, 0,
0, 0, -1
)),
optical_pose_cuvslam(cuvslam_pose_optical.inverse()),
tf_buffer(std::make_unique<tf2_ros::Buffer>(node_clock)),
tf_listener(std::make_shared<tf2_ros::TransformListener>(*tf_buffer)),
tf_publisher(std::make_unique<tf2_ros::TransformBroadcaster>(&node)),
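
The two basis matrices are easy to sanity-check: applying them to unit axes should reproduce the axis mappings stated in the comments. A minimal standalone sketch (not part of this diff; assumes a ROS 2 environment providing the `tf2` headers):

```cpp
#include <cassert>

#include <tf2/LinearMath/Transform.h>

int main()
{
  // Same matrices as in the constructor above.
  const tf2::Transform cuvslam_pose_base_link(tf2::Matrix3x3(
      0, -1, 0,
      0, 0, 1,
      -1, 0, 0));
  const tf2::Transform cuvslam_pose_optical(tf2::Matrix3x3(
      1, 0, 0,
      0, -1, 0,
      0, 0, -1));

  // ROS -> cuVSLAM: x -> -z, y -> -x, z -> y
  assert(cuvslam_pose_base_link * tf2::Vector3(1, 0, 0) == tf2::Vector3(0, 0, -1));
  assert(cuvslam_pose_base_link * tf2::Vector3(0, 1, 0) == tf2::Vector3(-1, 0, 0));
  assert(cuvslam_pose_base_link * tf2::Vector3(0, 0, 1) == tf2::Vector3(0, 1, 0));

  // Optical -> cuVSLAM: x -> x, y -> -y, z -> -z
  assert(cuvslam_pose_optical * tf2::Vector3(0, 1, 0) == tf2::Vector3(0, -1, 0));
  assert(cuvslam_pose_optical * tf2::Vector3(0, 0, 1) == tf2::Vector3(0, 0, -1));
  return 0;
}
```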
@@ -225,11 +236,9 @@ void VisualSlamNode::VisualSlamImpl::Init(
tf2::Transform left_rect_pose_right_rect(left_pose_right);
tf2::Transform left_rect_pose_left_unrect(left_r_mat);
tf2::Transform right_rect_pose_right_unrect(right_r_mat);
-left_pose_right =
-  left_rect_pose_left_unrect.inverse() * left_rect_pose_right_rect *
-  right_rect_pose_right_unrect;
-// Invert transform for cuVSLAM for unrectified case.
-left_pose_right.setBasis(left_pose_right.getBasis().inverse());
+left_pose_right = ChangeBasis(
+  cuvslam_pose_optical, (left_rect_pose_left_unrect.inverse() * left_rect_pose_right_rect *
+  right_rect_pose_right_unrect));
}
}

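The `ChangeBasis` helper used above is defined elsewhere in the package and is not shown in this diff; based on how it is called, a plausible shape (an assumption, not the confirmed implementation) is the usual similarity transform:

```cpp
#include <tf2/LinearMath/Transform.h>

// Assumed helper: re-express source_transform, written in the source
// convention, in the target convention given by target_pose_source.
tf2::Transform ChangeBasis(
  const tf2::Transform & target_pose_source,
  const tf2::Transform & source_transform)
{
  return target_pose_source * source_transform * target_pose_source.inverse();
}
```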
@@ -259,7 +268,8 @@ void VisualSlamNode::VisualSlamImpl::Init(

SetcuVSLAMCameraPose(
cuvslam_cameras[LEFT_CAMERA_IDX], TocuVSLAMPose(tf2::Transform::getIdentity()));
SetcuVSLAMCameraPose(
  cuvslam_cameras[RIGHT_CAMERA_IDX], TocuVSLAMPose(left_pose_right));

cuvslam_cameras[LEFT_CAMERA_IDX].parameters = left_intrinsics.data();
cuvslam_cameras[RIGHT_CAMERA_IDX].parameters = right_intrinsics.data();