Multiple octomap views generated and octomap flipped #20

aadityasaraiya opened this issue Jul 6, 2018 · 10 comments

@aadityasaraiya
Contributor

aadityasaraiya commented Jul 6, 2018

Hi!

After solving this issue, I was able to run all the launch files/nodes and started analyzing the Kinect Fusion process. However, I cannot understand why multiple octomap views are being generated, with the octomap in one view flipped.

I am adding the different steps of the process and the corresponding views in Gazebo and RViz so you can visualise the problem better.


After launching Gazebo and moveit_planning_execution.launch

Note: The octomap is upright and properly generated with respect to the object placed in Gazebo. This first octomap isn't generated by the YAK package; it was created separately to check whether the Kinect camera is working properly in simulation.

[Screenshot: gazebo_rviz_init]


After roslaunch yak launch_gazebo_robot.launch

Note: Multiple octomaps are generated in all three directions, even though there is just one sphere in the view, as shown in the earlier picture. Could this be because the tracking isn't working?

[Screenshot: multi_octomaps]


After roslaunch nbv_planner octomap_mapping.launch

[Screenshot: initialise_volume_markers]


After rosrun nbv_planner exploration_controller_node

Note:

  1. The raycasting generates all the rays, and the hit rays appear near the robot base itself. Is this how it's supposed to happen?
  2. The robot doesn't move as it's supposed to when running the exploration_controller_node.
  3. I receive the following warning messages:
[ INFO] [1530883513.318211330, 866.408000000]: Unable to solve the planning problem
[ERROR] [1530883513.318379591, 866.408000000]: Couldn't compute a free-space trajectory
[ INFO] [1530883513.318441594, 866.408000000]: Pose index: 28
[ INFO] [1530883513.318512142, 866.408000000]: Trying next best pose: position: 
  x: -0.218574
  y: 0.793138
  z: 0.303502
orientation: 
  x: 0.77676
  y: 0.211575
  z: 0.155896
  w: 0.572343

[ERROR] [1530883513.318788475, 866.408000000]: Found empty JointState message
[ WARN] [1530883513.318873192, 866.408000000]: It looks like the planning volume was not specified.
  4. On the MoveIt! terminal I get the following warning message: [ WARN] [1530883644.819044064, 925.199000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero. Setting to 1.0.

[Screenshot: raycasting]


I apologise for the long message. I just wanted to describe everything in detail since I am not able to pinpoint what exactly is causing this behaviour.

Thanks in advance,
Aaditya Saraiya

@Levi-Armstrong

@AustinDeric
Owner

Thanks for this write-up! I will review this and get back to you.

@aadityasaraiya
Contributor Author

@AustinDeric Thanks a lot. Looking forward to your comments.

@schornakj
Collaborator

schornakj commented Jul 6, 2018

Are the redundant octomaps all created at the same time, or sequentially? If the pose of the TSDF reconstruction volume changes after mapping starts then it behaves as if objects in the scene have moved and might start mapping new voxels as occupied. If they're being drawn simultaneously then it's very mysterious: haven't seen that behavior before!

(re: 1) The origins of the rays are supposed to be placed in a spherical pattern around the center of the volume. You should be able to control where they're created by moving the volume, though based on your picture it looks like they might not be related to the pose of the volume correctly. (edit: or the center of the pattern is at a fixed offset from the corner of the volume closest to the origin)
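
For intuition, here is a minimal sketch of the idea (not the nbv_planner implementation; the volume center and standoff distance are made-up values): place ray origins on a sphere around the volume center and point each ray back at that center.

```python
#!/usr/bin/env python
# Illustrative sketch only: ray origins on a sphere around a volume center,
# each ray pointing back toward the center. Not the nbv_planner code.
import numpy as np

volume_center = np.array([0.5, 0.0, 0.5])   # assumed center of the TSDF volume (m)
standoff = 0.75                             # assumed standoff distance (m)
n_rays = 64

# Sample directions roughly uniformly over the sphere.
phi = np.random.uniform(0.0, 2.0 * np.pi, n_rays)
cos_theta = np.random.uniform(-1.0, 1.0, n_rays)
theta = np.arccos(cos_theta)

dirs = np.stack([np.sin(theta) * np.cos(phi),
                 np.sin(theta) * np.sin(phi),
                 np.cos(theta)], axis=1)

ray_origins = volume_center + standoff * dirs   # points on the sphere
ray_directions = -dirs                          # each ray aims at the center

print(ray_origins[:3])
print(ray_directions[:3])
```

If the origins in your screenshot cluster near the robot base rather than around the volume, the center used for the pattern is likely not the pose of the volume's interactive marker, which would match the fixed-offset suspicion above.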

(re: 2, 3, and possibly 4) The motion planning problem is kind of weirdly constrained, as a workaround to prevent the robot from pointing the camera away from the volume to be reconstructed while moving between poses. It creates a point some distance away from the front of the camera (about 50cm, offhand), then tries to keep that point within the boundary of the volume. Since the motion planning uses an RRT strategy, at best it finds a solution after a fairly long time, and at worst it completely fails to plan the motion and throws errors like the first few shown, though I'm not 100% sure what's going on with the last two. It should iterate through candidate poses in order of which ones expose the most unknown voxels in the volume until it finds a motion it can accomplish.
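
To make that constraint concrete, here is a hedged sketch of how such a constraint can be expressed with standard MoveIt messages: a target point offset in front of the camera link that must stay inside a box representing the volume. The frame names, the 0.5 m offset, and the box pose/dimensions are assumptions for illustration (the box pose just mirrors the "Desired: -0.3, -0.3, 0.8" values in the logs), not necessarily the exact values this package uses.

```python
#!/usr/bin/env python
# Sketch: a MoveIt position constraint keeping a point ~0.5 m in front of the
# camera inside a box. Frame names and dimensions are assumed for illustration.
from moveit_msgs.msg import Constraints, PositionConstraint
from shape_msgs.msg import SolidPrimitive
from geometry_msgs.msg import Pose

pc = PositionConstraint()
pc.header.frame_id = "world"
pc.link_name = "camera_depth_optical_frame"
pc.target_point_offset.z = 0.5          # point ~50 cm along the camera's optical axis

box = SolidPrimitive()
box.type = SolidPrimitive.BOX
box.dimensions = [0.5, 0.5, 0.5]        # assumed volume size (m)

box_pose = Pose()
box_pose.position.x = -0.3              # assumed volume pose, mirroring the
box_pose.position.y = -0.3              # "Desired: -0.3, -0.3, 0.8" log lines
box_pose.position.z = 0.8
box_pose.orientation.w = 1.0

pc.constraint_region.primitives.append(box)
pc.constraint_region.primitive_poses.append(box_pose)
pc.weight = 1.0                         # a near-zero weight is what triggers the warning

path_constraints = Constraints()
path_constraints.position_constraints.append(pc)
```

The repeated "weight on position constraint ... is near zero. Setting to 1.0." warnings are just MoveIt correcting that weight field, so they are most likely cosmetic rather than the cause of the planning failures.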

@aadityasaraiya
Contributor Author

Hi @schornakj. Thanks for your response.

Are the redundant octomaps all created at the same time, or sequentially? 

There is a pause of about 2-3 seconds before each one is created, so I wouldn't say they are all created at the same time. Why would the TSDF reconstruction volume move, though? I was not moving the interactive marker in RViz manually during this example.

The origins of the rays are supposed to be placed in a spherical pattern around the center of the volume. 

So should I conclude that the pattern formed by the rays resembles the geometry of the object placed in front of the camera in all cases? I have gone through the KinectFusion paper and the UW CSE lecture slides. Any other suggestions that would help me get a deeper intuition for this process?

Since the motion planning uses an RRT strategy, at best it finds a solution after a fairly long time, and at worst it completely fails to plan the motion and throws errors like the first few shown, though I'm not 100% sure what's going on with the last two

This confirms that the behavior I was seeing wasn't abnormal. I also had cases where the motion could not be planned. After a while it would pick up candidate poses, and in some cases it would state that the planning problem had been solved or a solution had been found; however, constraints were preventing it from moving to that location. I will update after checking what happens when those candidate poses are tried.

[ERROR] [1530883513.318788475, 866.408000000]: Found empty JointState message
  • After searching on ROS Answers, this error doesn't seem to be of major importance, so for now I am going to ignore it.

Thanks again!

@aadityasaraiya
Contributor Author

EDIT: On further observation, here are a few more warning messages.

[ WARN] [1530986762.138799683, 533.694000000]: Unable to transform object from frame 'candidate0' to planning frame '/world' (Lookup would require extrapolation into the past.  Requested time 523.539000000 but the earliest data is at time 523.680000000, when looking up transform from frame [candidate0] to frame [world])

I get the above warning for all 64 candidates. It seems similar to my previous issue, but adding a try-catch block as before does not resolve it this time.
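
For context, here is a minimal sketch of the lookup involved (the frame names are taken from the warning; the usual way to avoid the extrapolation error is to wait for the transform and query the latest available stamp with rospy.Time(0) instead of a specific past time):

```python
#!/usr/bin/env python
# Sketch: look up a candidate frame in world, tolerating small timing
# mismatches between the tf buffer and the requested stamp.
import rospy
import tf

rospy.init_node("candidate_lookup_sketch")
listener = tf.TransformListener()

try:
    # Wait until some transform between the frames is available, then take the latest one.
    listener.waitForTransform("world", "candidate0", rospy.Time(0), rospy.Duration(2.0))
    trans, rot = listener.lookupTransform("world", "candidate0", rospy.Time(0))
    rospy.loginfo("candidate0 in world: %s", str(trans))
except tf.Exception as e:
    rospy.logwarn("Transform lookup failed: %s", str(e))
```

I am not sure yet whether the nbv_planner code requests a specific stamp or the latest one, so this is only a guess at where the mismatch comes from.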

Here is one more warning, with the surrounding log output:

[ WARN] [1530986530.323626313, 426.508000000]: Failed to fetch current robot state.
[ INFO] [1530986530.324177460, 426.508000000]: Combined planning and execution request received for MoveGroup action. Forwarding to planning and execution pipeline.
[ WARN] [1530986530.324308021, 426.508000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero.  Setting to 1.0.
[ INFO] [1530986530.324483190, 426.508000000]: Planning attempt 1 of at most 1
[ WARN] [1530986530.325945894, 426.508000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero.  Setting to 1.0.
[ INFO] [1530986530.326051436, 426.508000000]: Path constraints not satisfied for start state...
[ WARN] [1530986530.326549565, 426.508000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero.  Setting to 1.0.
[ INFO] [1530986530.326762763, 426.508000000]: Position constraint violated on link 'camera_depth_optical_frame'. Desired: -0.300000, -0.300000, 0.800000, current: 0.816647, 0.204373, 0.491472
[ INFO] [1530986530.326923292, 426.508000000]: Differences -1.11665 -0.504373 0.308528
[ INFO] [1530986530.327083872, 426.508000000]: Planning to path constraints...
[ WARN] [1530986530.327393233, 426.508000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero.  Setting to 1.0.
[ WARN] [1530986530.327645776, 426.508000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero.  Setting to 1.0.
Debug:   Starting goal sampling thread
[ INFO] [1530986530.328125348, 426.508000000]: Planner configuration 'manipulator' will use planner 'geometric::RRTConnect'. Additional configuration parameters will be set when the planner is constructed.
Debug:   The value of parameter 'longest_valid_segment_fraction' is now: '0.0050000000000000001'
Debug:   RRTConnect: Planner range detected to be 7.539822
Info:    RRTConnect: Starting planning with 1 states already in datastructure
Debug:   RRTConnect: Waiting for goal region samples ...
Debug:   Beginning sampling thread computation
Debug:   RRTConnect: Waited 0.431969 seconds for the first goal sample.
Info:    RRTConnect: Created 4 states (2 start + 2 goal)
Info:    Solution found in 0.475992 seconds
Debug:   Attempting to stop goal sampling thread...
Debug:   Stopped goal sampling thread after 2 sampling attempts
Info:    SimpleSetup: Path simplification took 0.026644 seconds and changed from 3 to 2 states
[ INFO] [1530986530.925958530, 426.765000000]: Planned to path constraints. Resuming original planning request.
[ WARN] [1530986530.926477926, 426.765000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero.  Setting to 1.0.
[ WARN] [1530986530.926728899, 426.765000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero.  Setting to 1.0.
[ WARN] [1530986530.927186930, 426.765000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero.  Setting to 1.0.
Debug:   Starting goal sampling thread
Debug:   Waiting for space information to be set up before the sampling thread can begin computation...
[ INFO] [1530986530.927598462, 426.765000000]: Planner configuration 'manipulator' will use planner 'geometric::RRTConnect'. Additional configuration parameters will be set when the planner is constructed.
Debug:   The value of parameter 'longest_valid_segment_fraction' is now: '0.0050000000000000001'
Debug:   RRTConnect: Planner range detected to be 3.778261
[ WARN] [1530986530.928493559, 426.765000000]: The weight on position constraint for link 'camera_depth_optical_frame' is near zero.  Setting to 1.0.
[ INFO] [1530986530.928766280, 426.765000000]: Allocating specialized state sampler for state space
Info:    RRTConnect: Starting planning with 1 states already in datastructure
Debug:   RRTConnect: Waiting for goal region samples ...
Debug:   Beginning sampling thread computation
[ INFO] [1530986559.116059745, 439.999000000]: Position constraint violated on link 'camera_depth_optical_frame'. Desired: -0.300000, -0.300000, 0.800000, current: -0.187025, 0.287808, 0.549807
[ INFO] [1530986559.116579113, 439.999000000]: Differences -0.112975 -0.587808 0.250193
[ INFO] [1530986559.162846369, 440.015000000]: Position constraint violated on link 'camera_depth_optical_frame'. Desired: -0.300000, -0.300000, 0.800000, current: -0.187081, 0.287793, 0.549915
[ INFO] [1530986559.167912790, 440.017000000]: Differences -0.112919 -0.587793 0.250085
Error:   RRTConnect: Unable to sample any valid states for goal tree
         at line 215 in /tmp/binarydeb/ros-kinetic-ompl-1.2.1/src/ompl/geometric/planners/rrt/src/RRTConnect.cpp

@schornakj
Collaborator

There isn't any feedback from the TSDF reconstruction to the placement of the candidate poses and the casting of the rays. As a simplification I had assumed that if the camera got a chance to look at all the different parts of the object, then the TSDF surface would be reasonably complete. Currently poses are selected by randomizing pitch and yaw relative to some nominal center of the object to be explored, and then randomly varying the camera distance within a range. A constant number of rays are cast from each pose, and the poses are ranked by how many rays hit "unknown" voxels in the octomap. Basically it's trying to generate a variety of poses that are likely to be reachable and likely to expose new regions of the surface, but without having very much specific knowledge of the characteristics of the surface.
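
As a toy illustration of that sample-cast-count-rank loop (not the actual nbv_planner code; the unknown-voxel set and every parameter below are invented):

```python
#!/usr/bin/env python
# Toy sketch: sample candidate camera positions around a nominal center and
# rank them by how many of their rays pass through "unknown" voxels.
import numpy as np

np.random.seed(0)
center = np.array([0.5, 0.0, 0.5])                 # nominal center of the object (m)
voxel_size = 0.05
unknown = {(10, 0, 10), (10, 1, 10), (11, 0, 10)}  # fake unknown voxel keys near the center

def sample_candidate():
    """Random pitch/yaw around the center, random standoff distance."""
    yaw = np.random.uniform(0.0, 2.0 * np.pi)
    pitch = np.random.uniform(-np.pi / 4, np.pi / 4)
    dist = np.random.uniform(0.5, 1.0)
    offset = dist * np.array([np.cos(pitch) * np.cos(yaw),
                              np.cos(pitch) * np.sin(yaw),
                              np.sin(pitch)])
    return center + offset

def score(origin, n_rays=32, max_range=1.5, step=0.02):
    """Count rays that cross at least one unknown voxel while marching toward the center."""
    hits = 0
    for _ in range(n_rays):
        direction = center - origin + np.random.normal(scale=0.05, size=3)
        direction /= np.linalg.norm(direction)
        for t in np.arange(0.0, max_range, step):
            key = tuple(np.floor((origin + t * direction) / voxel_size).astype(int))
            if key in unknown:
                hits += 1
                break
    return hits

candidates = [sample_candidate() for _ in range(16)]
ranked = sorted(candidates, key=score, reverse=True)
print("best candidate position:", ranked[0])
```

The real planner scores against the octomap's unknown voxels rather than a hard-coded set, but the overall structure (sample poses, cast a fixed number of rays, count unknown hits, sort) is the same idea.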

You could try removing some of the constraints on the motion planner. The constraint that keeps the camera pointed at the object was added to solve a problem with using Iterative Closest Point to find the pose of a real camera from the current depth image and the surface in the TSDF volume, which is that ICP gets lost if it can't find any part of a previously-seen surface in the current camera image. If you're providing it with current poses from tf then it shouldn't be as much of an issue. ICP might give weird results for a smooth simulated surface, but it's not something I've tried personally.
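
For that experiment, here is a minimal moveit_commander sketch (the group name and target pose are copied from the logs earlier in this thread; treat it as an illustration, not this package's interface) that plans to a candidate pose with no path constraints attached:

```python
#!/usr/bin/env python
# Sketch: plan to a candidate pose with path constraints cleared, to test
# whether the camera-pointing constraint is what makes planning fail.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("unconstrained_plan_sketch")

group = moveit_commander.MoveGroupCommander("manipulator")
group.clear_path_constraints()          # drop any camera-pointing path constraint

target = Pose()
target.position.x = -0.218574           # "next best pose" values from the log above
target.position.y = 0.793138
target.position.z = 0.303502
target.orientation.x = 0.77676
target.orientation.y = 0.211575
target.orientation.z = 0.155896
target.orientation.w = 0.572343

group.set_pose_target(target)
success = group.go(wait=True)
group.stop()
group.clear_pose_targets()
rospy.loginfo("Planned and executed: %s", success)
```

If unconstrained planning to those poses succeeds reliably, that would point at the path constraint rather than reachability as the main source of the failures.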

I'll get back to you on the other topics shortly.

@aadityasaraiya
Contributor Author

Thanks a lot, @schornakj. This makes a lot of sense, especially recognizing that the TSDF reconstruction and the placement of poses are decoupled.

Currently poses are selected by randomizing pitch and yaw relative to some nominal center of the object to be explored, and then randomly varying the camera distance within a range

  • I had noticed the yaw randomization while reading the code, and this clears up my confusion about what exactly that part was trying to achieve.

You could try removing some of the constraints on the motion planner.

  • Will give this a try and will update you with results.

If you're providing it with current poses from tf then it shouldn't be as much of an issue

  • I agree that relaxing these constraints could make sense for a simulated Gazebo object. However, it (relaxing the constraints) wouldn't really work with real objects.

ICP might give weird results for a smooth simulated surface

  • Why exactly would ICP give weird results? Is this again because it is a simulated camera and we already have the current tf? Shouldn't it actually give smoother results compared to a real camera?

@schornakj
Collaborator

Why exactly would ICP give weird results? Is this again because it is a simulated camera and we already have the current tf? Shouldn't it actually give smoother results compared to a real camera?

I should have used a clearer word than "weird". ICP is responsible for calculating the pose of the camera relative to the observed surface, and it does this by trying to align the current depth image with previously observed ones. For this to work well, the surface should have variations like corners, edges, curves, and rough textures. In your simulated world the objects are a perfectly smooth sphere and plane. I was thinking about what would happen if the camera could only see part of the sphere or the plane. I would expect that ICP would have difficulty finding the correct camera pose, since the problem of fitting a section of a plane to a flat surface or a piece of a sphere to a sphere is very ambiguous.
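
A quick way to see that ambiguity is the following sketch using Open3D (not part of this package, purely to demonstrate the point): register a patch of a synthetic sphere against the full sphere and compare the recovered transform with the one that was applied. The fit can look good while the recovered pose differs from the true one, because the patch fits equally well on many parts of the sphere.

```python
#!/usr/bin/env python
# Sketch (Open3D >= 0.10 assumed): ICP on a featureless sphere. A displaced
# patch of the sphere can be aligned to many places on the full sphere, so the
# recovered transform need not match the applied one even with good fitness.
import copy
import numpy as np
import open3d as o3d

sphere = o3d.geometry.TriangleMesh.create_sphere(radius=0.5)
target = sphere.sample_points_uniformly(number_of_points=5000)

# Take a patch of the sphere (roughly what one camera view might see) and move it.
pts = np.asarray(target.points)
patch = target.select_by_index(np.where(pts[:, 2] > 0.3)[0])
true_transform = np.eye(4)
true_transform[:3, 3] = [0.05, 0.02, 0.0]
source = copy.deepcopy(patch)
source.transform(true_transform)

result = o3d.pipelines.registration.registration_icp(
    source, target, 0.1, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness)
print("recovered transform:\n", result.transformation)
print("inverse of applied transform:\n", np.linalg.inv(true_transform))
```

With edges, corners, and texture in the scene, the correspondence problem is much better constrained.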

@aadityasaraiya
Contributor Author

In your simulated world, the objects are a perfectly smooth sphere and plane. I was thinking about what would happen if the camera could only see part of the sphere or the plane.

That's true; I had similar concerns when I was starting out with Gazebo. This paper suggests using 3-D CAD models as a reference for aligning real-time data with ICP in a similar 3-D reconstruction task. However, the behavior of ICP with Gazebo models is definitely questionable.

@aadityasaraiya
Contributor Author

While I was analysing further, I realised a few things.


  1. Multiple octomaps

Regarding the problem with multiple octomaps: I observed that as I shift the interactive volume marker left and right, only the fake octomap moves in the direction the volume marker box is moved, which leads to the following artifact. The actual octomap does not move at all. Any clues as to why this could be happening?

[Screenshot: multiple_octos_reason]


  2. Regarding the TSDF volume

I replaced the sphere model with a dumpster just for the sake of having a few more edges, corners, and color differences (I don't know if that's particularly helpful, though). This is how the raycasting looks, with the rays pointed towards the point cloud published on the my_unknown_point_cloud topic. The reconstruction will need the exploration_node to work properly.

[Screenshot: dumpster_robot]

[Screenshot: candidate_poses_with_tf]
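
In case it is useful for monitoring progress, here is a small sketch that logs the size of the cloud on the my_unknown_point_cloud topic (assuming it is a sensor_msgs/PointCloud2; adjust if the actual message type differs), so I can watch whether the unknown region shrinks as views are added:

```python
#!/usr/bin/env python
# Sketch: log how many points are in the unknown-space cloud over time.
# Assumes my_unknown_point_cloud is a sensor_msgs/PointCloud2.
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def callback(msg):
    count = sum(1 for _ in pc2.read_points(msg, skip_nans=True))
    rospy.loginfo("my_unknown_point_cloud: %d points", count)

rospy.init_node("unknown_cloud_monitor")
rospy.Subscriber("my_unknown_point_cloud", PointCloud2, callback)
rospy.spin()
```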


Note: I created a new issue for the lack of robot motion as there seems to be some weird behavior regarding TF.
