Empty PointCloud for Reconstruction #2

Closed
flying-dutch-man opened this issue Jul 10, 2013 · 5 comments

@flying-dutch-man

I am not quite sure if reconstruction is actually taking place, because when I try to save the output it reports: [pcl::io::savePLYFile] Input point cloud has no data!

I am using PCL version 1.7. Are there any known issues with PCL 1.7?

@sdmiller
Owner

This builds against the PCL trunk, so you should probably make sure you're on the latest version (1.7 will officially be released shortly). FrustumCulling in particular had some bugfixes about 3 months ago. You'll know that reconstruction is taking place because each call to addCloud () takes some time (~30-100 ms, depending on the machine and volume size).

Things to check (a quick sanity-check sketch for A and B follows the list):
A) The input clouds are organized (they have a width and height, and cloud.is_dense = false)
B) The input clouds are in the sensor's reference frame (the sensor is at the origin, facing +Z)
C) The volume is big enough, and the camera poses you are supplying lie inside the volume.
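
For reference, here is a minimal sanity check for A) and B) -- a sketch only, assuming pcl::PointCloud<pcl::PointXYZ> input; the 0.3-10 m depth range is just a plausible Kinect-style range, not a library constant:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/point_tests.h>   // pcl::isFinite
#include <pcl/console/print.h>

// Hypothetical helper: sanity-check one input cloud before integrating it.
bool
checkInputCloud (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud)
{
  // A) Organized: height > 1, and NaNs kept in place (is_dense == false)
  if (!cloud->isOrganized ())
  {
    PCL_ERROR ("Cloud is not organized (width=%u, height=%u)\n", cloud->width, cloud->height);
    return (false);
  }
  if (cloud->is_dense)
    PCL_WARN ("is_dense == true -- NaNs were probably stripped, which breaks organization\n");

  // B) Sensor frame: valid points should sit in front of the origin (+Z), at
  //    depths that are plausible in meters (0.3-10 m is an assumed Kinect-style range).
  size_t finite = 0, suspicious = 0;
  for (size_t i = 0; i < cloud->points.size (); ++i)
  {
    const pcl::PointXYZ &pt = cloud->points[i];
    if (!pcl::isFinite (pt))
      continue;
    ++finite;
    if (pt.z < 0.3f || pt.z > 10.0f)
      ++suspicious;
  }
  if (finite > 0 && suspicious > finite / 2)
  {
    PCL_WARN ("Most depths fall outside [0.3, 10] m -- the cloud may be in mm, or not in the sensor frame\n");
    return (false);
  }
  return (true);
}
```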

If all of those are true and it still fails, you should send me a tarball with the point clouds you're trying to run it on, and I'll see what I can do.

@sdmiller
Owner

@evolutionsdisaster Ping? :)

@flying-dutch-man
Author

Firstly, we need to check that the units of the voxel cube we set up match those of the point cloud. For example, some datasets (TUM's dataset) have their depth maps scaled by a factor of 5000, yet you need to divide the depth map values by 1000 and not by the actual scale factor (I don't know what the issue is with that). The reason the units matter: if you don't scale the depths down at all, the point cloud is huge and the cube ends up inside the point cloud, so the TSDF values are all zero; if you scale by the real factor, the point cloud is so small that it lies inside a single voxel of the cube, and the TSDF values and weights are again zero. (In short, the units of the cube, which is in mm, and of the point cloud constructed from the depth map should be the same.)
Secondly, the minWeight in the reconstruction needs to be very small, on the order of 1E-3, for it to work.
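
For concreteness, a sketch of that depth-to-cloud conversion -- assuming OpenCV loads the 16-bit depth PNG, the TUM dataset's documented scale factor of 5000 (5000 == 1 m), and the commonly quoted default Freiburg intrinsics; adapt all of these to your own data:

```cpp
#include <cstdint>
#include <limits>
#include <opencv2/opencv.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Sketch: back-project a 16-bit TUM depth image into an organized cloud in meters.
// The scale factor (5000) and intrinsics (525, 319.5, 239.5) are assumptions
// taken from the TUM RGB-D dataset documentation.
pcl::PointCloud<pcl::PointXYZ>::Ptr
depthToCloud (const cv::Mat &depth_16u,
              float fx = 525.f, float fy = 525.f,
              float cx = 319.5f, float cy = 239.5f,
              float scale = 5000.f)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  cloud->width = depth_16u.cols;
  cloud->height = depth_16u.rows;
  cloud->is_dense = false;  // keep NaNs so the cloud stays organized
  cloud->points.resize (static_cast<size_t> (cloud->width) * cloud->height);
  for (int v = 0; v < depth_16u.rows; ++v)
    for (int u = 0; u < depth_16u.cols; ++u)
    {
      pcl::PointXYZ &pt = cloud->points[v * depth_16u.cols + u];
      const uint16_t d = depth_16u.at<uint16_t> (v, u);
      if (d == 0)  // 0 means "no measurement" in the TUM data
      {
        pt.x = pt.y = pt.z = std::numeric_limits<float>::quiet_NaN ();
        continue;
      }
      pt.z = d / scale;               // depth in meters
      pt.x = (u - cx) * pt.z / fx;    // back-project into the sensor frame (+Z forward)
      pt.y = (v - cy) * pt.z / fy;
    }
  return (cloud);
}

// Usage (the depth image path is an example):
//   cv::Mat depth = cv::imread ("depth.png", cv::IMREAD_ANYDEPTH);
//   pcl::PointCloud<pcl::PointXYZ>::Ptr cloud = depthToCloud (depth);
```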

@sdmiller
Owner

You can always play with setVolumeSize to scale the TSDF to the size of your point clouds -- this is better than arbitrarily scaling the input data. The cube is in meters by default -- this is reflected in min_sensor_dist_ (usually around 70 cm for a Kinect), max_sensor_dist_ (3-5 m for good data), and the truncation limit (3 cm, about the noise level of the Kinect). Details about the sensor are also reflected in the camera intrinsics -- a focal length of 525 for the Kinect by default. If the TUM data doesn't have the same properties, you should set the parameters accordingly.
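
For reference, the defaults mentioned above collected in one place (all lengths in meters; the names below are descriptive constants, not the library's actual members, and the volume size is only an example):

```cpp
// Rough Kinect-era defaults discussed above (all lengths in meters).
// Descriptive constants only -- map them onto the TSDF's own setters
// (e.g. setVolumeSize) in your code; the 3 m cube is just an example.
const double volume_size     = 3.0;    // cube edge length; scale it to your scene
const double min_sensor_dist = 0.70;   // ~70 cm: closer Kinect returns are unreliable
const double max_sensor_dist = 4.0;    // 3-5 m: beyond this the depth gets noisy
const double trunc_dist      = 0.03;   // 3 cm: roughly the Kinect noise level
const double focal_length    = 525.0;  // default Kinect focal length, in pixels
```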

I'm afraid I don't follow the logic in minWeight being so small. Each observation, by default, adds a weight of 1 in the running average. Assuming you mean the MarchingCubes weight, a minWeight of 1E-3 is equivalent to a minWeight of .9 or 0, and just means "Only reconstruct voxels which have been seen at least once." If you need this in order for the reconstruction to work, I strongly suspect you're not really integrating multiple views, and something else is off.

I'd take a look at what assumptions you're making about the data, and make sure the parameters of the TSDF (size, resolution, truncation distance, min/max sensor distance) are appropriate. I'd also make absolutely sure that the clouds you're giving it are in the sensor's frame of reference, and the transforms you give it are the camera poses, such that T*C brings the cloud into the world frame.
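
As a concrete way to verify that last convention, transform each cloud by its pose and look at the result -- a sketch, assuming Eigen::Affine3d poses and PointXYZ clouds:

```cpp
#include <string>
#include <pcl/common/transforms.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Sketch: verify the pose convention (world_cloud = T * sensor_cloud).
// 'cloud' is in the sensor frame; 'pose' is the camera pose in world coordinates.
void
dumpWorldFrameCloud (const pcl::PointCloud<pcl::PointXYZ> &cloud,
                     const Eigen::Affine3d &pose,
                     const std::string &filename)
{
  pcl::PointCloud<pcl::PointXYZ> world_cloud;
  pcl::transformPointCloud (cloud, world_cloud, pose.cast<float> ());
  // Saving a few of these and viewing them together (e.g. with pcl_viewer)
  // should show the views lining up into one consistent scene; if they don't,
  // the poses are probably inverted (world->camera instead of camera->world).
  pcl::io::savePCDFileBinary (filename, world_cloud);
}
```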

@sdmiller
Owner

Closing, as I haven't heard back in over a year and assume the issue is fixed. Anyone with similar issues may note that the current ./integrate executable includes some helpful parameters for scaling to millimeters, scaling poses, etc. The ./get_intrinsics executable can also help you figure out whether the default focal parameters are reasonable.
