
Sampling with Blensor #6

Closed

chenzhaiyu opened this issue Mar 28, 2021 · 4 comments
@chenzhaiyu

Hi @ErlerPhilipp,

In the paper, you state:

As a pre-processing step, we center all meshes at the origin and scale them uniformly to fit within the unit cube, ..., For each scan, we place the scanner at a random location on a sphere centered at the origin, ..., The scanner is oriented to point at a location with small random offset from the origin, ..., and rotated randomly around the view direction.

I wonder how this corresponds to the code:

# Vector and Quaternion come from Blender's mathutils module
# keep the scanner fixed at the origin ...
scanner.location = Vector([0.0, 0.0, 0.0])

# ... and move/rotate the object into a new pose for each scan instead
obj_object.location = Vector(obj_locations[i])
obj_object.rotation_quaternion = Quaternion(obj_rotations[i])
do_scan(scanner, evd_file)

which to me seems to say that the scanner stays at the origin while the mesh is moved and rotated around it.

Any hint? Thanks in advance!

@ErlerPhilipp (Owner)

@chenzhaiyu Yes, you are right that this doesn't match. I described a simplification in the paper, but the result is the same: an object scanned from multiple directions. The transformations applied to the object are later reverted when assembling the point cloud from the individual scans; a rough sketch of that inverse step follows below.
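For illustration only, a minimal sketch of that inverse step, assuming each scan's points are kept as a NumPy array together with the object location and quaternion used for it (function and variable names are hypothetical, not the repo's actual code):

import numpy as np
from scipy.spatial.transform import Rotation

def assemble_point_cloud(scan_points, obj_locations, obj_rotations):
    # scan_points: list of (N_i, 3) arrays in world coordinates
    # obj_locations: per-scan object translation
    # obj_rotations: per-scan object quaternion, stored as (w, x, y, z)
    parts = []
    for pts, loc, quat in zip(scan_points, obj_locations, obj_rotations):
        w, x, y, z = quat
        rot = Rotation.from_quat([x, y, z, w])  # SciPy expects (x, y, z, w)
        # undo the object transform: remove the translation, then the rotation
        parts.append(rot.inv().apply(np.asarray(pts) - np.asarray(loc)))
    return np.concatenate(parts, axis=0)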

I tried to do it as written in the paper, but even after a few days I still couldn't figure out which transformation(s) Blensor applies to the points. They seem to depend on the scanner, but not simply via its inverse transformation. So I went with a simpler working solution: moving the object instead of the camera.

If you can find a fix for this, please submit a pull request.

@chenzhaiyu (Author)

Thanks for your quick reply! Unfortunately, this workaround gets in the way of what I'd like to do...

I want to create samplings with certain perspective(s) excluded; imagine an MVS point cloud of a building that lacks the bottom view. With the approach described in the paper, this scenario seems easy to simulate by restricting the scanner's location (e.g. to the upper part of the sphere). With the current workaround, where the object is both moved and rotated randomly, do you still see an easy way to do that?

BTW, I'm also curious how P2S can handle this kind of incomplete point cloud (whose GT mesh should be solid). Specifically, do you think the global subsample can encode the coarse shape information despite the missing points?

@ErlerPhilipp (Owner) commented Mar 28, 2021

I've seen a setting somewhere in Blensor that automatically centers the created point cloud at the origin. This could be the reason for the strange transformation. Disabling it is surely worth a try and would be the cleaner solution.

Otherwise, you can modify the object movement and restrict the random rotation to certain axes and ranges. You would need to point the roof towards the camera and then apply at most 90 degrees of rotation around the side axis, roughly as in the sketch below.
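A sketch only, assuming the roof points along +Z and the scanner looks along -Y; the axis choices and the helper name are illustrative, not the repo's code:

import random
from math import radians
from mathutils import Quaternion

def random_restricted_rotation():
    # rotate +Z (the roof) onto -Y so the roof faces the scanner
    face_camera = Quaternion((1.0, 0.0, 0.0), radians(90.0))
    # tilt by at most 90 degrees around the side axis (X) so the bottom is never scanned
    tilt = Quaternion((1.0, 0.0, 0.0), radians(random.uniform(-90.0, 90.0)))
    # free spin around the object's own up axis, applied before facing the camera
    spin = Quaternion((0.0, 0.0, 1.0), radians(random.uniform(0.0, 360.0)))
    # mathutils applies the right-most rotation first; Blensor (Blender 2.79) composes with *
    return tilt * face_camera * spin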

The global subsample is there exactly to encode the boundary of a solid. It's not perfect, of course. I did some overfitting on an incomplete point cloud, with reconstruction in a full but low-resolution grid, and was able to get all holes right where the (stupid and simple) sign-propagation heuristic would lead to large errors. So yes, the network can learn to complete such point clouds. You may encounter under-sampling, though.

@chenzhaiyu (Author)

Otherwise, you can modify the object movement and restrict the random rotation to certain axes and ranges. You would need to point the roof towards the camera and then apply at most 90 degrees of rotation around the side axis.

This worked well for me.

I did some overfitting on an incomplete point cloud, with reconstruction in a full but low-resolution grid, and was able to get all holes right where the (stupid and simple) sign-propagation heuristic would lead to large errors. So yes, the network can learn to complete such point clouds.

I've also trained P2S for a while on a building dataset with the bottom points completely removed. Indeed, it still reconstructs solid models very well, though it might simply be remembering where the bottom is and appending it.

I'll dig deeper into the cleaner alternative with Blensor when I have the time. Thanks again! Closing for now :)
