
Select part of pointcloud for meshing #217

Closed
pr0gr8mm3r opened this issue Aug 28, 2018 · 11 comments

Comments

@pr0gr8mm3r

I'm wondering if there is a way to use only part of the generated point cloud for meshing. I had to reduce the maximum number of points used for meshing to 2,000,000, as I don't have enough RAM for more (#195). The problem is that my point cloud includes a lot of the room I scanned my object in. As the object has a higher point density than its surroundings, its quality is reduced a lot.
Possible solutions would be to delete part of the cloud or to prioritize regions with high density. Does anyone know how to do this? Thanks in advance.
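As an aside, the density-prioritization idea described above can be sketched outside Meshroom with plain NumPy. This is a hypothetical illustration, not a Meshroom feature; the `radius` and `min_neighbors` values are arbitrary and scene-dependent:

```python
import numpy as np

def filter_by_density(points, radius=0.1, min_neighbors=5):
    """Keep only points that have at least `min_neighbors` other points
    within `radius` (brute force; fine for small clouds)."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Count neighbors within the radius, excluding the point itself.
    counts = (dists < radius).sum(axis=1) - 1
    return points[counts >= min_neighbors]

# Dense cluster near the origin (the object) plus sparse outliers (the room).
rng = np.random.default_rng(0)
cluster = rng.normal(0.0, 0.02, size=(50, 3))
outliers = rng.uniform(5.0, 10.0, size=(5, 3))
cloud = np.vstack([cluster, outliers])

dense = filter_by_density(cloud, radius=0.1, min_neighbors=5)
```

For real SfM clouds a spatial index (e.g. a k-d tree) would replace the quadratic distance matrix, but the filtering principle is the same.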

@fabiencastan
Member

Yes, that's of course an important feature, but it is not yet implemented.

@renanmgs

Any updates on this? This is really important; it feels like a vital feature.

@fabiencastan
Member

You should try the new release: https://github.com/alicevision/meshroom/releases/tag/v2019.1.0
You cannot directly edit the bounding box, but you can adjust the "Min Observations Angle For SfM Space Estimation" parameter on the Meshing node, which should work well for your use case.

@Khojanator

@fabiencastan I'm currently testing Meshroom, and my experience with the software has been really good! I'm interested in this feature (bounding box / reconstruction region) as well and would love to know if there's some way I can help develop it.

@natowi
Member

natowi commented Dec 14, 2019

@Khojanator Image masking could be used to filter features before reconstruction (#708), but a generic background removal tool for bulk mask generation (#713) is still missing.

Of course, being able to select a part of the SfM point cloud for reconstruction could still be useful in some cases.

@fabiencastan
Member

@Khojanator Yes, of course. We can set up a call to discuss how to implement it.

@Khojanator

@natowi @fabiencastan Thanks for getting back to me. For some reason, I didn't get notified even though there was an at-mention...
Anyway, image masking isn't the best option for me, since I'm trying to do a full-body scan using a rig, something similar to this: https://web.twindom.com/twinstant-mobile-full-body-3d-scanner/. A person stands in the center, and multiple images are taken from each direction. What works really well here is having distinct features in the background, which leads to better SfM results, so I feel that image masking would give a worse result. That said, I'm also a novice in this area, so please correct me if I'm wrong.
Once I can consistently get the point clouds of the scans produced in the same location/orientation, a bounding box/reconstruction region will let me consistently select the region for MVS where the person is. Thoughts?
Let's find a time to chat further and set up a call!
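The bounding-box crop being discussed here reduces to a simple per-point test once the clouds share a coordinate frame. A minimal sketch in NumPy, with hypothetical box coordinates (units are scene-dependent, not Meshroom values):

```python
import numpy as np

def crop_to_box(points, box_min, box_max):
    """Keep only points inside an axis-aligned bounding box."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[inside]

# A person-sized box at the rig's center (illustrative coordinates).
cloud = np.array([[0.0, 1.0,  0.0],   # on the subject
                  [0.1, 0.5, -0.1],   # on the subject
                  [3.0, 1.0,  2.0]])  # background wall
box_min = np.array([-0.5, 0.0, -0.5])
box_max = np.array([ 0.5, 2.0,  0.5])

cropped = crop_to_box(cloud, box_min, box_max)
```

The hard part, as noted above, is getting every scan into the same location/orientation so a single fixed box works across captures.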

@fabiencastan
Member

With image masking you can still use all the feature points (without masking) for the SfM, and then apply the mask only in the depth maps.
You can contact me at fabien.castan[at]mikrosimage.eu to set up a call in January.

@NexTechAR-Scott

I agree: a bounding box is a huge need.

It can drastically reduce processing time and can eliminate the need for cleaning the resulting mesh.

For me, the two best tools are RealityCapture and Meshroom.

RC is stupid fast but lacks point cloud editing capabilities, which is somewhat mitigated by having bounding box control.

Meshroom is full of tweak options, but the glaring miss for me is point cloud editing and a bounding box.

Either one (or both) would make Meshroom the most robust CLI solution out there.

I've been chasing an interrupt in the pipe to bring the SfM.abc output into a third-party tool like Blender, clean up the point cloud there, then bring it back into Meshroom to complete my pipeline.

The blocker is that Meshroom won't process that edited Alembic file: it does not throw an error, it just won't process it.

I can only assume it's something about the Blender Alembic export that is not structured the way Meshroom needs it to be.

If anyone has some advice on the abc format that Meshroom expects I’d be grateful to hear it.

@fabiencastan
Member

> I can only assume it’s something about the Blender alembic that is not structured the way Meshroom needs it to be.

It is not possible because we maintain visibility information in the ABC file (a notion specific to photogrammetry). It would be possible to create a node to re-import an externally modified point cloud and remap the 3D point visibilities onto it (as we do with meshes, where we allow retexturing an externally modified mesh).
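The remapping such a re-import node would need can be sketched as a nearest-neighbor match from each edited point back to the original cloud, carrying over its per-point visibility (the list of cameras that observe it). This is a hypothetical illustration of the idea, not Meshroom code; the function name and `max_dist` tolerance are invented for the example:

```python
import numpy as np

def remap_visibility(original_pts, original_vis, edited_pts, max_dist=1e-3):
    """For each edited point, copy the visibility list (camera IDs) of its
    nearest original point, if that point lies within `max_dist`."""
    remapped = []
    for p in edited_pts:
        d = np.linalg.norm(original_pts - p, axis=1)
        i = int(np.argmin(d))
        # Points with no close match (e.g. newly created ones) get no cameras.
        remapped.append(original_vis[i] if d[i] <= max_dist else [])
    return remapped

original_pts = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])
original_vis = [[0, 1], [1, 2], [2, 3]]   # camera IDs observing each point

# Edited cloud: the second point was deleted, the others survive
# with tiny numeric drift from the round-trip.
edited_pts = np.array([[0.0, 0.0, 1e-6],
                       [0.0, 1.0, 0.0]])

vis = remap_visibility(original_pts, original_vis, edited_pts)
```

Deleted points simply drop out of the result, which is exactly what an external clean-up pass wants.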

@Khojanator

@fabiencastan sorry for the message here. I tried reaching out to you over fabien.castan[at]mikrosimage.eu, but got an address not found error. Is there a better way for us to get in touch? Feel free to send me an email at ahsan.khoja[at]gmail.com. I'd love to get this project going! Thanks!

No branches or pull requests

6 participants