Enable Sample Points from OBJ and index point to face #1571

Open
justinhchae opened this issue Jun 22, 2023 · 1 comment
@justinhchae

🚀 Feature

Enable users to sample points from any data structure that defines a mesh according to the obj format. In addition, provide the following capabilities (a usage sketch follows the list):

  1. for each sampled point, provide a mappers index that links the point back to the face it was sampled from
  2. allow a user to force the sampler to return at least one point per face; for faces with zero area, this point is effectively the barycenter of the face's vertices
  3. allow a user to specify a relative density for the desired point cloud instead of a specific number of points, and return point samples in proportion to face area
  4. if the input obj mesh references multiple texture files, ensure that the textures in the resulting point cloud come from the appropriate files.
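
A minimal usage sketch, assuming the function name and keyword arguments described in the fork below (sample_points_from_obj, sample_all_faces, min_sampling_factor); the input form (an obj file path) and the return ordering shown here are assumptions, not the final API:

```python
# Illustrative only: sample_points_from_obj, sample_all_faces, and
# min_sampling_factor come from the proposed fork; the input form and the
# return ordering shown here are assumptions.
from pytorch3d.ops import sample_points_from_obj  # proposed in the fork

points, normals, textures, mappers = sample_points_from_obj(
    "city_tile.obj",
    sample_all_faces=True,    # at least one point per face (barycenter for zero-area faces)
    min_sampling_factor=2.0,  # relative density instead of a fixed point count
)

# mappers[i] is the index of the face that points[i] was sampled from.
```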

A working fork that implements this feature is available for evaluation at esri/pytorch3d.

Background on this feature and related issues is described further at geoai/pytorch3d.

Key changes to the API include (described at https://medium.com/geoai/geoai-in-3d-with-pytorch3d-ec7a88add06):

  1. multitexture-obj-io-support: This branch establishes multitexture support for obj meshes by modifying pytorch3d.io.obj_io. Currently, PyTorch3D does not fully support reading and writing obj meshes with multiple texture files; only the first of many textures is read into memory. For meshes that reference varying textures across multiple files, texture sampling can then produce undesirable results, e.g., vegetation textures sampled onto building faces. Specifically, we created pytorch3d.io.obj_io.subset_obj and modified pytorch3d.io.obj_io.save_obj to implement these features. In addition, a new utility module, pytorch3d.utils.obj_utils, consolidates multiple helper functions used to support obj sub-setting and validation operations. This branch and the following branch are updated for recent changes to the API that support I/O for face vertex normals as of PyTorch3D release 0.7.4. Addresses multiple existing issues, including #694, #1017, and #1277.

  2. multitexture-obj-point-sampler: This branch includes all changes in multitexture-obj-io-support and adds support for sampling points directly from meshes in obj format. It introduces a new function, pytorch3d.ops.sample_points_from_obj, that leverages core functions already present in pytorch3d.ops.sample_points_from_meshes. Sampling points directly from an obj that has many large texture files can be advantageous over a Meshes data structure, since the Meshes structure concatenates textures in memory. There are three key features to highlight. First, this branch allows both sample_points_from_meshes and sample_points_from_obj to return a mappers tensor that links each sampled point to the face it was sampled from. Second, it allows sample_points_from_obj to be forced to return at least one point from every face, regardless of face area, with sample_all_faces=True. Third, it allows a user to specify the density of the desired point cloud with min_sampling_factor rather than a fixed number of points. A minimal sketch of the underlying sampling idea follows this list.
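
For reference, here is a minimal, self-contained sketch of the underlying sampling idea in plain PyTorch (not code from the fork): faces are drawn with probability proportional to their area, points are placed at uniform random barycentric coordinates, and the sampled face indices are kept as the mappers tensor.

```python
import torch

def sample_points_with_mappers(verts, faces, num_samples):
    """verts: (V, 3) float tensor; faces: (F, 3) long tensor of vertex indices."""
    tris = verts[faces]                      # (F, 3, 3) triangle vertex positions
    v0, v1, v2 = tris.unbind(dim=1)          # each (F, 3)
    areas = 0.5 * torch.linalg.cross(v1 - v0, v2 - v0).norm(dim=1)  # (F,)

    # Draw faces with probability proportional to area, so larger faces get more points.
    mappers = torch.multinomial(areas, num_samples, replacement=True)  # (N,)

    # Uniform random barycentric coordinates on each sampled triangle.
    u = torch.rand(num_samples)
    v = torch.rand(num_samples)
    flip = (u + v) > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    w = 1.0 - u - v

    points = (u[:, None] * v0[mappers]
              + v[:, None] * v1[mappers]
              + w[:, None] * v2[mappers])    # (N, 3)
    return points, mappers
```

In the fork, sample_all_faces=True would additionally guarantee at least one point per face (which the multinomial draw alone does not), and min_sampling_factor would derive the number of samples from a relative density rather than a fixed count.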

Motivation

Learning models of buildings and other structures of interest from obj files that represent cityscapes, for example, is an interesting problem. In some cases, we can render the mesh directly and apply related techniques to classify or segment it. In other cases, it can be advantageous to sample a point cloud from the mesh and apply the point classifications back to the mesh. These are just two of many ways to approach meshes, and different algorithms come with different pros and cons. Importantly, the flexibility to treat mesh problems as either mesh or point cloud problems can be helpful to any audience.

As a result, if one wants to treat a mesh segmentation problem as a point cloud classification problem, then the following are important to consider (a sketch of transferring per-point labels back to faces follows the list):

  • the point cloud should represent every face in the input mesh with at least one point
  • faces should be sampled with a number of points proportional to face area; this is especially important for relatively large faces, which need more points to provide enough random samples to simulate a real point cloud
  • the points and the meshes should represent the original vertex geometries as closely as possible (without rounding values)
  • a user should be able to define the relative density of the desired point cloud rather than a fixed number of points to sample; this is important because it is not always clear whether a fixed sample count will produce a point cloud dense enough to learn features from
  • when working with meshes that reference more than one large texture file, the textures are concatenated into a single Meshes structure; for a modestly sized obj, this can lead to out-of-memory issues even on machines with large CPU memory (> 128 GB) and is not feasible on most commercial GPUs. As a result, it can be exceedingly difficult to sample such meshes; providing a way to sample meshes defined by a series of arrays (as obj faces, verts, and aux are) lets users perform this type of sampling
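
To make the point-to-face transfer concrete, here is a short sketch (not code from the fork; the helper name is hypothetical) of how the mappers tensor lets per-point predictions be aggregated back onto the originating faces by majority vote. With sample_all_faces=True, every face receives at least one point and therefore at least one vote.

```python
import torch
import torch.nn.functional as F

def face_labels_from_point_labels(point_labels, mappers, num_faces, num_classes):
    """point_labels: (N,) long class per point; mappers: (N,) long face index per point."""
    # Accumulate one-hot class votes per face, then take the majority class.
    votes = torch.zeros(num_faces, num_classes)
    votes.index_add_(0, mappers, F.one_hot(point_labels, num_classes).float())
    return votes.argmax(dim=1)  # (num_faces,) predicted class per face
```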

Pitch

Mesh segmentation and classification are incredibly interesting problems, and the community can benefit from additional tools that serve multiple areas of research. The ability to flexibly treat meshes as point clouds and transfer information between the two is one such tool. The working implementation linked above demonstrates how it can work.

@ardianumam

Very useful feature. Will vote this up!
