The main goal of this research is to accurately complete the occluded parts of objects in 3D scans and to separate them from the environment so that they become interactive.
Completing 3D models of partially scanned objects, both by using existing geometry completion networks and by developing a new texture completion method. The main contribution will be using the existing partial texture data to complete the rest of the mesh.
Use existing 3D geometry generators; since they all rely on either SDFs or point clouds, we will evaluate them separately and will need to convert the meshes between representations.
- AutoSDF (2022): VQ-VAE & transformer
- Marching cubes
- IF-Net: Implicit Feature Networks for Texture Completion from Partial 3D Data (2020): completes partial scans of humans, both geometry and texture
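Since these generators consume either SDFs or point sets, the scanned meshes have to be converted first. Below is a minimal sketch of one direction of that conversion, mesh to point cloud via area-weighted barycentric sampling; the function name and parameters are my own, not from any of the papers above.

```python
import numpy as np

def sample_mesh_surface(vertices, faces, n_points, rng=None):
    """Uniformly sample points on a triangle mesh surface.

    vertices: (V, 3) float array, faces: (F, 3) int array.
    Returns an (n_points, 3) array of surface samples.
    """
    rng = np.random.default_rng(rng)
    tris = vertices[faces]                      # (F, 3, 3)
    # Triangle areas via the cross product of two edge vectors.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Pick triangles proportionally to their area.
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates (square-root trick).
    u = np.sqrt(rng.random(n_points))
    v = rng.random(n_points)
    a, b, c = tris[idx, 0], tris[idx, 1], tris[idx, 2]
    return (1 - u)[:, None] * a + (u * (1 - v))[:, None] * b + (u * v)[:, None] * c
```

The reverse direction (SDF grid to mesh) is exactly what marching cubes provides.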
Converting a scanned environment into a collection of environment and object models that are filled in from all sides, including occluded regions.
Combine the object completion with the segmentation model and the room cleanup model to create a full room with movable furniture.
3D Semantic Scene Completion: A Survey
- Replica-Dataset 500
- Matterport3d
- ScanNet
- 3D-front
- Hypersim
- [ScanNet++](https://cy94.github.io/scannetpp)
- CIRCLE: Convolutional Implicit Reconstruction and Completion for Large-scale Indoor Scene
CIRCLE is a framework for large-scale scene completion and geometric refinement based on local implicit signed distance functions.
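To illustrate the implicit-SDF representation CIRCLE builds on: the decoder is queried per point, and the surface is the zero level set. The sketch below stands in an analytic sphere SDF for the learned network; all names are illustrative, not CIRCLE's API.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Evaluate the SDF on a regular voxel grid, the way an implicit decoder
# would be queried per point (an analytic sphere replaces the network).
res = 32
lin = np.linspace(-1, 1, res)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid.reshape(-1, 3), np.zeros(3), 0.5).reshape(res, res, res)
occupancy = sdf < 0  # inside mask; the surface is the zero level set
```

Running marching cubes on `sdf` at level 0 would recover the sphere mesh.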
- SG-NN: Sparse Generative Neural Networks for Self-Supervised Scene Completion of RGB-D Scans (2020)
- SPSG: Self-Supervised Photometric Scene Generation from RGB-D Scans (2021)
SPSG completes RGB-D scans by generating new camera views in occluded areas; it uses per-voxel color information and a TSDF.
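The per-voxel color plus TSDF representation amounts to a standard weighted running-average fusion update per voxel. The sketch below shows that generic update, not SPSG's actual code; the names and default values are illustrative.

```python
import numpy as np

def integrate_voxel(tsdf, weight, color, sdf_obs, color_obs, trunc=0.1, max_w=64):
    """Fuse one depth/color observation into a voxel's running averages.

    tsdf, weight: current truncated SDF value and integration weight.
    color: current (3,) RGB average; sdf_obs: observed signed distance
    along the camera ray; color_obs: observed (3,) RGB sample.
    """
    d = np.clip(sdf_obs / trunc, -1.0, 1.0)   # truncate and normalize
    w_new = min(weight + 1, max_w)            # cap so the map stays adaptive
    tsdf_new = (tsdf * weight + d) / (weight + 1)
    color_new = (color * weight + color_obs) / (weight + 1)
    return tsdf_new, w_new, color_new
```

Each new RGB-D frame contributes one such update per voxel it observes; SPSG's self-supervision then renders novel views from the fused volume.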
- RfD-Net: Point Scene Understanding by Semantic Instance Reconstruction (2021)
- Point Scene Understanding via Disentangled Instance Mesh Reconstruction (2022)