3D Machine Learning
In recent years, a tremendous amount of progress has been made in the field of 3D Machine Learning, an interdisciplinary field that fuses computer vision, computer graphics, and machine learning. This repo is derived from my study notes and will be used as a place for triaging new research papers.
I'll use the following icons to differentiate 3D representations:
📷Multi-view Images 👾Volumetric 🎲Point Cloud 💎Polygonal Mesh 💊Primitive-based
To find related papers and their relationships, check out Connected Papers, which provides a neat way to visualize the academic field in a graph representation.
To contribute to this repo, you may add content through pull requests or open an issue to let me know.
We have also created a Slack workspace for people around the globe to ask questions, share knowledge and facilitate collaborations. Together, I'm sure we can advance this field as a collaborative effort. Join the community with this link.
Table of Contents
- 3D Pose Estimation
- Single Object Classification
- Multiple Objects Detection
- Scene/Object Semantic Segmentation
- 3D Geometry Synthesis/Reconstruction
- Texture/Material Analysis and Synthesis
- Style Learning and Transfer
- Scene Synthesis/Reconstruction
- Scene Understanding
To see a survey of RGBD datasets, check out Michael Firman's collection as well as the associated paper, RGBD Datasets: Past, Present and Future. Point Cloud Library also has a good dataset catalogue.
Dataset for IKEA 3D models and aligned images (2013) [Link]
759 images and 219 models including Sketchup (skp) and Wavefront (obj) files, good for pose estimation.
Open Surfaces: A Richly Annotated Catalog of Surface Appearance (SIGGRAPH 2013) [Link]
OpenSurfaces is a large database of annotated surfaces created from real-world consumer photographs. Our annotation framework draws on crowdsourcing to segment surfaces from photos, and then annotate them with rich surface properties, including material, texture and contextual information.
PASCAL3D+ (2014) [Link]
12 categories, on average 3k+ objects per category, for 3D object detection and pose estimation.
ModelNet (2015) [Link]
127,915 3D CAD models from 662 categories
ModelNet10: 4,899 models from 10 categories
ModelNet40: 12,311 models from 40 categories, all uniformly oriented
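ModelNet distributes its meshes as OFF files. Below is a minimal loader sketch (assuming only NumPy, and tolerant of the fused `OFF6449 ...` header lines that some ModelNet files are known to have); treat it as illustrative rather than an official utility.

```python
import numpy as np

def load_off(path):
    """Parse an OFF mesh into vertices (N, 3) and polygonal faces."""
    with open(path) as f:
        tokens = f.read().split()
    if tokens[0] == 'OFF':
        counts, data = tokens[1:4], tokens[4:]
    else:
        # some ModelNet files fuse the header with the counts, e.g. "OFF6449 ..."
        counts, data = [tokens[0][3:], tokens[1], tokens[2]], tokens[3:]
    n_verts, n_faces = int(counts[0]), int(counts[1])
    verts = np.array(data[:3 * n_verts], dtype=float).reshape(n_verts, 3)
    faces, i = [], 3 * n_verts
    for _ in range(n_faces):
        k = int(data[i])                      # number of vertices in this face
        faces.append([int(v) for v in data[i + 1:i + 1 + k]])
        i += k + 1
    return verts, faces
```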
A Large Dataset of Object Scans (2016) [Link]
10K scans in RGBD + reconstructed 3D models in .PLY format.
ObjectNet3D: A Large Scale Database for 3D Object Recognition (2016) [Link]
100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes.
Tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval
Thingi10K: A Dataset of 10,000 3D-Printing Models (2016) [Link]
10,000 models from featured “things” on thingiverse.com, suitable for testing 3D printing techniques such as structural analysis, shape optimization, or solid geometry operations.
ABC: A Big CAD Model Dataset For Geometric Deep Learning [Link][Paper]
This work introduces a dataset for geometric deep learning consisting of over 1 million individual (and high-quality) geometric models, each associated with accurate ground-truth information on the decomposition into patches, explicit sharp feature annotations, and analytic differential properties.
Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data (ICCV 2019)
This work introduces ScanObjectNN, a new real-world point cloud object dataset based on scanned indoor scene data. The comprehensive benchmark in this work shows that this dataset poses great challenges to existing point cloud classification techniques, as objects from real-world scans are often cluttered with background and/or partial due to occlusions. Three key open problems for point cloud object classification are identified, and a new point cloud classification neural network that achieves state-of-the-art performance on classifying objects with cluttered backgrounds is proposed.
VOCASET: Speech-4D Head Scan Dataset (2019) [Link][Paper]
VOCASET is a 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio. The dataset has 12 subjects and 480 sequences of about 3-4 seconds each, with sentences chosen from an array of standard protocols that maximize phonetic diversity.
3D-FUTURE: 3D FUrniture shape with TextURE (2020) [Link]
3D-FUTURE contains 20,000+ clean and realistic synthetic scenes in 5,000+ diverse rooms, which include 10,000+ unique high-quality 3D instances of furniture with high-resolution informative textures developed by professional designers.
Fusion 360 Gallery Dataset (2020) [Link][Paper]
The Fusion 360 Gallery Dataset contains rich 2D and 3D geometry data derived from parametric CAD models. The Reconstruction Dataset provides sequential construction sequence information from a subset of simple 'sketch and extrude' designs. The Segmentation Dataset provides a segmentation of 3D models based on the CAD modeling operation, including B-Rep format, mesh, and point cloud.
Combinatorial 3D Shape Dataset (2020) [Link][Paper]
Combinatorial 3D Shape Dataset is composed of 406 instances of 14 classes. Each object in our dataset is equivalent to a sequence of primitive placements. Compared to other 3D object datasets, our proposed dataset contains an assembling sequence of unit primitives, which means a sequential generation process mirroring human assembly can be obtained directly. Furthermore, we can sample valid random sequences from a given combinatorial shape after validating the sampled sequences. To sum up, the characteristics of our combinatorial 3D shape dataset are (i) combinatorial, (ii) sequential, (iii) decomposable, and (iv) manipulable.
SUNRGB-D 3D Object Detection Challenge [Link]
19 object categories for predicting a 3D bounding box in real-world dimensions
Training set: 10,355 RGB-D scene images; testing set: 2,860 RGB-D images
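Challenges like this are typically scored by 3D intersection-over-union between predicted and ground-truth boxes. As a rough sketch for the axis-aligned case (the actual benchmark evaluates oriented boxes, so this is illustrative only):

```python
import numpy as np

def iou_3d_axis_aligned(center_a, size_a, center_b, size_b):
    """IoU of two axis-aligned 3D boxes, each given as center (3,) and size (3,)."""
    lo = np.maximum(center_a - size_a / 2, center_b - size_b / 2)
    hi = np.minimum(center_a + size_a / 2, center_b + size_b / 2)
    inter = np.clip(hi - lo, 0, None).prod()          # overlap volume
    union = size_a.prod() + size_b.prod() - inter
    return inter / union
```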
SceneNN (2016) [Link]
100+ indoor scene meshes with per-vertex and per-pixel annotation.
ScanNet (2017) [Link]
An RGB-D video dataset containing 2.5 million views in more than 1500 scans, annotated with 3D camera poses, surface reconstructions, and instance-level semantic segmentations.
Matterport3D: Learning from RGB-D Data in Indoor Environments (2017) [Link]
10,800 panoramic views (in both RGB and depth) from 194,400 RGB-D images of 90 building-scale scenes of private rooms. Instance-level semantic segmentations are provided for region (living room, kitchen) and object (sofa, TV) categories.
SUNCG: A Large 3D Model Repository for Indoor Scenes (2017) [Link]
The dataset contains over 45K different scenes with manually created realistic room and furniture layouts. All of the scenes are semantically annotated at the object level.
MINOS: Multimodal Indoor Simulator (2017) [Link]
MINOS is a simulator designed to support the development of multisensory models for goal-directed navigation in complex indoor environments. MINOS leverages large datasets of complex 3D environments and supports flexible configuration of multimodal sensor suites. MINOS supports SUNCG and Matterport3D scenes.
Facebook House3D: A Rich and Realistic 3D Environment (2017) [Link]
House3D is a virtual 3D environment which consists of 45K indoor scenes equipped with a diverse set of scene types, layouts and objects sourced from the SUNCG dataset. All 3D objects are fully annotated with category labels. Agents in the environment have access to observations of multiple modalities, including RGB images, depth, segmentation masks and top-down 2D map views.
HoME: a Household Multimodal Environment (2017) [Link]
HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, and multi-agent learning.
AI2-THOR: Photorealistic Interactive Environments for AI Agents [Link]
AI2-THOR is a photo-realistic interactable framework for AI agents. There are a total of 120 scenes in version 1.0 of the THOR environment, covering four different room categories: kitchens, living rooms, bedrooms, and bathrooms. Each room has a number of actionable objects.
Gibson Environment: Real-World Perception for Embodied Agents (2018 CVPR) [Link]
This platform provides RGB from 1000 point clouds, as well as multimodal sensor data: surface normals, depth, and, for a fraction of the spaces, semantic object annotations. The environment is also RL-ready with physics integrated. Using such datasets can further narrow the discrepancy between virtual environments and the real world.
InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset [Link]
System overview: an end-to-end pipeline to render an RGB-D-inertial benchmark for large-scale interior scene understanding and mapping. The dataset contains 20M images created by the following pipeline: (A) We collect around 1 million CAD models provided by world-leading furniture manufacturers; these models have been used in real-world production. (B) Based on those models, around 1,100 professional designers create around 22 million interior layouts, most of which have been used in real-world decoration. (C) For each layout, we generate a number of configurations to represent different random lightings and to simulate scene change over time in daily life. (D) We provide an interactive simulator (ViSim) to help create ground-truth IMU and event data, as well as monocular or stereo camera trajectories, including hand-drawn, random-walking, and neural-network-based realistic trajectories. (E) All supported image sequences and ground truth are provided.
Large-Scale Point Cloud Classification Benchmark (2017)
Provides a large labelled 3D point cloud data set of natural scenes with over 4 billion points in total, covering a range of diverse urban scenes.
Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling [Link]
3D-FRONT: 3D Furnished Rooms with layOuts and semaNTics [Link]
Contains 10,000 houses (or apartments) and ~70,000 rooms with layout information.
ThreeDWorld (TDW): A High-Fidelity, Multi-Modal Platform for Interactive Physical Simulation [Link]
MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis [Link]
Viewpoints and Keypoints (2015) [Paper]
Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views (2015 ICCV) [Paper]
PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization (2015) [Paper]
Modeling Uncertainty in Deep Learning for Camera Relocalization (2016) [Paper]
Robust camera pose estimation by viewpoint classification using deep learning (2016) [Paper]
Image-based localization using lstms for structured feature correlation (2017 ICCV) [Paper]
Image-Based Localization Using Hourglass Networks (2017 ICCV Workshops) [Paper]
Geometric loss functions for camera pose regression with deep learning (2017 CVPR) [Paper]
Generic 3D Representation via Pose Estimation and Matching (2017) [Paper]
3D Bounding Box Estimation Using Deep Learning and Geometry (2017) [Paper]
6-DoF Object Pose from Semantic Keypoints (2017) [Paper]
Relative Camera Pose Estimation Using Convolutional Neural Networks (2017) [Paper]
3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions (2017) [Paper]
Multi-view Consistency as Supervisory Signal for Learning Shape and Pose Prediction (2018 CVPR) [Paper]
PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes (2018) [Paper]
Feature Mapping for Learning Fast and Accurate 3D Pose Inference from Synthetic Images (2018 CVPR) [Paper]
Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling (2018 CVPR) [Paper]
3D Pose Estimation and 3D Model Retrieval for Objects in the Wild (2018 CVPR) [Paper]
Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects (2018) [Paper]
Object Detection in 3D Scenes Using CNNs in Multi-view Images (2016) [Paper]
DeepContext: Context-Encoding Neural Pathways for 3D Holistic Scene Understanding (2016) [Paper]
SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite (2017) [Paper]
VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection (2017) [Paper]
Frustum PointNets for 3D Object Detection from RGB-D Data (CVPR2018) [Paper]
A^2-Net: Molecular Structure Estimation from Cryo-EM Density Volumes (AAAI2019) [Paper]
Stereo R-CNN based 3D Object Detection for Autonomous Driving (CVPR2019) [Paper]
Unsupervised Co-Segmentation of a Set of Shapes via Descriptor-Space Spectral Clustering (2011) [Paper]
Learning Hierarchical Shape Segmentation and Labeling from Online Repositories (2017) [Paper]
We propose a pointwise convolution that performs on-the-fly voxelization for learning local features of a point cloud.
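A minimal sketch of that idea, with illustrative names (`cell_size` and `kernel` are not the paper's API) and a 3×3×3 window per point: each point bins its neighbors into cells on the fly, averages their features per cell, then applies a learned kernel.

```python
import numpy as np

def pointwise_conv(points, feats, kernel, cell_size=0.1):
    """points: (N,3); feats: (N,C); kernel: (3,3,3,C) -> (N,) responses."""
    out = np.zeros(len(points))
    for i, p in enumerate(points):
        offsets = points - p
        cells = np.floor(offsets / cell_size + 1.5).astype(int)  # center cell = 1
        mask = np.all((cells >= 0) & (cells <= 2), axis=1)       # keep 3x3x3 window
        acc = np.zeros((3, 3, 3, feats.shape[1]))
        cnt = np.zeros((3, 3, 3, 1))
        for c, f in zip(cells[mask], feats[mask]):
            acc[tuple(c)] += f
            cnt[tuple(c)] += 1
        mean = acc / np.maximum(cnt, 1)                          # per-cell average
        out[i] = np.sum(mean * kernel)                           # convolve with kernel
    return out
```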
We propose an efficient yet robust technique for on-the-fly dense reconstruction and semantic segmentation of 3D indoor scenes. Our method is built atop an efficient super-voxel clustering method and a conditional random field with higher-order constraints from structural and object cues, enabling progressive dense semantic segmentation without any precomputation.
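Setting the paper's higher-order structural and object cues aside, the pairwise core of such a CRF can be approximated with mean-field updates over super-voxels; a hypothetical sketch:

```python
import numpy as np

def mean_field_step(unary, adj, compat):
    """One mean-field update for a pairwise CRF over super-voxels.
    unary: (N,L) negative log unaries; adj: (N,N) 0/1 super-voxel adjacency;
    compat: (L,L) label-compatibility penalty. Returns updated marginals."""
    q = np.exp(-unary)
    q /= q.sum(1, keepdims=True)       # current per-super-voxel marginals
    pairwise = adj @ q @ compat        # messages aggregated from neighbors
    q = np.exp(-unary - pairwise)
    return q / q.sum(1, keepdims=True)
```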
We jointly address the problems of semantic and instance segmentation of 3D point clouds with a multi-task pointwise network that simultaneously performs two tasks: predicting the semantic classes of 3D points and embedding the points into high-dimensional vectors so that points of the same object instance are represented by similar embeddings. We then propose a multi-value conditional random field model to incorporate the semantic and instance labels and formulate the problem of semantic and instance segmentation as jointly optimising labels in the field model.
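The embedding objective ("points of the same instance map to similar vectors") is commonly realized with a hinged pull/push loss; a PyTorch sketch under that assumption (the margins `delta_v` and `delta_d` are illustrative, not the paper's values):

```python
import torch

def discriminative_loss(emb, inst, delta_v=0.5, delta_d=1.5):
    """emb: (N,D) point embeddings; inst: (N,) integer instance labels."""
    means, pull = [], 0.0
    for i in inst.unique():
        e = emb[inst == i]
        mu = e.mean(0)
        means.append(mu)
        # pull: penalize points farther than delta_v from their instance mean
        pull = pull + ((e - mu).norm(dim=1) - delta_v).clamp(min=0).pow(2).mean()
    means = torch.stack(means)
    k = len(means)
    if k < 2:
        return pull / k
    # push: penalize pairs of instance means closer than 2 * delta_d
    dists = torch.cdist(means, means)
    off_diag = ~torch.eye(k, dtype=torch.bool)
    push = (2 * delta_d - dists[off_diag]).clamp(min=0).pow(2).mean()
    return pull / k + push
```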
We propose an efficient end-to-end permutation invariant convolution for point cloud deep learning. We use statistics from concentric spherical shells to define representative features and resolve the point order ambiguity, allowing traditional convolution to perform efficiently on such features.
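A sketch of the shell idea (sizes are illustrative, and the pooling choice is an assumption): sort neighbors by distance to the query, partition them into concentric shells, and pool within each shell, which removes point-order ambiguity and yields a fixed inner-to-outer sequence that an ordinary 1D convolution can consume.

```python
import numpy as np

def shell_features(query, points, feats, n_shells=4, pts_per_shell=8):
    """Returns (n_shells, C) features ordered inner -> outer around `query`."""
    order = np.argsort(np.linalg.norm(points - query, axis=1))
    out = np.zeros((n_shells, feats.shape[1]))
    for s in range(n_shells):
        idx = order[s * pts_per_shell:(s + 1) * pts_per_shell]
        if len(idx):
            out[s] = feats[idx].max(0)   # symmetric pooling kills point order
    return out                           # ready for a 1D conv across shells
```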
We introduce a novel convolution operator for point clouds that achieves rotation invariance. Our core idea is to use low-level rotation invariant geometric features such as distances and angles to design a convolution operator for point cloud learning.
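To see why such features are rotation invariant, consider a sketch that describes each neighbor only through distances and one angle, all of which are unchanged by any global rotation (the exact feature set here is illustrative, not the paper's):

```python
import numpy as np

def rotation_invariant_features(p, neighbors):
    """p: (3,) reference point; neighbors: (K,3). Returns (K,3) features."""
    c = neighbors.mean(0)                 # local centroid as a reference
    ref = c - p
    feats = []
    for x in neighbors:
        a, b = x - p, x - c
        d0, d1, d2 = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(ref)
        cos = a @ ref / (d0 * d2 + 1e-9)  # angle to the reference direction
        feats.append([d0, d1, cos])
    return np.asarray(feats)
```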
FLAME: Faces Learned with an Articulated Model and Expressions (2017) [Paper][Code (Chumpy)][Code (TF)] [Code (PyTorch)]
FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. The model combines a linear identity shape space (trained from 3800 scans of human heads) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. The code demonstrates how to 1) reconstruct textured 3D faces from images, 2) fit the model to 3D landmarks or registered 3D meshes, or 3) generate 3D face templates for speech-driven facial animation.
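The identity and expression components are plain linear blendshapes; a minimal sketch of that part (skinning of the articulated neck/jaw/eyeballs and the pose correctives are omitted, and the names here are illustrative):

```python
import numpy as np

def morph(template, shape_dirs, expr_dirs, beta, psi):
    """template: (V,3) mean head; shape_dirs: (V,3,S); expr_dirs: (V,3,E);
    beta: (S,) identity coefficients; psi: (E,) expression coefficients."""
    return template + shape_dirs @ beta + expr_dirs @ psi   # (V,3) vertices
```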
The Space of Human Body Shapes: Reconstruction and Parameterization from Range Scans (2003) [Paper]
Category-Specific Object Reconstruction from a Single Image (2014) [Paper]
Model Composition from Interchangeable Components (2007) [Paper]
Data-Driven Suggestions for Creativity Support in 3D Modeling (2010) [Paper]
Photo-Inspired Model-Driven 3D Object Modeling (2011) [Paper]
Probabilistic Reasoning for Assembly-Based 3D Modeling (2011) [Paper]
A Probabilistic Model for Component-Based Shape Synthesis (2012) [Paper]
Structure Recovery by Part Assembly (2012) [Paper]
Fit and Diverse: Set Evolution for Inspiring 3D Shape Galleries (2012) [Paper]
AttribIt: Content Creation with Semantic Attributes (2013) [Paper]
Learning Part-based Templates from Large Collections of 3D Shapes (2013) [Paper]
Topology-Varying 3D Shape Creation via Structural Blending (2014) [Paper]
Estimating Image Depth using Shape Collections (2014) [Paper]
Single-View Reconstruction via Joint Analysis of Image and Shape Collections (2015) [Paper]
Interchangeable Components for Hands-On Assembly Based Modeling (2016) [Paper]
Shape Completion from a Single RGBD Image (2016) [Paper]
An energy-based 3D shape descriptor network is a deep energy-based model for volumetric shape patterns. The maximum likelihood training of the model follows an “analysis by synthesis” scheme and can be interpreted as a mode seeking and mode shifting process. The model can synthesize 3D shape patterns by sampling from the probability distribution via MCMC such as Langevin dynamics. Experiments demonstrate that the proposed model can generate realistic 3D shape patterns and can be useful for 3D shape analysis.
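The Langevin sampler mentioned above takes only a few lines; a PyTorch sketch, assuming `energy` is the learned descriptor network mapping a batch of voxel grids to scalar energies (step count and step size are illustrative):

```python
import torch

def langevin_sample(energy, x, n_steps=64, step=0.01):
    """Approximately sample from p(x) proportional to exp(-energy(x))."""
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        # gradient descent on the energy ("mode seeking") plus Gaussian noise
        x = x - 0.5 * step ** 2 * grad + step * torch.randn_like(x)
    return x.detach()
```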
CoMA: Convolutional Mesh Autoencoders (2018) [Paper][Code (TF)][Code (PyTorch)][Code (PyTorch)]
CoMA is a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. CoMA introduces mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model.
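One common realization of such spectral convolutions is the Chebyshev graph convolution; a sketch, assuming the rescaled mesh Laplacian (2L/λmax − I) has been precomputed:

```python
import torch

def cheb_conv(x, L, W):
    """x: (V,Cin) vertex features; L: (V,V) rescaled mesh Laplacian;
    W: (K,Cin,Cout) weights, one slice per Chebyshev polynomial order."""
    K = W.shape[0]
    Tx = [x]                                   # T0(L) x
    if K > 1:
        Tx.append(L @ x)                       # T1(L) x
    for _ in range(2, K):
        Tx.append(2 * (L @ Tx[-1]) - Tx[-2])   # Chebyshev recurrence
    return sum(t @ W[k] for k, t in enumerate(Tx))   # (V,Cout)
```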
VOCA: Voice Operated Character Animation (2019) [Paper][Video][Code]
VOCA is a simple and generic speech-driven facial animation framework that works across a range of identities. The codebase demonstrates how to synthesize realistic character animations given an arbitrary speech signal and a static character mesh.
This paper proposes a deep 3D energy-based model to represent volumetric shapes. The maximum likelihood training of the model follows an “analysis by synthesis” scheme. Experiments demonstrate that the proposed model can generate high-quality 3D shape patterns and can be useful for a wide variety of 3D shape analysis.
Generative PointNet is an energy-based model of unordered point clouds, where the energy function is parameterized by an input-permutation-invariant bottom-up neural network. The model can be trained by MCMC-based maximum likelihood learning, or a short-run MCMC toward the energy-based model as a flow-like generator for point cloud reconstruction and interpolation. The learned point cloud representation can be useful for point cloud classification.
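A permutation-invariant energy function of the kind described reduces to a per-point MLP followed by symmetric pooling; a PyTorch sketch with illustrative layer sizes, which the `langevin_sample` sketch above could sample from unchanged:

```python
import torch
import torch.nn as nn

class PointCloudEnergy(nn.Module):
    """Scalar energy over an unordered point set; permuting the N points
    leaves the output unchanged thanks to the max pooling."""
    def __init__(self, hidden=64):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden))
        self.head = nn.Linear(hidden, 1)

    def forward(self, pts):                          # pts: (B, N, 3)
        h = self.point_mlp(pts).max(dim=1).values    # symmetric pooling
        return self.head(h).squeeze(-1)              # (B,) energies
```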
Shape My Face (SMF) is a point cloud to mesh auto-encoder for the registration of raw human face scans, and the generation of synthetic human faces. SMF leverages a modified PointNet encoder with a visual attention module and differentiable surface sampling to be independent of the original surface representation and reduce the need for pre-processing. Mesh convolution decoders are combined with a specialized PCA model of the mouth, and smoothly blended based on geodesic distances, to create a compact model that is highly robust to noise. SMF is applied to register and perform expression transfer on scans captured in-the-wild with an iPhone depth camera represented either as meshes or point clouds.
Two-Shot SVBRDF Capture for Stationary Materials (SIGGRAPH 2015) [Paper]
Reflectance Modeling by Neural Texture Synthesis (2016) [Paper]
Modeling Surface Appearance from a Single Photograph using Self-augmented Convolutional Neural Networks (2017) [Paper]
High-Resolution Multi-Scale Neural Texture Synthesis (2017) [Paper]
Reflectance and Natural Illumination from Single Material Specular Objects Using Deep Learning (2017) [Paper]
Joint Material and Illumination Estimation from Photo Sets in the Wild (2017) [Paper]
What Is Around The Camera? (2017) [Paper]
TextureGAN: Controlling Deep Image Synthesis with Texture Patches (2018 CVPR) [Paper]
Gaussian Material Synthesis (2018 SIGGRAPH) [Paper]
Non-stationary Texture Synthesis by Adversarial Expansion (2018 SIGGRAPH) [Paper]
Synthesized Texture Quality Assessment via Multi-scale Spatial and Statistical Texture Attributes of Image and Gradient Magnitude Coefficients (2018 CVPR) [Paper]
LIME: Live Intrinsic Material Estimation (2018 CVPR) [Paper]
Single-Image SVBRDF Capture with a Rendering-Aware Deep Network (2018) [Paper]
PhotoShape: Photorealistic Materials for Large-Scale Shape Collections (2018) [Paper]
Learning Material-Aware Local Descriptors for 3D Shapes (2018) [Paper]
FrankenGAN: Guided Detail Synthesis for Building Mass Models using Style-Synchronized GANs (2018 SIGGRAPH Asia) [Paper]
Design Preserving Garment Transfer (2012) [Paper]
Analogy-Driven 3D Style Transfer (2014) [Paper]
Unsupervised Texture Transfer from Images to Model Collections (2016) [Paper]
Learning Detail Transfer based on Geometric Features (2017) [Paper]
Co-Locating Style-Defining Elements on 3D Shapes (2017) [Paper]
Appearance Modeling via Proxy-to-Image Alignment (2018) [Paper]
Automatic Unpaired Shape Deformation Transfer (SIGGRAPH Asia 2018) [Paper]
Interactive Furniture Layout Using Interior Design Guidelines (2011) [Paper]
Synthesizing Open Worlds with Constraints using Locally Annealed Reversible Jump MCMC (2012) [Paper]
Example-based Synthesis of 3D Object Arrangements (2012 SIGGRAPH Asia) [Paper]
Sketch2Scene: Sketch-based Co-retrieval and Co-placement of 3D Models (2013) [Paper]
Action-Driven 3D Indoor Scene Evolution (2016) [Paper]
The Clutterpalette: An Interactive Tool for Detailing Indoor Scenes (2015) [Paper]
Image2Scene: Transforming Style of 3D Room (2015) [Paper]
Relationship Templates for Creating Scene Variations (2016) [Paper]
IM2CAD (2017) [Paper]
Predicting Complete 3D Models of Indoor Scenes (2017) [Paper]
Complete 3D Scene Parsing from Single RGBD Image (2017) [Paper]
Fully Convolutional Refined Auto-Encoding Generative Adversarial Networks for 3D Multi Object Scenes (2017) [Blog]
Adaptive Synthesis of Indoor Scenes via Activity-Associated Object Relation Graphs (2017 SIGGRAPH Asia) [Paper]
Automated Interior Design Using a Genetic Algorithm (2017) [Paper]
SceneSuggest: Context-driven 3D Scene Design (2017) [Paper]
A fully end-to-end deep learning approach for real-time simultaneous 3D reconstruction and material recognition (2017) [Paper]
Deep Convolutional Priors for Indoor Scene Synthesis (2018) [Paper]
Configurable 3D Scene Synthesis and 2D Image Rendering with Per-Pixel Ground Truth using Stochastic Grammars (2018) [Paper]
Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image (ECCV 2018) [Paper]
Language-Driven Synthesis of 3D Scenes from Scene Databases (SIGGRAPH Asia 2018) [Paper]
Deep Generative Modeling for Scene Synthesis via Hybrid Representations (2018) [Paper]
GRAINS: Generative Recursive Autoencoders for INdoor Scenes (2018) [Paper]
SEETHROUGH: Finding Objects in Heavily Occluded Indoor Scene Images (2018) [Paper]
A Survey of 3D Indoor Scene Synthesis (2020) [Paper]
SceneCAD: Predicting Object Alignments and Layouts in RGB-D Scans (2020) [Paper]
Recovering the Spatial Layout of Cluttered Rooms (2009) [Paper]
Characterizing Structural Relationships in Scenes Using Graph Kernels (2011 SIGGRAPH) [Paper]
Understanding Indoor Scenes Using 3D Geometric Phrases (2013) [Paper]
Organizing Heterogeneous Scene Collections through Contextual Focal Points (2014 SIGGRAPH) [Paper]
SceneGrok: Inferring Action Maps in 3D Environments (2014, SIGGRAPH) [Paper]
PanoContext: A Whole-room 3D Context Model for Panoramic Scene Understanding (2014) [Paper]
Learning Informative Edge Maps for Indoor Scene Layout Prediction (2015) [Paper]
Rent3D: Floor-Plan Priors for Monocular Layout Estimation (2015) [Paper]
A Coarse-to-Fine Indoor Layout Estimation (CFILE) Method (2016) [Paper]
DeLay: Robust Spatial Layout Estimation for Cluttered Indoor Scenes (2016) [Paper]
Deep Multi-Modal Image Correspondence Learning (2016) [Paper]
RoomNet: End-to-End Room Layout Estimation (2017) [Paper]
SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite (2017) [Paper]
Cross-Domain Self-supervised Multi-task Feature Learning using Synthetic Imagery (2018 CVPR) [Paper]
Pano2CAD: Room Layout From A Single Panorama Image (2018 CVPR) [Paper]
Automatic 3D Indoor Scene Modeling from Single Panorama (2018 CVPR) [Paper]
PerspectiveNet: 3D Object Detection from a Single RGB Image via Perspective Points (NeurIPS 2019) [Paper]
Holistic++ Scene Understanding: Single-view 3D Holistic Scene Parsing and Human Pose Estimation with Human-Object Interaction and Physical Commonsense (ICCV 2019) [Paper & Code]