- Multi-view Relighting using a Geometry-Aware Network
- GRF: Learning a General Radiance Field for 3D Scene Representation and Rendering
- IBRNet: Learning Multi-View Image-Based Rendering
- NeuTex: Neural Texture Mapping for Volumetric Neural Rendering
- NeX: Real-time View Synthesis with Neural Basis Expansion
- Neural Scene Graphs for Dynamic Scenes
- Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
- NeRF−−: Neural Radiance Fields Without Known Camera Parameters
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images
- RGBD-Net: Predicting color and depth images for novel views synthesis
- Monocular Differentiable Rendering for Self-Supervised 3D Object Detection
- AutoInt: Automatic Integration for Fast Neural Volume Rendering
- Deep Parametric Indoor Lighting Estimation
- Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image
- Continuous Surface Embeddings
- Learned Initializations for Optimizing Coordinate-Based Neural Representations
- A-NeRF: Surface-free Human 3D Pose Refinement via Neural Rendering
- Photorealistic Audio-driven Video Portraits
- NeRF++: Analyzing and Improving Neural Radiance Fields
- Worldsheet: Wrapping the World in a 3D Sheet for View Synthesis from a Single Image
- Extreme View Synthesis
- Deep Multi Depth Panoramas for View Synthesis
- Neural Lumigraph Rendering
- HeadGAN: Video-and-Audio-Driven Talking Head Synthesis
- Generative View Synthesis: From Single-view Semantics to Novel-view Images
- Single-Shot Freestyle Dance Reenactment
- MakeItTalk: Speaker-Aware Talking-Head Animation
- Iso-Points: Optimizing Neural Implicit Surfaces with Hybrid Representations
- Pix2Shape: Towards Unsupervised Learning of 3D Scenes from Images using a View-based Representation
- NeRD: Neural Reflectance Decomposition from Image Collections
- Mixture of Volumetric Primitives for Efficient Neural Rendering
- Mesh Guided One-shot Face Reenactment Using Graph Convolutional Networks
- Object-based Illumination Estimation with Rendering-aware Neural Networks
- Deformable Neural Radiance Fields
- StyleUV: Diverse and High-quality UV Map Generative Model
- Neural Re-Rendering of Humans from a Single Image
- Portrait Neural Radiance Fields from a Single Image
- NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis
- Modular Primitives for High-Performance Differentiable Rendering
- Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
- Equivariant Multi-View Networks
- NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
- Learning Illumination from Diverse Portraits
- Leveraging 2D Data to Learn Textured 3D Mesh Generation
- Neural Sparse Voxel Fields
- Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
- DONeRF: Towards Real-Time Rendering of Neural Radiance Fields using Depth Oracle Networks
- Self-supervised Learning of 3D Objects from Natural Images
- PolyGen: An Autoregressive Generative Model of 3D Meshes
- Learning to Shadow Hand-drawn Sketches
- Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation
- Learning 3D Part Assembly from a Single Image
- Image Animation with Perturbed Masks
- GramGAN: Deep 3D Texture Synthesis From 2D Exemplars
- Stable View Synthesis
- Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation
- DualSDF: Semantic Shape Manipulation using a Two-Level Representation
- Deferred Neural Rendering: Image Synthesis using Neural Textures
- Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
- Transposer: Universal Texture Synthesis Using Feature Maps as Transposed Convolution Filter
- Neural Contours: Learning to Draw Lines from 3D Shapes
- ShaRF: Shape-conditioned Radiance Fields from a Single View
- Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence
- Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination
- Deep Shading: Convolutional Neural Networks for Screen-Space Shading
- MatryODShka: Real-time 6DoF Video View Synthesis using Multi-Sphere Images
- Revealing Scenes by Inverting Structure from Motion Reconstructions
- C3DPO: Canonical 3D Pose Networks for Non-Rigid Structure From Motion
- Layered Neural Rendering for Retiming People in Video
- Bowtie Networks: Generative Modeling for Joint Few-Shot Recognition and Novel-View Synthesis
- PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations
- Neural Rerendering in the Wild
- Neural Head Reenactment with Latent Pose Descriptors
- Polarimetric Multi-View Inverse Rendering
- One-Shot Identity-Preserving Portrait Reenactment
- LIMP: Learning Latent Shape Representations with Metric Preservation Priors
- A Neural Rendering Framework for Free-Viewpoint Relighting
- Neural Volumes: Learning Dynamic Renderable Volumes from Images
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
- Equivariant Neural Rendering
- Neural Unsigned Distance Fields for Implicit Function Learning
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation
- Extending DeepSDF for automatic 3D shape retrieval and similarity transform estimation
- TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting
- Wasserstein Generative Models for Patch-based Texture Synthesis
- Learning a Neural 3D Texture Space from 2D Exemplars
- A Free Viewpoint Portrait Generator with Dynamic Styling
- Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines
- Speech Driven Talking Face Generation from a Single Image and an Emotion Condition
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis
- Deep Illumination: Approximating Dynamic Global Illumination with Generative Adversarial Networks
- Monocular Real-Time Volumetric Performance Capture
- Neural Point-Based Graphics
- Texture Fields: Learning Texture Representations in Function Space
- GPU-Accelerated Mobile Multi-view Style Transfer
- One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing
- Local Deep Implicit Functions for 3D Shape
- Progressive Pose Attention Transfer for Person Image Generation
- Learning View Priors for Single-view 3D Reconstruction
- Neural Hair Rendering
- DeepSurfels: Learning Online Appearance Fusion
- Portrait Shadow Manipulation
- Image-guided Neural Object Rendering
- Implicit Neural Representations with Periodic Activation Functions
- GLoSH: Global-Local Spherical Harmonics for Intrinsic Image Decomposition
- Text-based Editing of Talking-head Video
- Learning Character-Agnostic Motion for Motion Retargeting in 2D
- DRWR: A Differentiable Renderer without Rendering for Unsupervised 3D Structure Learning from Silhouette Images
- Human Motion Transfer from Poses in the Wild
- DeepVoxels: Learning Persistent 3D Feature Embeddings
- Articulation-aware Canonical Surface Mapping
- Deep Geometric Texture Synthesis
- DeRF: Decomposed Radiance Fields
- LOGAN: Unpaired Shape Transform in Latent Overcomplete Space
- pixelNeRF: Neural Radiance Fields from One or Few Images
- Deep Radiance Caching: Convolutional Autoencoders Deeper in Ray Tracing
- Neural 3D Video Synthesis
- Learning to Generate Diverse Dance Motions with Transformer
- Adversarial Texture Optimization from RGB-D Scans
- FACEGAN: Facial Attribute Controllable rEenactment GAN
- Deep View Synthesis via Self-Consistent Generative Network
- Semantic Image Synthesis with Spatially-Adaptive Normalization
- Illumination Decomposition for Photograph with Multiple Light Sources
- Towards Geometry Guided Neural Relighting with Flash Photography
- Canonical Surface Mapping via Geometric Cycle Consistency
- GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
- Learning Inverse Rendering of Faces from Real-world Videos
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images
- X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation
- UnrealText: Synthesizing Realistic Scene Text Images from the UnrealWorld
- Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera
- Learning elementary structures for 3D shape generation and matching
- Neural Re-rendering for Full-frame Video Stabilization
- Novel View Synthesis via Depth-guided Skip Connections
- Let There Be Color! Large-Scale Texturing of 3D Reconstructions
- Continuous Object Representation Networks: Novel View Synthesis without Target View Supervision
- Unsupervised 3D Learning for Shape Analysis via Multiresolution Instance Discrimination
- GAC-GAN: A General Method for Appearance-Controllable Human Video Motion Transfer
- Action2Motion: Conditioned Generation of 3D Human Motions
- Fast Spatially-Varying Indoor Lighting Estimation
- Everybody's Talkin': Let Me Talk as You Want
- Textured Neural Avatars
- AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
- FLNet: Landmark Driven Fetching and Learning Network for Faithful Talking Facial Animation Synthesis
- Cross-Camera Convolutional Color Constancy
- Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer
- High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
- StyleRig: Rigging StyleGAN for 3D Control over Portrait Images
- Invertible Neural BRDF for Object Inverse Rendering
- Transformable Bottleneck Networks
- Learning Implicit Fields for Generative Shape Modeling
- Volumetric Correspondence Networks for Optical Flow
- Large-scale multilingual audio visual dubbing
- HoloGAN: Unsupervised Learning of 3D Representations From Natural Images
- Occlusion-aware 3D Morphable Models and an Illumination Prior for Face Image Analysis
- BlockGAN: Learning 3D Object-aware Scene Representations from Unlabelled Images
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes
- AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis
- Volumetric Capture of Humans with a Single RGBD Camera via Semi-Parametric Learning
- ID-Unet: Iterative Soft and Hard Deformation for View Synthesis
- Semantic Bottleneck Scene Generation
- Non-line-of-Sight Imaging via Neural Transient Fields
- View Independent Generative Adversarial Network for Novel View Synthesis
- Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation
- State of the Art on Neural Rendering
- FaR-GAN for One-Shot Face Reenactment
- Monocular Neural Image Based Rendering with Continuous View Control
- GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields
- NuI-Go: Recursive Non-Local Encoder-Decoder Network for Retinal Image Non-Uniform Illumination Removal
- Single-View View Synthesis with Multiplane Images
- Novel View Synthesis on Unpaired Data by Conditional Deformable Variational Auto-Encoder
- CoReNet: Coherent 3D scene reconstruction from a single RGB image
- Rotationally-Temporally Consistent Novel View Synthesis of Human Performance Video
- Unsupervised Novel View Synthesis from a Single Image
- A Recurrent Transformer Network for Novel View Action Synthesis
- Occupancy Networks: Learning 3D Reconstruction in Function Space
- Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction
- InverseRenderNet: Learning single image inverse rendering
- AUTO3D: Novel view synthesis through unsupervisely learned variational viewpoint and global 3D representation
- Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video
- Texture Mapping for 3D Reconstruction with RGB-D Sensor
- DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
- Generative Adversarial Networks in Human Emotion Synthesis: A Review
- Learning to Factorize and Relight a City
- Better Patch Stitching for Parametric Surface Reconstruction
- Vid2Actor: Free-viewpoint Animatable Person Synthesis from Video in the Wild
- Object-Centric Neural Scene Rendering
- Towards Automatic Face-to-Face Translation
- Neural Light Transport for Relighting and View Synthesis
- Learning to Predict Indoor Illumination from a Single Image
- DSRN: an Efficient Deep Network for Image Relighting
- FastNeRF: High-Fidelity Neural Rendering at 200FPS
- Neural Radiance Flow for 4D View Synthesis and Video Processing
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild
- Relightable 3D Head Portraits from a Smartphone Video
- Curriculum DeepSDF
- Single Image Portrait Relighting
- Free View Synthesis
- Space-time Neural Irradiance Fields for Free-Viewpoint Video
- Light Stage Super-Resolution: Continuous High-Frequency Relighting
- NiLBS: Neural Inverse Linear Blend Skinning
- Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF from a Single Image
- Deep Single-Image Portrait Relighting
- High-Fidelity Neural Human Motion Transfer from Monocular Video
- Learning Compositional Radiance Fields of Dynamic Human Heads
- Neural Volume Rendering: NeRF And Beyond
- ReenactNet: Real-time Full Head Reenactment
- Neural State Machine for Character-Scene Interactions
manjunath5496/Neural-Rendering-Papers
About
"Data Scientists should recall innovation often times is not providing fancy algorithms, but rather value to the customer." ― Damian Mingle