This code trains a color function that maps 3D positions to colors. It is used to estimate per-vertex colors of a 3D reconstruction for which color observations are available. As input, you provide position maps of the target scene along with the corresponding color observations. The color function is a per-position MLP; we use the positional encoding proposed by NeRF (arXiv) as well as its basic MLP network structure.
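The two components can be sketched as follows. This is a minimal PyTorch sketch, not the code in this repo: the function and class names (`positional_encoding`, `ColorMLP`) and the layer sizes are illustrative assumptions; only the encoding scheme itself follows the NeRF formulation (sin/cos at exponentially spaced frequencies, optionally concatenated with the raw input).

```python
import math
import torch

def positional_encoding(x, num_freqs=10, include_input=True):
    """NeRF-style positional encoding.

    Maps each coordinate p to (sin(2^0 pi p), cos(2^0 pi p), ...,
    sin(2^{L-1} pi p), cos(2^{L-1} pi p)), optionally prepending p itself.
    x: [..., 3] positions; returns [..., 3 + 3*2*num_freqs] features.
    """
    freqs = (2.0 ** torch.arange(num_freqs, dtype=x.dtype)) * math.pi  # [L]
    angles = x[..., None] * freqs                                      # [..., 3, L]
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)    # [..., 3, 2L]
    enc = enc.flatten(start_dim=-2)                                    # [..., 3*2L]
    if include_input:
        enc = torch.cat([x, enc], dim=-1)
    return enc

class ColorMLP(torch.nn.Module):
    """Small fully-connected network mapping encoded positions to RGB."""
    def __init__(self, in_dim, hidden=256, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [torch.nn.Linear(d, hidden), torch.nn.ReLU()]
            d = hidden
        layers += [torch.nn.Linear(d, 3), torch.nn.Sigmoid()]  # RGB in [0, 1]
        self.net = torch.nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```

With `num_freqs=10` and the raw position included, each 3D point becomes a 63-dimensional feature vector before entering the MLP.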
Given position maps, you can start training. The figure below shows the training progress for a scan with 89 images (from left to right: position input, predicted texture, ground truth).
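A single training step can be sketched like this. This is an illustrative snippet, not the repo's actual training loop (which follows the Pix2Pix framework): the names `training_step`, `position_map`, `mask`, and the choice of an L1 photometric loss are assumptions made for the sketch.

```python
import torch

def training_step(mlp, encode, position_map, color_image, mask, optimizer):
    """One optimization step of the color function.

    position_map: [H, W, 3] per-pixel surface positions of the target scene.
    color_image:  [H, W, 3] observed colors for the same view.
    mask:         [H, W] bool, True where the position map is valid.
    The MLP is queried at the valid positions and regressed against the
    observed colors with a per-pixel L1 photometric loss.
    """
    pos = position_map[mask]      # [M, 3] valid surface positions
    target = color_image[mask]    # [M, 3] observed RGB values
    pred = mlp(encode(pos))
    loss = torch.nn.functional.l1_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```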
Inference is performed per vertex: the trained MLP is queried once at each vertex position. You can increase the sampling rate via mesh subdivision.
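The per-vertex query and the subdivision step could look like the following sketch. Both function names and the midpoint-subdivision scheme are assumptions for illustration; the repo may use a different subdivision routine (e.g. from a mesh library).

```python
import torch

def predict_vertex_colors(mlp, vertices, encode, batch_size=65536):
    """Query the trained color MLP once per vertex, in chunks.

    vertices: [N, 3] positions (assumed normalized to the range used in
    training). Returns [N, 3] RGB colors.
    """
    colors = []
    with torch.no_grad():
        for chunk in torch.split(vertices, batch_size):
            colors.append(mlp(encode(chunk)))
    return torch.cat(colors, dim=0)

def midpoint_subdivide(vertices, faces):
    """One round of midpoint subdivision: each triangle becomes four.

    This increases the number of vertices, and thus the sampling rate of
    the color function. vertices: [N, 3] float, faces: [F, 3] long.
    """
    edge_mid = {}
    new_verts = [v for v in vertices.tolist()]

    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in edge_mid:
            edge_mid[key] = len(new_verts)
            new_verts.append(((vertices[a] + vertices[b]) / 2).tolist())
        return edge_mid[key]

    new_faces = []
    for a, b, c in faces.tolist():
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]
    return torch.tensor(new_verts), torch.tensor(new_faces)
```

Applying `midpoint_subdivide` before `predict_vertex_colors` roughly quadruples the face count per round, so one or two rounds are usually enough to capture fine texture detail.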
This code is based on the Pix2Pix/CycleGAN framework (Github repo) and the NeuralTexGen project (Github repo). The positional encoding follows the implementation of nerf-pytorch (Github repo). Data from Matterport3D (Github repo) is used for demonstration.