Add the OverfitModel
Summary:
Introduces OverfitModel for NeRF-style training, i.e. overfitting to a single scene.
It is a special case of GenericModel that has been disentangled from it to ease usage.

## General modifications

1. Modularize the minimum of GenericModel needed to introduce OverfitModel.
2. Introduce OverfitModel and ensure through unit testing that it behaves like GenericModel.
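
As an illustration only, a parity check of that kind could look roughly like the sketch below. The real unit tests live in the repository; the actual forward signatures of GenericModel and OverfitModel differ from this toy interface, and the dict-of-tensors output and shared parameter names are assumptions.

```
import torch


def assert_models_agree(model_a, model_b, inputs, atol=1e-6):
    # Toy parity check: copy weights from model_a into model_b (assumes the two
    # models share parameter names), run both on the same inputs, and compare
    # every tensor in their (assumed) dict-of-tensors outputs.
    model_b.load_state_dict(model_a.state_dict(), strict=False)
    model_a.eval()
    model_b.eval()
    with torch.no_grad():
        out_a = model_a(**inputs)
        out_b = model_b(**inputs)
    for key, value in out_a.items():
        if torch.is_tensor(value):
            assert torch.allclose(value, out_b[key], atol=atol), key
```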

## Modularization

The following methods have been extracted from GenericModel to allow modularity with ManyViewModel:
- get_objective is now a call to weighted_sum_losses
- log_loss_weights
- prepare_inputs

The generic methods have been moved to a utils.py file.
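
For illustration, a minimal sketch of what such a weighted-sum helper could look like is shown below; the exact signature of weighted_sum_losses in utils.py may differ, and the argument names preds and loss_weights are assumptions.

```
from typing import Dict, Optional

import torch


def weighted_sum_losses(
    preds: Dict[str, torch.Tensor],
    loss_weights: Dict[str, float],
) -> Optional[torch.Tensor]:
    # Combine the individual loss terms present in `preds` into one scalar
    # objective, skipping zero-weighted terms; return None if nothing applies.
    losses_weighted = [
        preds[k] * float(w)
        for k, w in loss_weights.items()
        if k in preds and w != 0.0
    ]
    if not losses_weighted:
        return None
    loss = losses_weighted[0]
    for term in losses_weighted[1:]:
        loss = loss + term
    return loss
```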

Simplify the code to introduce OverfitModel.

Private methods like chunk_generator are now public so that they can be used by ManyViewModel.
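
As a rough sketch of the chunked-evaluation pattern this enables (the real chunk_generator operates on Implicitron ray bundles and takes more arguments; this simplified stand-alone version is only illustrative):

```
from typing import Iterator

import torch


def chunk_generator(rays: torch.Tensor, chunk_size: int) -> Iterator[torch.Tensor]:
    # Yield successive chunks of the flattened ray tensor so that rendering a
    # full image does not have to fit into memory in a single pass.
    for start in range(0, rays.shape[0], chunk_size):
        yield rays[start : start + chunk_size]


# Usage: process a large batch of rays chunk by chunk and reassemble the result.
all_rays = torch.randn(10_000, 6)  # e.g. concatenated origins and directions
chunks = [chunk.norm(dim=-1) for chunk in chunk_generator(all_rays, chunk_size=1024)]
result = torch.cat(chunks, dim=0)
```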

Reviewed By: shapovalov

Differential Revision: D43771992

fbshipit-source-id: 6102aeb21c7fdd56aa2ff9cd1dd23fd9fbf26315
EmGarr authored and facebook-github-bot committed Mar 24, 2023
1 parent 7d8b029 commit 813e941
Showing 16 changed files with 2,012 additions and 201 deletions.
39 changes: 38 additions & 1 deletion projects/implicitron_trainer/README.md
@@ -248,7 +248,7 @@ The main object for this trainer loop is `Experiment`. It has four top-level rep
* `data_source`: This is a `DataSourceBase` which defaults to `ImplicitronDataSource`.
It constructs the data sets and dataloaders.
* `model_factory`: This is a `ModelFactoryBase` which defaults to `ImplicitronModelFactory`.
It constructs the model, which is usually an instance of implicitron's main `GenericModel` class, and can load its weights from a checkpoint.
It constructs the model, which is usually an instance of `OverfitModel` (for NeRF-style training overfitting to a single scene) or `GenericModel` (which can generalize to multiple scenes via NeRFormer-style conditioning on other views of the scene), and can load its weights from a checkpoint.
* `optimizer_factory`: This is an `OptimizerFactoryBase` which defaults to `ImplicitronOptimizerFactory`.
It constructs the optimizer and can load its weights from a checkpoint.
* `training_loop`: This is a `TrainingLoopBase` which defaults to `ImplicitronTrainingLoop` and defines the main training loop.
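Conceptually, the way these four members fit together can be pictured with the simplified sketch below. This is not the real `Experiment` implementation (which wires configurable dataclasses together via the config system); every method name here is made up purely for illustration.

```
class Experiment:
    # Conceptual sketch only: four replaceable members, each built from config.
    def __init__(self, data_source, model_factory, optimizer_factory, training_loop):
        self.data_source = data_source              # builds datasets and dataloaders
        self.model_factory = model_factory          # builds OverfitModel or GenericModel
        self.optimizer_factory = optimizer_factory  # builds the optimizer / scheduler
        self.training_loop = training_loop          # runs the main training loop

    def run(self):
        # Hypothetical method names; they only illustrate the dependency order.
        train_loader, val_loader = self.data_source.get_dataloaders()
        model = self.model_factory.build()
        optimizer = self.optimizer_factory.build(model)
        self.training_loop.run(model, optimizer, train_loader, val_loader)
```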
@@ -292,6 +292,43 @@ model_GenericModel_args: GenericModel
╘== ReductionFeatureAggregator
```
Here is the class structure of OverfitModel:
```
model_OverfitModel_args: OverfitModel
└-- raysampler_*_args: RaySampler
    ╘== AdaptiveRaySampler
    ╘== NearFarRaySampler
└-- renderer_*_args: BaseRenderer
    ╘== MultiPassEmissionAbsorptionRenderer
    ╘== LSTMRenderer
    ╘== SignedDistanceFunctionRenderer
        └-- ray_tracer_args: RayTracing
        └-- ray_normal_coloring_network_args: RayNormalColoringNetwork
└-- implicit_function_*_args: ImplicitFunctionBase
    ╘== NeuralRadianceFieldImplicitFunction
    ╘== SRNImplicitFunction
        └-- raymarch_function_args: SRNRaymarchFunction
        └-- pixel_generator_args: SRNPixelGenerator
    ╘== SRNHyperNetImplicitFunction
        └-- hypernet_args: SRNRaymarchHyperNet
        └-- pixel_generator_args: SRNPixelGenerator
    ╘== IdrFeatureField
└-- coarse_implicit_function_*_args: ImplicitFunctionBase
    ╘== NeuralRadianceFieldImplicitFunction
    ╘== SRNImplicitFunction
        └-- raymarch_function_args: SRNRaymarchFunction
        └-- pixel_generator_args: SRNPixelGenerator
    ╘== SRNHyperNetImplicitFunction
        └-- hypernet_args: SRNRaymarchHyperNet
        └-- pixel_generator_args: SRNPixelGenerator
    ╘== IdrFeatureField
```
OverfitModel has been introduced as a simple class that disentangles NeRFs following the overfit pattern
from GenericModel.
Please look at the annotations of the respective classes or functions for the lists of hyperparameters.
`tests/experiment.yaml` shows every possible option if you have no user-defined classes.
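
As a hedged illustration of how this tree maps onto nested configuration keys (the `<member>_<ClassName>_args` key pattern and the values below follow the configs added in this commit; the snippet only builds and prints an override, it does not run training):

```
from omegaconf import OmegaConf

# Select OverfitModel and configure its ray sampler / implicit function by
# writing to the corresponding *_class_type and *_args subtrees.
overrides = OmegaConf.create(
    {
        "model_factory_ImplicitronModelFactory_args": {
            "model_class_type": "OverfitModel",
            "model_OverfitModel_args": {
                "raysampler_class_type": "AdaptiveRaySampler",
                "raysampler_AdaptiveRaySampler_args": {
                    "n_rays_per_image_sampled_from_mask": 1024,
                },
                "implicit_function_class_type": "NeuralRadianceFieldImplicitFunction",
            },
        },
    }
)
print(OmegaConf.to_yaml(overrides))
```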
79 changes: 79 additions & 0 deletions projects/implicitron_trainer/configs/overfit_base.yaml
@@ -0,0 +1,79 @@
defaults:
- default_config
- _self_
exp_dir: ./data/exps/overfit_base/
training_loop_ImplicitronTrainingLoop_args:
visdom_port: 8097
visualize_interval: 0
max_epochs: 1000
data_source_ImplicitronDataSource_args:
data_loader_map_provider_class_type: SequenceDataLoaderMapProvider
dataset_map_provider_class_type: JsonIndexDatasetMapProvider
data_loader_map_provider_SequenceDataLoaderMapProvider_args:
dataset_length_train: 1000
dataset_length_val: 1
num_workers: 8
dataset_map_provider_JsonIndexDatasetMapProvider_args:
dataset_root: ${oc.env:CO3D_DATASET_ROOT}
n_frames_per_sequence: -1
test_on_train: true
test_restrict_sequence_id: 0
dataset_JsonIndexDataset_args:
load_point_clouds: false
mask_depths: false
mask_images: false
model_factory_ImplicitronModelFactory_args:
model_class_type: "OverfitModel"
model_OverfitModel_args:
loss_weights:
loss_mask_bce: 1.0
loss_prev_stage_mask_bce: 1.0
loss_autodecoder_norm: 0.01
loss_rgb_mse: 1.0
loss_prev_stage_rgb_mse: 1.0
output_rasterized_mc: false
chunk_size_grid: 102400
render_image_height: 400
render_image_width: 400
share_implicit_function_across_passes: false
implicit_function_class_type: "NeuralRadianceFieldImplicitFunction"
implicit_function_NeuralRadianceFieldImplicitFunction_args:
n_harmonic_functions_xyz: 10
n_harmonic_functions_dir: 4
n_hidden_neurons_xyz: 256
n_hidden_neurons_dir: 128
n_layers_xyz: 8
append_xyz:
- 5
coarse_implicit_function_class_type: "NeuralRadianceFieldImplicitFunction"
coarse_implicit_function_NeuralRadianceFieldImplicitFunction_args:
n_harmonic_functions_xyz: 10
n_harmonic_functions_dir: 4
n_hidden_neurons_xyz: 256
n_hidden_neurons_dir: 128
n_layers_xyz: 8
append_xyz:
- 5
raysampler_AdaptiveRaySampler_args:
n_rays_per_image_sampled_from_mask: 1024
scene_extent: 8.0
n_pts_per_ray_training: 64
n_pts_per_ray_evaluation: 64
stratified_point_sampling_training: true
stratified_point_sampling_evaluation: false
renderer_MultiPassEmissionAbsorptionRenderer_args:
n_pts_per_ray_fine_training: 64
n_pts_per_ray_fine_evaluation: 64
append_coarse_samples_to_fine: true
density_noise_std_train: 1.0
optimizer_factory_ImplicitronOptimizerFactory_args:
breed: Adam
weight_decay: 0.0
lr_policy: MultiStepLR
multistep_lr_milestones: []
lr: 0.0005
gamma: 0.1
momentum: 0.9
betas:
- 0.9
- 0.999
42 changes: 42 additions & 0 deletions projects/implicitron_trainer/configs/overfit_singleseq_base.yaml
@@ -0,0 +1,42 @@
defaults:
- overfit_base
- _self_
data_source_ImplicitronDataSource_args:
data_loader_map_provider_SequenceDataLoaderMapProvider_args:
batch_size: 1
dataset_length_train: 1000
dataset_length_val: 1
num_workers: 8
dataset_map_provider_JsonIndexDatasetMapProvider_args:
assert_single_seq: true
n_frames_per_sequence: -1
test_restrict_sequence_id: 0
test_on_train: false
model_factory_ImplicitronModelFactory_args:
model_class_type: "OverfitModel"
model_OverfitModel_args:
render_image_height: 800
render_image_width: 800
log_vars:
- loss_rgb_psnr_fg
- loss_rgb_psnr
- loss_eikonal
- loss_prev_stage_rgb_psnr
- loss_mask_bce
- loss_prev_stage_mask_bce
- loss_rgb_mse
- loss_prev_stage_rgb_mse
- loss_depth_abs
- loss_depth_abs_fg
- loss_kl
- loss_mask_neg_iou
- objective
- epoch
- sec/it
optimizer_factory_ImplicitronOptimizerFactory_args:
lr: 0.0005
multistep_lr_milestones:
- 200
- 300
training_loop_ImplicitronTrainingLoop_args:
max_epochs: 400
@@ -0,0 +1,56 @@
defaults:
- overfit_singleseq_base
- _self_
exp_dir: "./data/overfit_nerf_blender_repro/${oc.env:BLENDER_SINGLESEQ_CLASS}"
data_source_ImplicitronDataSource_args:
data_loader_map_provider_SequenceDataLoaderMapProvider_args:
dataset_length_train: 100
dataset_map_provider_class_type: BlenderDatasetMapProvider
dataset_map_provider_BlenderDatasetMapProvider_args:
base_dir: ${oc.env:BLENDER_DATASET_ROOT}/${oc.env:BLENDER_SINGLESEQ_CLASS}
n_known_frames_for_test: null
object_name: ${oc.env:BLENDER_SINGLESEQ_CLASS}
path_manager_factory_class_type: PathManagerFactory
path_manager_factory_PathManagerFactory_args:
silence_logs: true

model_factory_ImplicitronModelFactory_args:
model_class_type: "OverfitModel"
model_OverfitModel_args:
mask_images: false
raysampler_class_type: AdaptiveRaySampler
raysampler_AdaptiveRaySampler_args:
n_pts_per_ray_training: 64
n_pts_per_ray_evaluation: 64
n_rays_per_image_sampled_from_mask: 4096
stratified_point_sampling_training: true
stratified_point_sampling_evaluation: false
scene_extent: 2.0
scene_center:
- 0.0
- 0.0
- 0.0
renderer_MultiPassEmissionAbsorptionRenderer_args:
density_noise_std_train: 0.0
n_pts_per_ray_fine_training: 128
n_pts_per_ray_fine_evaluation: 128
raymarcher_EmissionAbsorptionRaymarcher_args:
blend_output: false
loss_weights:
loss_rgb_mse: 1.0
loss_prev_stage_rgb_mse: 1.0
loss_mask_bce: 0.0
loss_prev_stage_mask_bce: 0.0
loss_autodecoder_norm: 0.00

optimizer_factory_ImplicitronOptimizerFactory_args:
exponential_lr_step_size: 3001
lr_policy: LinearExponential
linear_exponential_lr_milestone: 200

training_loop_ImplicitronTrainingLoop_args:
max_epochs: 6000
metric_print_interval: 10
store_checkpoints_purge: 3
test_when_finished: true
validation_interval: 100
2 changes: 1 addition & 1 deletion projects/implicitron_trainer/experiment.py
@@ -59,7 +59,7 @@
DataSourceBase,
ImplicitronDataSource,
)
from pytorch3d.implicitron.models.generic_model import ImplicitronModelBase
from pytorch3d.implicitron.models.base_model import ImplicitronModelBase

from pytorch3d.implicitron.models.renderer.multipass_ea import (
MultiPassEmissionAbsorptionRenderer,
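A minimal sketch of the relocated import shown above; the module path for OverfitModel is an assumption based on this commit and is not confirmed by the visible diff.

```
# ImplicitronModelBase now lives in base_model rather than generic_model.
from pytorch3d.implicitron.models.base_model import ImplicitronModelBase
from pytorch3d.implicitron.models.generic_model import GenericModel
from pytorch3d.implicitron.models.overfit_model import OverfitModel  # assumed path

# Both concrete models are expected to implement the ImplicitronModelBase interface.
assert issubclass(GenericModel, ImplicitronModelBase)
assert issubclass(OverfitModel, ImplicitronModelBase)
```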