RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument. #10

Open
ZoranDai opened this issue Jul 12, 2024 · 1 comment

@ZoranDai

Have you encountered a similar problem?
Input:
/mnt/data/Lightning-NeRF$ ns-train lightning_nerf --mixed-precision True \
    --pipeline.model.point-cloud-path /mnt/data/Lightning-NeRF/argo/nerf_data/2aea7bd1-432a-43c5-9445-651102487f65/pcd.ply \
    --pipeline.model.frontal-axis x --pipeline.model.init-density-value 10.0 \
    --pipeline.model.density-grid-base-res 256 --pipeline.model.density-log2-hashmap-size 24 \
    --pipeline.model.bg-density-grid-res 32 --pipeline.model.bg-density-log2-hashmap-size 18 \
    --pipeline.model.near-plane 0.01 --pipeline.model.far-plane 10.0 \
    --pipeline.model.vi-mlp-num-layers 3 --pipeline.model.vi-mlp-hidden-size 64 \
    --pipeline.model.vd-mlp-num-layers 2 --pipeline.model.vd-mlp-hidden-size 32 \
    --pipeline.model.color-grid-base-res 128 --pipeline.model.color-grid-max-res 2048 \
    --pipeline.model.color-grid-fpl 2 --pipeline.model.color-grid-num-levels 8 \
    --pipeline.model.bg-color-grid-base-res 32 --pipeline.model.bg-color-grid-max-res 128 \
    --pipeline.model.bg-color-log2-hashmap-size 16 --pipeline.model.alpha-thre 0.02 \
    --pipeline.model.occ-grid-base-res 256 --pipeline.model.occ-grid-num-levels 4 \
    --pipeline.model.occ-num-samples-per-ray 750 --pipeline.model.occ-grid-update-warmup-step 2 \
    --pipeline.model.pdf-num-samples-per-ray 8 --pipeline.model.pdf-samples-warmup-step 1000 \
    --pipeline.model.pdf-samples-fixed-step 3000 --pipeline.model.pdf-samples-fixed-ratio 0.5 \
    --pipeline.datamanager.train-num-images-to-sample-from 128 \
    --pipeline.datamanager.train-num-times-to-repeat-images 256 \
    --pipeline.model.appearance-embedding-dim 0 \
    argo-data --data /mnt/data/Lightning-NeRF/argo/nerf_data/2aea7bd1-432a-43c5-9445-651102487f65 \
    --orientation-method none

Output:
/home/bo1-dai/anaconda3/envs/lightning-nerf/lib/python3.8/site-packages/tyro/_fields.py:330: UserWarning: The field optimizer is annotated with type <class 'nerfstudio.engine.optimizers.AdamOptimizerConfig'>, but the default value RAdamOptimizerConfig:
_target: <class 'torch.optim.radam.RAdam'>
lr: 0.0006
eps: 1e-08
max_norm: None
weight_decay: 0.001 has type <class 'nerfstudio.engine.optimizers.RAdamOptimizerConfig'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
warnings.warn(
──────────────────────────────────────────────────────── Config ────────────────────────────────────────────────────────
TrainerConfig(
_target=<class 'nerfstudio.engine.trainer.Trainer'>,
output_dir=PosixPath('outputs'),
method_name='lightning_nerf',
experiment_name=None,
timestamp='2024-07-12_135036',
machine=MachineConfig(seed=42, num_gpus=1, num_machines=1, machine_rank=0, dist_url='auto'),
logging=LoggingConfig(
relative_log_dir=PosixPath('.'),
steps_per_log=10,
max_buffer_size=20,
local_writer=LocalWriterConfig(
_target=<class 'nerfstudio.utils.writer.LocalWriter'>,
enable=True,
stats_to_track=(
<EventName.ITER_TRAIN_TIME: 'Train Iter (time)'>,
<EventName.TRAIN_RAYS_PER_SEC: 'Train Rays / Sec'>,
<EventName.CURR_TEST_PSNR: 'Test PSNR'>,
<EventName.VIS_RAYS_PER_SEC: 'Vis Rays / Sec'>,
<EventName.TEST_RAYS_PER_SEC: 'Test Rays / Sec'>,
<EventName.ETA: 'ETA (time)'>
),
max_log_size=10
),
profiler='basic'
),
viewer=ViewerConfig(
relative_log_filename='viewer_log_filename.txt',
websocket_port=None,
websocket_port_default=7007,
websocket_host='0.0.0.0',
num_rays_per_chunk=32768,
max_num_display_images=512,
quit_on_train_completion=False,
image_format='jpeg',
jpeg_quality=90
),
pipeline=VanillaPipelineConfig(
_target=<class 'nerfstudio.pipelines.base_pipeline.VanillaPipeline'>,
datamanager=VanillaDataManagerConfig(
_target=<class 'nerfstudio.data.datamanagers.base_datamanager.VanillaDataManager'>,
data=None,
camera_optimizer=CameraOptimizerConfig(
_target=<class 'nerfstudio.cameras.camera_optimizers.CameraOptimizer'>,
mode='off',
position_noise_std=0.0,
orientation_noise_std=0.0,
optimizer=AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.0006,
eps=1e-15,
max_norm=None,
weight_decay=0
),
scheduler=ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=None,
warmup_steps=0,
max_steps=10000,
ramp='cosine'
),
param_group='camera_opt'
),
dataparser=ArgoDataParserConfig(
_target=<class 'nerfstudio.data.dataparsers.argo_dataparser.Argo'>,
data=PosixPath('/mnt/data/Lightning-NeRF/argo/nerf_data/2aea7bd1-432a-43c5-9445-651102487f65'),
scale_factor=1.0,
scene_scale=1.0,
orientation_method='none',
center_method='poses',
auto_scale_poses=True,
load_depth=False,
depth_unit_scale_factor=1.0
),
train_num_rays_per_batch=65536,
train_num_images_to_sample_from=128,
train_num_times_to_repeat_images=256,
eval_num_rays_per_batch=2048,
eval_num_images_to_sample_from=-1,
eval_num_times_to_repeat_images=-1,
eval_image_indices=(0,),
camera_res_scale_factor=1.0,
patch_size=1
),
model=LightningNeRFModelConfig(
_target=<class 'lightning_nerf.model.LightningNeRFModel'>,
enable_collider=True,
collider_params={'near_plane': 2.0, 'far_plane': 6.0},
loss_coefficients={'rgb_loss': 1.0, 'res_rgb_loss': 0.01},
eval_num_rays_per_chunk=131072,
near_plane=0.01,
far_plane=10.0,
vi_mlp_num_layers=3,
vi_mlp_hidden_size=64,
vd_mlp_num_layers=2,
vd_mlp_hidden_size=32,
appearance_embedding_dim=0,
use_average_appearance_embedding=True,
background_color='random',
alpha_thre=0.02,
cone_angle=0.004,
point_cloud_path=PosixPath('/mnt/data/Lightning-NeRF/argo/nerf_data/2aea7bd1-432a-43c5-9445-651102487f65/pcd.ply'),
frontal_axis='x',
init_density_value=10.0,
density_grid_base_res=256,
density_log2_hashmap_size=24,
color_grid_base_res=128,
color_grid_max_res=2048,
color_grid_fpl=2,
color_log2_hashmap_size=19,
color_grid_num_levels=8,
bg_density_grid_res=32,
bg_density_log2_hashmap_size=18,
bg_color_grid_base_res=32,
bg_color_grid_max_res=128,
bg_color_log2_hashmap_size=16,
occ_grid_base_res=256,
occ_grid_num_levels=4,
occ_grid_update_warmup_step=2,
occ_num_samples_per_ray=750,
pdf_num_samples_per_ray=8,
pdf_samples_warmup_step=1000,
pdf_samples_fixed_step=3000,
pdf_samples_fixed_ratio=0.5,
rgb_padding=None
)
),
optimizers={
'den_encoder': {
'optimizer': RAdamOptimizerConfig(
_target=<class 'torch.optim.radam.RAdam'>,
lr=1.0,
eps=1e-08,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=0.01,
warmup_steps=10,
max_steps=10000,
ramp='linear'
)
},
'col_encoder': {
'optimizer': RAdamOptimizerConfig(
_target=<class 'torch.optim.radam.RAdam'>,
lr=1.0,
eps=1e-08,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=0.01,
warmup_steps=10,
max_steps=10000,
ramp='linear'
)
},
'network': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.01,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=0.0001,
warmup_steps=0,
max_steps=30000,
ramp='cosine'
)
}
},
vis='viewer',
data=None,
relative_model_dir=PosixPath('nerfstudio_models'),
steps_per_save=1000,
steps_per_eval_batch=500,
steps_per_eval_image=30000,
steps_per_eval_all_images=30000,
max_num_iterations=30001,
mixed_precision=True,
use_grad_scaler=False,
save_only_latest_checkpoint=True,
load_dir=None,
load_step=None,
load_config=None,
log_gradients=False
)
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[13:50:36] Saving config to: outputs/unnamed/lightning_nerf/2024-07-12_135036/config.yml experiment_config.py:129
[13:50:36] Saving checkpoints to: outputs/unnamed/lightning_nerf/2024-07-12_135036/nerfstudio_models trainer.py:139
[13:50:36] Argoverse 2 dataset loaded with #images 289, #masks 289, #depth 0. argo_dataparser.py:188
[13:50:37] Argoverse 2 dataset loaded with #images 31, #masks 31, #depth 0. argo_dataparser.py:188
Setting up training dataset...
Caching 128 out of 289 images, resampling every 256 iters.
Setting up evaluation dataset...
Caching all 31 images.
[13:50:57] Scene box: model.py:368
[-1.0, -0.30000001192092896, -0.10000000149011612]
[1.5, 0.30000001192092896, 0.4000000059604645].
Density grid size (m): tensor([0.6793, 0.1630, 0.1359]). model.py:372
Max-Res color grid size (m): tensor([0.0849, 0.0204, 0.0170]). model.py:375
[13:50:57] vi mlp input: 16, vd mlp input: 40 field.py:156
Render step size: 0.003492213567097982. model.py:314
[13:50:59] Sampler: LightningNeRFSampler( model.py:329
(occupancy_grid): OccGridEstimator()
).
Collider: 0.01, 10.0. model.py:406
[13:51:00] Load vertices: torch.Size([517136, 3]) model.py:192
tensor(134.6774) tensor(138.0226) model.py:193
tensor(73.5035) tensor(76.4893) model.py:194
[13:51:01] tensor(-0.3844) tensor(-0.1551) model.py:195
[13:51:03] #Point cloud in FG bbox: 0, ratio: 0.0. model.py:431
Augmenting bg points with x as frontal axis. model.py:225
[13:51:06] augment bg points: torch.Size([262144, 3]) model.py:246
[13:51:07] occ grid: model.py:247
tensor([[-1.0000, -0.3000, -0.1000, 1.5000, 0.3000, 0.4000],
[-2.2500, -0.6000, -0.3500, 2.7500, 0.6000, 0.6500],
[-4.7500, -1.2000, -0.8500, 5.2500, 1.2000, 1.1500],
[-9.7500, -2.4000, -1.8500, 10.2500, 2.4000, 2.1500]])
[13:51:13] density encoder: density_encoding.params, torch.Size([16777216]) model.py:463
density encoder: bg_density_encoding.params, torch.Size([262144]) model.py:463
color encoder: color_encoding.params, torch.Size([8388608]) model.py:466
color encoder: bg_color_encoding.params, torch.Size([1048576]) model.py:466
network: vi_mlp.params, torch.Size([6144]) model.py:469
network: direction_encoding.params, torch.Size([0]) model.py:469
network: vd_mlp.params, torch.Size([2048]) model.py:469
╭─────────────────────────────────────────── Viewer ───────────────────────────────────────────╮
│ HTTP │ https://viewer.nerf.studio/versions/23-05-01-0/?websocket_url=ws://localhost:7007 │
╰──────────────────────────────────────────────────────────────────────────────────────────────╯
[NOTE] Not running eval iterations since only viewer is enabled.
Use --vis {wandb, tensorboard, viewer+wandb, viewer+tensorboard} to run with eval.
No checkpoints to load, training from scratch
FG vertices: torch.Size([0, 3]) model.py:257
Traceback (most recent call last):
File "/mnt/data/anaconda3/envs/lightning-nerf/bin/ns-train", line 8, in
sys.exit(entrypoint())
File "/home/bo1-dai/anaconda3/envs/lightning-nerf/lib/python3.8/site-packages/scripts/train.py", line 247, in entrypoint
main(
File "/home/bo1-dai/anaconda3/envs/lightning-nerf/lib/python3.8/site-packages/scripts/train.py", line 233, in main
launch(
File "/home/bo1-dai/anaconda3/envs/lightning-nerf/lib/python3.8/site-packages/scripts/train.py", line 172, in launch
main_func(local_rank=0, world_size=world_size, config=config)
File "/home/bo1-dai/anaconda3/envs/lightning-nerf/lib/python3.8/site-packages/scripts/train.py", line 86, in train_loop
trainer.setup()
File "/home/bo1-dai/anaconda3/envs/lightning-nerf/lib/python3.8/site-packages/nerfstudio/engine/trainer.py", line 177, in setup
self.callbacks = self.pipeline.get_training_callbacks(
File "/home/bo1-dai/anaconda3/envs/lightning-nerf/lib/python3.8/site-packages/nerfstudio/pipelines/base_pipeline.py", line 397, in get_training_callbacks
model_callbacks = self.model.get_training_callbacks(training_callback_attributes)
File "/mnt/data/Lightning-NeRF/lightning_nerf/model.py", line 508, in get_training_callbacks
self._pretrain_density_grid()
File "/mnt/data/Lightning-NeRF/lightning_nerf/model.py", line 258, in _pretrain_density_grid
CONSOLE.log(vertices_fg[:,0].min(), vertices_fg[:,0].max())
RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument.
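
For reference, the error itself is generic PyTorch behavior: the log line "FG vertices: torch.Size([0, 3])" shows that the foreground vertex tensor is empty, and calling .min() on an empty tensor without a dim argument always raises this RuntimeError. A minimal sketch, independent of Lightning-NeRF (the variable name is only illustrative):

    import torch

    # An empty selection has numel() == 0, so min() without `dim` raises the
    # same RuntimeError reported in the traceback above.
    vertices_fg = torch.empty(0, 3)   # mirrors "FG vertices: torch.Size([0, 3])"
    print(vertices_fg[:, 0].numel())  # 0
    vertices_fg[:, 0].min()           # RuntimeError: min(): Expected reduction dim ...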

@XJay18
Collaborator

XJay18 commented Jul 16, 2024

Hi, it seems that the loaded point cloud is not the provided one. From the logs above, we can see:

...
point_cloud_path=PosixPath('/mnt/data/Lightning-NeRF/argo/nerf_data/2aea7bd1-432a-43c5-9445-651102487f65/pcd.ply'),
...
[13:51:00] Load vertices: torch.Size([517136, 3]) model.py:192
tensor(134.6774) tensor(138.0226) model.py:193
tensor(73.5035) tensor(76.4893) model.py:194
[13:51:01] tensor(-0.3844) tensor(-0.1551) model.py:195
[13:51:03] #Point cloud in FG bbox: 0, ratio: 0.0. model.py:431
...

This means that no points fall inside the foreground bounding box after normalization. If you use the provided point cloud file instead, i.e., pcd_clr_0.05.ply (#points should be 3,845,072), you should get the correct output.
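
As a quick sanity check before training (a minimal sketch, not part of the repository; it assumes open3d is installed and uses the file name and point count from the comment above), you can load the .ply and confirm its size and coordinate range:

    import numpy as np
    import open3d as o3d  # any PLY reader works; open3d is just an assumption here

    # Load the point cloud that will be passed via --pipeline.model.point-cloud-path
    # and report its size and extent. The path below is illustrative; adjust it to
    # wherever the provided file actually lives.
    pcd = o3d.io.read_point_cloud(
        "/mnt/data/Lightning-NeRF/argo/nerf_data/2aea7bd1-432a-43c5-9445-651102487f65/pcd_clr_0.05.ply"
    )
    pts = np.asarray(pcd.points)
    print("num points:", pts.shape[0])   # expected around 3,845,072 for the provided file
    print("min xyz:", pts.min(axis=0))
    print("max xyz:", pts.max(axis=0))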
