
How to reproduce the results in the paper? #2

Closed

qpc001 opened this issue Jun 11, 2023 · 5 comments

Comments


qpc001 commented Jun 11, 2023

I am trying to use this method on CARLA HD maps (Town01), but this is the result I got:

[screenshot: reconstructed mesh with visible cracks]

The result contains lots of cracks.

I used the script recons_waymo.py to generate it.

import nksr
import torch

from pycg import vis, exp
from pathlib import Path
import numpy as np
from common import load_waymo_example, warning_on_low_memory


if __name__ == '__main__':
    warning_on_low_memory(20000.0)
    # Load the example point cloud and the per-point sensor positions.
    xyz_np, sensor_np = load_waymo_example()

    device = torch.device("cuda:0")
    reconstructor = nksr.Reconstructor(device)
    reconstructor.chunk_tmp_device = torch.device("cpu")

    input_xyz = torch.from_numpy(xyz_np).float().to(device)
    input_sensor = torch.from_numpy(sensor_np).float().to(device)

    field = reconstructor.reconstruct(
        input_xyz, sensor=input_sensor, detail_level=None,
        # Minor configs for better efficiency (not necessary)
        voxel_size=0.1,
        approx_kernel_grad=True, solver_tol=1e-4, fused_mode=True,
        # Chunked reconstruction (if OOM)
        # chunk_size=51.2,
        # Estimate normals from the sensor directions during preprocessing.
        preprocess_fn=nksr.get_estimate_normal_preprocess_fn(64, 200.0)
    )
    mesh = field.extract_dual_mesh(mise_iter=1)
    mesh = vis.mesh(mesh.v, mesh.f)

    vis.show_3d([mesh], [vis.pointcloud(xyz_np)])

@heiwang1997 (Collaborator)

Hi, thanks for your interest in our paper! I guess the main reason is wrongly estimated normals, which originate from wrong sensor positions. Could you try visualizing your sensor positions?

Alternatively, you can download our official CARLA dataset here and see if the problem persists.
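For the visualization, a quick check with the same pycg helpers the example script already uses might look like this (a sketch, assuming xyz_np and sensor_np are the arrays you pass to the reconstructor):

from pycg import vis

# Overlay the input points and the per-point sensor origins. For a driving
# sequence, the sensor origins should trace the vehicle trajectory rather
# than collapse to a single point at the world origin.
vis.show_3d([vis.pointcloud(xyz_np), vis.pointcloud(sensor_np)])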


qpc001 commented Jun 11, 2023

> Hi, thanks for your interest in our paper! I guess the main reason is wrongly estimated normals, which originate from wrong sensor positions. Could you try visualizing your sensor positions?
>
> Alternatively, you can download our official CARLA dataset here and see if the problem persists.

I used the sensor position [0, 0, 0] for recons_waymo.py.

I also tried the script recons_simple.py, but got a similar result. (The normals were calculated by CloudCompare.)

@heiwang1997 (Collaborator)

Ah, I see the reason :)

Sensor position refers to the position of the sensor that captured each point, and it can be different for each point. Usually, you can approximate these positions with your vehicle's positions, instead of using [0, 0, 0].
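For example, if you dump your CARLA scans frame by frame, you could build the per-point sensor array roughly like this (a minimal sketch; `frames` and the variables inside the loop are hypothetical placeholders for your own data):

import numpy as np

xyz_list, sensor_list = [], []
for points_world, sensor_origin_world in frames:
    # points_world: (N, 3) LiDAR points of this frame in world coordinates.
    # sensor_origin_world: (3,) world position of the LiDAR for this frame,
    # e.g. approximated by the vehicle pose.
    xyz_list.append(points_world)
    # Repeat the frame's sensor origin once per point in the scan.
    sensor_list.append(np.tile(sensor_origin_world, (len(points_world), 1)))

xyz_np = np.concatenate(xyz_list, axis=0)
sensor_np = np.concatenate(sensor_list, axis=0)  # same shape as xyz_np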

The normals computed by CloudCompare suffer from a similar problem: the normal orientations are not consistent, i.e., some normals on the road point up while others point down.
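For reference, if you do want to use externally computed normals, one standard fix (a minimal NumPy sketch, not part of this repo) is to flip every normal so it points toward the sensor that observed the point:

import numpy as np

def orient_normals_toward_sensor(xyz, normals, sensor_pos):
    # Vector from each point to the sensor that captured it; all arrays (N, 3).
    to_sensor = sensor_pos - xyz
    # Flip normals whose dot product with the sensor direction is negative,
    # so e.g. all road normals point up toward a roof-mounted sensor.
    flip = np.einsum('ij,ij->i', normals, to_sensor) < 0
    normals = normals.copy()
    normals[flip] *= -1.0
    return normals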

Hence, two solutions:

  1. When you generate your CARLA dataset, record the sensor position for each point (as in the first sketch above); don't use [0,0,0] :)
  2. Use our provided dataset; we did all of that for you.

Best.

@qpc001
Copy link
Author

qpc001 commented Jun 11, 2023

Thanks a lot.

qpc001 closed this as completed Jun 11, 2023
@rockywind

How can I save the mesh as a colored image like the one below?

[screenshot: colored mesh rendering]

I saved the mesh and opened it in MeshLab, and it looks like this:

[screenshot: uncolored mesh in MeshLab]
