
Eval on a custom point cloud #3

Closed
KevinCain opened this issue Mar 19, 2022 · 6 comments

@KevinCain

I'm attempting to process one of the provided point clouds into a mesh via dgnn as a sanity check before trying my own numpy output. The features seem fine (see log below); however, run.py gives an error (also shown below). My steps follow:

  • Set the data path in configs/custom.yaml, e.g.: data: /home/kevin/dgnn/data/sample
  • Extract features with the precompiled feat build provided:
    ./utils/feat -w /home/kevin/dgnn/data/sample/ -i anchor_0 -s npz
  • Attempt to create a mesh:
    python run.py -i -c configs/custom.yaml

The error returned is:
Traceback (most recent call last):
File "/home/kevin/anaconda3/envs/dgnn/lib/python3.9/site-packages/munch/__init__.py", line 103, in __getattr__
return object.__getattribute__(self, k)
AttributeError: 'Munch' object has no attribute 'metrics'
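
For context, this is just how munch surfaces a missing config key: attribute access on a Munch falls back to dictionary lookup, so any key absent from the loaded YAML raises AttributeError. A minimal sketch (the 'metrics' key name comes from the traceback; the rest is illustrative, not dgnn's actual loading code):

from munch import Munch

# Build a config the way run.py presumably does from custom.yaml
cfg = Munch.fromDict({"data": "/home/kevin/dgnn/data/sample"})
print(cfg.data)     # fine: the key exists in the YAML
print(cfg.metrics)  # AttributeError: 'Munch' object has no attribute 'metrics'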

The feature extraction log follows:

-----FEATURE EXTRACTION-----

Working dir set to:
-/home/kevin/dgnn/data/sample/

Read NPZ points...
-from anchor_0
-3110 points read
-with sensor
-with normal

Export points...
-to anchor_0.ply
-3110 points

Delaunay triangulation...
-3110 input points
-3110 output points
-39342 finite facets
-19520(+604) cells
-done in 0s

d parameter of Labatu set to -1

Consider turning on scaling with --sn, if your learning data is not yet scaled to a unit cube!

Start learning ray tracing...
-Trace 1 ray(s) to every point
-Ray tracing done in 0s

Export graph...
-to gt/anchor_0_X.npz
-Exported 20124 nodes and 80496 edges
-in 1s

Export 3DT...
-to gt/anchor_0_3dt.npz
-3110 vertices
-39342 facets
-19520 tetrahedra

Export interface...
-to anchor_0_initial.ply
-Before pruning (points, facets): (3110, 39342)
-After pruning (points, facets): (304, 604)
-done in 0s

-----FEATURE EXTRACTION FINISHED in 1s -----
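
A side note on the scaling warning in the log above: --sn presumably rescales the input to a unit cube. A minimal numpy sketch of that kind of normalization, assuming the points sit under a 'points' key in the npz (the exact transform feat applies is an assumption):

import numpy as np

points = np.load("anchor_0.npz")["points"]   # key name is an assumption
lo = points.min(axis=0)
scale = (points.max(axis=0) - lo).max()      # uniform scale preserves the shape's aspect ratio
points_unit = (points - lo) / scale          # now fits inside the unit cube
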
@KevinCain
Author

A quick note: per dgnn's environment requirements, I have munch 2.5.0 installed in /home/kevin/anaconda3/envs/dgnn/lib/python3.9/site-packages, and Python resolves the munch module to /home/kevin/anaconda3/envs/dgnn/lib/python3.9/site-packages/munch/__init__.py.
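
A quick way to double-check which munch the environment actually resolves (run inside the activated dgnn env):

import munch
print(munch.__version__)  # expect 2.5.0
print(munch.__file__)     # expect .../envs/dgnn/.../site-packages/munch/__init__.py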

@KevinCain
Author

KevinCain commented Mar 19, 2022

I converted the supplied .ply data to .npz rather than using the .npz files above, in case the normal and viewpoint data were not present:

cd ~/dgnn
python ./processing/reconbench/ply2npz.py --user_dir=/home/kevin/dgnn/data

This outputs files as expected, and the feature extraction from the output looks sane, similar to what I logged above.
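
For anyone adapting their own data, the conversion amounts to something like the sketch below. The array names dgnn expects (e.g. 'points', 'normals', plus per-point sensor positions) are assumptions here; processing/reconbench/ply2npz.py has the real keys:

import numpy as np
from plyfile import PlyData

# Assumes the PLY carries x/y/z and nx/ny/nz vertex properties
v = PlyData.read("anchor_0.ply")["vertex"]
np.savez(
    "anchor_0.npz",
    points=np.column_stack([v["x"], v["y"], v["z"]]),
    normals=np.column_stack([v["nx"], v["ny"], v["nz"]]),
)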

However, I get the same error noted above when calling run.py for evaluation.

@raphaelsulzer
Owner

Hi, the custom.yaml file was not really up to date.

I think you are simply missing this line in it.

@KevinCain
Author

KevinCain commented Mar 22, 2022

Thanks @raphaelsulzer, I did a hard reset via git and pulled the latest updates, which included the line you noted. I'm able to proceed, but I quickly found that I do not understand how to set the inference list for a single point cloud.

As above in this thread, I converted a single point cloud via ply2npz and placed it in the '0' folder expected in the ground-truth data, similar to the five-model group (0..4) from 'reconbench'. I see run.py still expects a batch of five point models.

Consider this invocation: python run.py -i -c configs/custom.yaml, where the .yaml inference settings include:

inference:
  dataset: sample
  classes: anchor

Based on the code, it looks like the above expects the following directory structure:

...data/
├─ sample/
│  ├─ anchor_0.ply
│  ├─ anchor_0.npz
│  ├─ anchor_0_initial.ply
│  ├─ gt/
│  │  ├─ 0/
│  │  │  ├─ anchor_0_labels.npz
│  │  │  ├─ anchor_0_* ...

Do I have the .yaml or folder structure wrong?

How can I instruct dgnn to process the single file group above? For training and validation, the analogue seems to be scan_confs: x.

@raphaelsulzer
Owner

You can structure your dataset any way you want, as long as this dictionary is correctly filled. I recommend using a Python debugger to check that all the paths are correct.

Also, your example should probably look like this:

inference:
  dataset: sample
  classes: ["anchor"]
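
As a lighter-weight alternative to stepping through a debugger, one could print-check the paths in the dataset dictionary. The keys below mirror the path/filename/gtfile entries run.py logs later in this thread; the exact files to check are an assumption:

import os

entry = {
    "path": "/home/kevin/dgnn/data/sample",
    "filename": "anchor_0",
    "gtfile": "gt/0/anchor_0",
}
for rel in (entry["filename"] + ".npz", entry["gtfile"] + "_labels.npz"):
    p = os.path.join(entry["path"], rel)
    print(p, "->", "OK" if os.path.isfile(p) else "MISSING")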

@KevinCain
Author

Thanks once again for a fast and kind response. I had mistakenly switched label data on; once I ran with has_label: 0, I was able to process a single input file as expected. I added a block in dataset.py to handle inference from my dataset 'sample'; the resulting run.py output is shown below:

READ CONFIG FROM  /home/kevin/dgnn/configs/custom.yaml
SAVE CONFIG TO  /home/kevin/dgnn/data/models/kf96/config.yaml
dataset: sample
class:  ['anchor']
Point clouds per class:  0
path:  /home/kevin/dgnn/data/sample
filename:  anchor_0
scan_conf:  0
gtfile:  gt/0/anchor_0
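
For completeness, a hypothetical sketch of the kind of dataset.py block described above. The dictionary keys are taken from the log; the function name and return shape are assumptions about dgnn's internals:

import os

def get_sample_inference(data_root):
    # One entry per point cloud; values mirror the log above
    d = {}
    d["path"] = os.path.join(data_root, "sample")
    d["filename"] = "anchor_0"
    d["scan_conf"] = 0
    d["gtfile"] = "gt/0/anchor_0"
    return [d]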

Thanks again @raphaelsulzer for your help!
