
Obtaining per-point predictions for original point cloud #159

Open · mirceta opened this issue Oct 16, 2019 · 10 comments

mirceta commented Oct 16, 2019

Hello,

I've used your method with a custom dataset, posed basically as a binary classification problem. I am able to obtain the confusion matrix, but not an actual array of predictions for each point in the original pointcloud. Is this possible?

loicland (Owner) commented:

Hi,

You can access the confusion matrix after evaluation with

print(confusion_matrix.confusion_matrix)
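
For reference, a minimal sketch of what can still be computed from the confusion matrix alone (an assumption-based example, taking the matrix to be a square numpy array indexed as [true class, predicted class]); per-point information cannot be recovered from it:

import numpy as np

# Assumption: cm is a square confusion matrix indexed as cm[true_class, predicted_class].
cm = np.array([[50, 10],
               [ 5, 35]])

overall_acc = np.trace(cm) / cm.sum()                 # fraction of points classified correctly
per_class_recall = np.diag(cm) / cm.sum(axis=1)       # recall for each class
per_class_iou = np.diag(cm) / (cm.sum(axis=1) + cm.sum(axis=0) - np.diag(cm))  # IoU per class

print(overall_acc, per_class_recall, per_class_iou)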

mirceta (Author) commented Oct 16, 2019

Hi,

Thanks for the quick response. Yes, that is what I'm currently using. My question is whether it is possible to obtain the prediction for every individual point, rather than the accumulated confusion matrix. I'm asking because I want to visualize the results and see where the errors were made, which I can't do from the confusion matrix alone. Is this possible?

loicland (Owner) commented Oct 17, 2019

You can use the visualize function, as described in the README.

mirceta (Author) commented Oct 18, 2019

I'll check it out; it will certainly be better than only a confusion matrix. Just to clarify: is there then no way to actually export the per-point predictions? Visualizing is nice, but I can't really compare it against the results of another algorithm to see what each method does better. For that, I'd need to import the predictions into my own visualization tool.

loicland (Owner) commented:

Hi,

You can access the output of the network through o_cpu, but it gives you the result on the subsampled point cloud.

If you want the result on the original point cloud, you can upsample the prediction with the function interpolate_labels_batch. This is done for the Semantic3D dataset in the script write_Semantic3d.py. You might have to adapt the folder hierarchy to your own dataset.
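
For illustration only, here is a generic nearest-neighbor upsampling sketch, not the repository's interpolate_labels_batch implementation; the names xyz_sub, pred_sub, and xyz_full are placeholders:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def upsample_predictions(xyz_sub, pred_sub, xyz_full):
    # For each point of the original cloud, copy the predicted label
    # of its nearest neighbor in the subsampled cloud.
    nn = NearestNeighbors(n_neighbors=1).fit(xyz_sub)
    _, idx = nn.kneighbors(xyz_full)      # idx has shape (n_full_points, 1)
    return pred_sub[idx.ravel()]          # one label per original point

The resulting array can then be exported next to the original coordinates and compared against another method in an external visualization tool.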

mirceta (Author) commented Oct 18, 2019

Thank you, this is exactly what I needed. Will let you know how it goes.

loicland (Owner) commented:

Of note, I strongly recommend against writing the output to a CSV file, as it is very slow compared to an h5 file, for example. This is only done to comply with Semantic3D's submission requirements.
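
As a rough sketch of the faster alternative (assuming h5py is installed; xyz_full and pred_full are placeholder arrays, not variables from the repository):

import h5py
import numpy as np

# Placeholder data: original coordinates and one predicted label per point.
xyz_full = np.random.rand(1000000, 3).astype(np.float32)
pred_full = np.random.randint(0, 2, 1000000).astype(np.uint8)

# Write both arrays to a single compressed HDF5 file.
with h5py.File('predictions.h5', 'w') as f:
    f.create_dataset('xyz', data=xyz_full, compression='gzip')
    f.create_dataset('pred', data=pred_full, compression='gzip')

# Read them back later for comparison in another tool:
# with h5py.File('predictions.h5', 'r') as f:
#     pred = f['pred'][...]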

mirceta (Author) commented Oct 18, 2019

Noted.

mirceta (Author) commented Oct 26, 2019

Hi, I think I've gotten a bit stuck. There are two places with an o_cpu variable: in the training loop and in eval_final. In both cases o_cpu has only 64 elements, which seems very strange; I thought it would hold the predictions for the downsampled point cloud. What am I missing?

mirceta (Author) commented Oct 26, 2019

Actually, from the way the confusion matrix is computed, I see that these are all of the points, but partitioned into 64 batches, and every point in a batch is assigned the same label. Is this an error? I'm a bit confused.
