About visualization on val #14
Hi @ymlzOvO, you can save the predictions offline for visualization. Moreover, we will provide a script to visualize the results in the future (we have been busy recently).
@Xiangxu-0103 Thank you! I've finished saving the results as .label files, just like the ground truth. Also, I use
Hello! I've also been working on visualizing results recently. Could you share your code for saving the results? Thank you!
Hi, I just took the inference setup from #13; the code you need is there. For a static display, I use the template given in mmdet3d, but I don't know how to make it dynamic...

```python
import numpy as np
from mmdet3d.visualization import Det3DLocalVisualizer

# Load a raw point cloud (x, y, z) from the mmdet3d demo data.
points = np.fromfile('demo/data/sunrgbd/000017.bin', dtype=np.float32)
points = points.reshape(-1, 3)

visualizer = Det3DLocalVisualizer()
# Use a random color per point as a stand-in segmentation mask.
mask = np.random.rand(points.shape[0], 3)
points_with_mask = np.concatenate((points, mask), axis=-1)

# Draw the points and overlay the per-point colors as a segmentation mask.
visualizer.set_points(points, pcd_mode=2, vis_mode='add')
visualizer.draw_seg_mask(points_with_mask)
visualizer.show()
```
Thank you for your response. However, I am still not quite clear on how to save the results from a network as a .label file. Could you provide more detailed information? Thank you!
In the config file
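If a concrete example helps: below is a minimal sketch (not the repo's official script) of writing one scan's predictions in the SemanticKITTI .label layout. The `save_label_file` helper and the idea of pulling the per-point class ids from each result's `pred_pts_seg.pts_semantic_mask` (converted to numpy) are assumptions about the usual mmdet3d result structure, not FRNet's documented workflow.

```python
import os
import numpy as np

def save_label_file(pred_labels, out_dir, scan_name):
    """Write one scan's per-point class ids as `<out_dir>/<scan_name>.label`."""
    os.makedirs(out_dir, exist_ok=True)
    # SemanticKITTI .label files store one uint32 per point: the lower 16 bits
    # hold the semantic class and the upper 16 bits the instance id (0 here).
    pred_labels.astype(np.uint32).tofile(
        os.path.join(out_dir, f'{scan_name}.label'))
```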
Thanks! I will try it.
But when I try it, I get an error at this line; how can I solve it? Thanks!

```python
# map_inv = self.dataset_meta['learning_map_inv']  # inv mapping
```
use
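For the inverse mapping itself, here is a sketch that reads `learning_map_inv` from the `semantic-kitti.yaml` shipped with the semantic-kitti-api and applies it as a vectorized lookup; the file path and the placeholder prediction array are assumptions, not the repo's own code.

```python
import numpy as np
import yaml

# learning_map_inv maps the training ids (0-19) back to raw SemanticKITTI ids.
with open('config/semantic-kitti.yaml', 'r') as f:  # path inside semantic-kitti-api
    learning_map_inv = yaml.safe_load(f)['learning_map_inv']

# Build a flat lookup table so the remapping is a single indexing operation.
lut = np.zeros(max(learning_map_inv) + 1, dtype=np.uint32)
for train_id, raw_id in learning_map_inv.items():
    lut[train_id] = raw_id

pred_train_ids = np.zeros(100, dtype=np.uint32)  # placeholder for one scan's predictions
pred_raw_ids = lut[pred_train_ids]               # per-point raw SemanticKITTI label ids
pred_raw_ids.tofile('000000.label')
```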
Thank you for sharing, I have successfully saved the results. Could I get your personal contact information? I am also currently researching FRNet; could we communicate and discuss together?
Hello, thank you very much for your contribution to the visualization of val. I have now saved the .label files generated by the test. However, when I use the semantic-kitti-api (https://github.com/PRBonn/semantic-kitti-api) for visualization with

```
python ./visualize.py --sequence 11 --dataset /data/semantickitti_frnet/dataset --predictions /data/semantickitti_frnet/dataset
```

it reports that the number of labels does not match the number of points. Have you encountered this issue? If so, how did you resolve it?
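On the count mismatch, it may help to check directly whether each saved .label file has exactly one entry per point in the corresponding .bin scan, since semantic-kitti-api refuses to render otherwise. A quick check, with placeholder paths:

```python
import numpy as np

# SemanticKITTI .bin scans store x, y, z, intensity as float32 (4 values per point).
points = np.fromfile('dataset/sequences/11/velodyne/000000.bin',
                     dtype=np.float32).reshape(-1, 4)
labels = np.fromfile('predictions/sequences/11/predictions/000000.label',
                     dtype=np.uint32)
# semantic-kitti-api expects exactly one uint32 label per point.
print(points.shape[0], labels.shape[0])
```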
Hi, I'm trying to reproduce your code, and it runs successfully with:

```
python test.py "configs/frnet/frnet-semantickitti_seg.py" "pretrained/frnet-semantickitti_seg.pth"
```

but when I try to visualize the results with the following command:

```
python test.py "configs/frnet/frnet-semantickitti_seg.py" "pretrained/frnet-semantickitti_seg.pth" --show --show-dir "show_dirs" --task "lidar_seg"
```

it fails with:

```
AssertionError: 'data_sample' must contain 'img_path' or 'lidar_path'
```

So how do you visualize the results like those shown on the project page? Thank you! I am not familiar with mmcv and just tried the command from its documentation.
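For reference, one offline route that sidesteps `test.py --show` is to load one scan plus its saved .label prediction and reuse the `Det3DLocalVisualizer` snippet from earlier in this thread. This is only a sketch: the paths are placeholders and the random per-class palette is made up; any mapping from label id to an RGB triple in [0, 1] would do.

```python
import numpy as np
from mmdet3d.visualization import Det3DLocalVisualizer

# Load one SemanticKITTI scan (x, y, z, intensity) and its saved prediction.
points = np.fromfile('sequences/08/velodyne/000000.bin',
                     dtype=np.float32).reshape(-1, 4)[:, :3]
labels = np.fromfile('show_dirs/000000.label', dtype=np.uint32)  # placeholder path

# Assign each class id a color; a fixed palette could be used instead.
palette = np.random.default_rng(0).random((int(labels.max()) + 1, 3))
colors = palette[labels]

visualizer = Det3DLocalVisualizer()
visualizer.set_points(points, pcd_mode=2, vis_mode='add')
visualizer.draw_seg_mask(np.concatenate((points, colors), axis=-1))
visualizer.show()
```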