
About visualization on val #14

Closed
ymlzOvO opened this issue Mar 26, 2024 · 11 comments

Comments

@ymlzOvO

ymlzOvO commented Mar 26, 2024

Hi, I'm trying to reproduce your code, and it runs successfully with:
python test.py "configs/frnet/frnet-semantickitti_seg.py" "pretrained/frnet-semantickitti_seg.pth"
But when I try to visualize the results with the following command:
python test.py "configs/frnet/frnet-semantickitti_seg.py" "pretrained/frnet-semantickitti_seg.pth" --show --show-dir "show_dirs" --task "lidar_seg"
it fails with:
AssertionError: 'data_sample' must contain 'img_path' or 'lidar_path'
How do you produce visualizations like the ones shown on the project page? I'm not familiar with mmcv and just tried the command from its documentation. Thank you!

@Xiangxu-0103
Owner

Hi @ymlzOvO, you can save the predictions offline and visualize them from there. We visualize the results as BEV and range images, which is not yet fully supported in mmdet3d.

We will also provide a visualization script in the future (we have been busy recently).

@ymlzOvO
Author

ymlzOvO commented Mar 27, 2024

@Xiangxu-0103 Thank you! I've saved the results as .label files, just like the ground truth, and I use Det3DLocalVisualizer() for a static display. Could you share more details on how you produced the demo video with BEV and range images: which libraries, documentation, or codebases should I refer to? Thanks again!

@Wansit99

Hello! I've also been working on visualizing results recently. Could you share your code for saving the results? Thank you!

@ymlzOvO
Author

ymlzOvO commented Mar 28, 2024

Hi, I took the inference code from #13; the code you need is there. For a static display, I use the template from mmdet3d, but I don't know how to make it dynamic...

import numpy as np

from mmdet3d.visualization import Det3DLocalVisualizer

# Load a demo point cloud and reshape it to (N, 3) xyz coordinates.
points = np.fromfile('demo/data/sunrgbd/000017.bin', dtype=np.float32)
points = points.reshape(-1, 3)
visualizer = Det3DLocalVisualizer()
# Random per-point RGB colors stand in for a real segmentation mask.
mask = np.random.rand(points.shape[0], 3)
points_with_mask = np.concatenate((points, mask), axis=-1)
visualizer.set_points(points, pcd_mode=2, vis_mode='add')
visualizer.draw_seg_mask(points_with_mask)
visualizer.show()
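To color points by saved predictions instead of random values, one option (a sketch; the palette below is a hypothetical subset, not FRNet's actual color scheme) is to look up each point's class index in a palette:

```python
import numpy as np

# Hypothetical palette: class index -> RGB in [0, 1]. A real script
# would cover all 20 SemanticKITTI training classes.
PALETTE = np.array([
    [0.0, 0.0, 0.0],  # 0: unlabeled
    [0.4, 0.6, 1.0],  # 1: car
    [1.0, 0.3, 0.3],  # 2: bicycle
], dtype=np.float32)

def labels_to_colors(labels, palette):
    """Map per-point class indices to RGB colors via table lookup."""
    labels = np.asarray(labels, dtype=np.int64)
    if labels.min() < 0 or labels.max() >= len(palette):
        raise ValueError('label index outside palette range')
    return palette[labels]

colors = labels_to_colors([1, 0, 2], PALETTE)  # one RGB row per point
```

The resulting (N, 3) color array can then replace the random mask before the np.concatenate call above.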

@Wansit99


Thank you for your response. However, I am still not clear on how to save a network's results as a .label file. Could you provide more detail? Thank you!

@ymlzOvO
Author

ymlzOvO commented Mar 28, 2024


In the config file frnet-semantickitti_seg.py, there is a line that assigns test_evaluator = dict(type='SegMetric'). Register a new evaluator as in #13 and use it in place of the default one.
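For the saving step itself, the key detail is that a SemanticKITTI .label file is just one uint32 per point, with the training class indices mapped back to raw label IDs. A minimal sketch of what such an evaluator would write per frame (the learning_map_inv entries below are a hypothetical subset; take the full table from semantic-kitti.yaml in practice):

```python
import numpy as np

# Hypothetical subset of learning_map_inv (training index -> raw label ID);
# use the full table from semantic-kitti.yaml in practice.
LEARNING_MAP_INV = {0: 0, 1: 10, 2: 11, 3: 15}

def save_label_file(pred, path):
    """Map predicted class indices back to raw IDs and write a .label file."""
    # Build a lookup table so the mapping is a single vectorized indexing op.
    lut = np.zeros(max(LEARNING_MAP_INV) + 1, dtype=np.uint32)
    for cls_idx, raw_id in LEARNING_MAP_INV.items():
        lut[cls_idx] = raw_id
    raw = lut[np.asarray(pred, dtype=np.int64)]
    raw.tofile(path)  # one uint32 per point, as the SemanticKITTI tools expect
```

Writing with any other dtype (e.g. int64) produces files the semantic-kitti-api cannot read correctly.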

@Wansit99


Thanks! I will try it.

@Wansit99

Wansit99 commented Mar 28, 2024


But when I try it, I get the error below. How can I solve it? Thanks!

#map_inv = self.dataset_meta['learning_map_inv'] #inv mapping
#KeyError: 'learning_map_inv'

@ymlzOvO
Author

ymlzOvO commented Mar 28, 2024


Use 'label_mapping' instead.

@Wansit99


Thank you for sharing; I have successfully saved the results. May I have your personal contact information? I am also researching FRNet, and perhaps we can discuss it together!

@ymlzOvO ymlzOvO closed this as completed Mar 28, 2024
@xiaosa269

Hello,

Thank you very much for your contribution to the visualization on val. I have now saved the .label files generated by the test. However, when I use the semantic-kitti-api (https://github.com/PRBonn/semantic-kitti-api) for visualization:

python ./visualize.py --sequence 11 --dataset /data/semantickitti_frnet/dataset --predictions /data/semantickitti_frnet/dataset

it reports that the number of labels does not match the number of points.

Have you encountered such an issue? If so, how did you resolve it?
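One quick way to narrow such a mismatch down (a sketch; it assumes the standard SemanticKITTI layout, where each point in the .bin is four float32 values and each label is one uint32) is to compare the counts directly:

```python
import numpy as np

def count_points_and_labels(bin_path, label_path):
    """Return (num_points, num_labels) for a scan/.label file pair."""
    # Each SemanticKITTI point is (x, y, z, intensity) as float32.
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    labels = np.fromfile(label_path, dtype=np.uint32)
    return len(points), len(labels)
```

If the counts differ, the .label file was likely written with the wrong dtype (e.g. int64 instead of uint32) or for a filtered subset of the points.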
