Describe the bug
I'm finding the "point cloud" lidar visualizer very helpful, but loading real point clouds into it is quite laggy and slow, to the point that it starts to crash my browser (Firefox). I'm already downsampling each tile to ~10,000 points, and I can't go much lower without losing information. It also seems that after closing the "fullscreen" point cloud view, the point clouds stay loaded in memory, so the page gradually slows down until the tab has to be closed.
I suspect part of the issue is that each point cloud is loaded as a JSON file, which also causes the storage usage on our account to explode. Each sample takes up ~4 MB, so over thousands of steps a single run can easily reach hundreds of gigabytes. I propose either compressing these samples or storing them in a format better suited to point cloud rendering, which would likely have the added benefit of speeding up visualization. For example, I could fit roughly 10x more points into a ~4 MB LAS file than into the 4 MB JSON file.
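To put rough numbers on that (a back-of-the-envelope sketch, not measured against this project; the JSON size depends on how many digits end up serialized):

```python
# Rough size comparison for one ~10,000-point sample with float coordinates
# and 8-bit RGB. Numbers are illustrative only.
import json
import numpy as np

n = 10_000
rng = np.random.default_rng(0)
xyz = rng.uniform(-100.0, 100.0, size=(n, 3))             # float64 coordinates
rgb = rng.integers(0, 256, size=(n, 3), dtype=np.uint8)   # 8-bit colors

# JSON: every coordinate becomes a decimal string, typically 15-20 bytes each.
payload = json.dumps([[*map(float, p), *map(int, c)] for p, c in zip(xyz, rgb)])
print(f"JSON size:  {len(payload) / 1e6:.2f} MB")

# LAS point format 2 packs xyz as scaled int32 plus 16-bit RGB into 26 bytes/point.
print(f"LAS approx: {n * 26 / 1e6:.2f} MB")
```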
Additionally, storing these points as JSON files is slow on the client (logging) side too: after profiling, the json.dumps call is taking up a significant amount of our training time. The point cloud is usually already a numpy array, so having to convert it to a plain Python list and then serialize that to JSON starts to bog down training. Writing the point cloud in a binary format straight from the numpy array, using a library like plyfile or laspy, would likely be significantly faster.
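For example, something along these lines with laspy (just a sketch, untested against this project; the header scales/offsets are placeholder values):

```python
# Sketch: write a numpy point cloud straight to a binary LAS file with laspy,
# skipping the numpy -> Python list -> json.dumps round trip entirely.
import laspy
import numpy as np

def save_las(xyz: np.ndarray, rgb: np.ndarray, path: str) -> None:
    """xyz: (N, 3) float array; rgb: (N, 3) uint8 array."""
    header = laspy.LasHeader(point_format=2, version="1.2")  # format 2 = xyz + RGB
    header.offsets = xyz.min(axis=0)
    header.scales = np.array([0.001, 0.001, 0.001])          # millimeter resolution

    las = laspy.LasData(header)
    las.x, las.y, las.z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    # LAS stores colors as 16-bit, so widen the 8-bit values.
    las.red, las.green, las.blue = (rgb.astype(np.uint16) * 257).T
    las.write(path)

# Random data standing in for one downsampled lidar tile.
points = np.random.uniform(-50.0, 50.0, size=(10_000, 3))
colors = np.random.randint(0, 256, size=(10_000, 3), dtype=np.uint8)
save_las(points, colors, "sample.las")
```

plyfile would look much the same: build a structured numpy array with x/y/z/red/green/blue fields and write it out as a binary PLY vertex element.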
Additionally, some basic point cloud rendering tools would make the interface much more usable: being able to change the size of the displayed points ("point radius") so sparse data is easier to see, and intensity/contrast controls for the displayed RGB colors. Being able to move the point cloud around from within the chart display (without having to go into fullscreen mode) would be a great improvement as well.
Sorry if this is the wrong place to put this; I'd be happy to clarify.
Hi @CCInc, thanks for the feedback and suggestions. I'm going to ticket these feature requests and talk with the eng team about some performance optimizations we can make for point cloud visualizations.
@aidanjdonohue Awesome! I'm happy to help if need be; our team works in web point cloud visualization, so feel free to run ideas through this thread.