I plugged the DGCNN model into my semantic segmentation framework, in which I use other models like PointNet or PointNet++ without problems. At training time everything is fine and I get pretty good accuracies on my airborne LiDAR data (there I randomly sample 8192 points per tile, so everything fits). However, at test time I want to predict all points inside one tile, and for a tile with more than 50000 points I get a memory error:
Aborted (core dumped)
I guess the problem is in the pairwise_distance function. This function calculates an adjacency matrix, and I don't think my GPU memory can handle an array of shape 50000 x 50000. I understand that tf.matmul is very fast on the GPU, but I would like to try a workaround that computes only the k nearest neighbors without this huge memory overhead. Is there anything like this? I know how to use a KDTree in plain Python, but I haven't found a way to use it with TensorFlow placeholders yet...
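To make that last part concrete, here is a rough sketch of the kind of thing I have in mind: wrapping a SciPy cKDTree query with tf.py_func so it can consume a placeholder. The (None, 3) shape, k=20, and the helper name _kdtree_knn are just illustrative assumptions (with batch size 1 at test time I would squeeze the batch dimension first):

```python
import numpy as np
import tensorflow as tf
from scipy.spatial import cKDTree

def _kdtree_knn(points, k):
    # points arrives here as a plain NumPy array of shape (num_points, 3).
    # cKDTree.query returns (distances, indices) for the k nearest
    # neighbors of every point, the query point itself included.
    tree = cKDTree(points)
    _, idx = tree.query(points, k=int(k))
    return idx.astype(np.int32)

k = 20
point_cloud = tf.placeholder(tf.float32, shape=(None, 3))

# tf.py_func runs the SciPy code on the CPU at session-run time.
# The static shape is lost across the py_func boundary, so it has
# to be restored manually with set_shape.
nn_idx = tf.py_func(_kdtree_knn, [point_cloud, tf.constant(k)], tf.int32)
nn_idx.set_shape([None, k])
```

The obvious downside is that the neighbor search runs on the CPU and the py_func op doesn't survive graph serialization, but it avoids the N x N matrix entirely.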
I know I could work around this by using smaller tiles or by downsampling my point clouds, but I would really like to fix this internally...
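One idea for an internal fix that stays on the GPU: compute the distance matrix in chunks inside the graph, so peak memory is chunk_size x num_points instead of num_points x num_points. Below is a rough sketch; chunked_knn is my own name, the assumption that num_points is a multiple of chunk_size is a simplification, and the distance expansion is the same |a|^2 - 2ab + |b|^2 trick that pairwise_distance already uses:

```python
import tensorflow as tf

def chunked_knn(points, k=20, chunk_size=2048):
    # points: (num_points, dims); this simplified sketch assumes
    # num_points is a multiple of chunk_size.
    num_chunks = tf.shape(points)[0] // chunk_size
    sq_norms = tf.reduce_sum(tf.square(points), axis=1)  # (num_points,)
    result = tf.TensorArray(tf.int32, size=num_chunks)

    def body(i, ta):
        chunk = points[i * chunk_size:(i + 1) * chunk_size]   # (c, dims)
        cross = tf.matmul(chunk, points, transpose_b=True)    # (c, num_points)
        # squared Euclidean distances via |a|^2 - 2*a.b + |b|^2
        dists = (tf.reduce_sum(tf.square(chunk), axis=1, keepdims=True)
                 - 2.0 * cross + sq_norms[None, :])
        # top_k of the negated distances gives the k nearest points
        # (the query point itself included, as in DGCNN's knn)
        _, idx = tf.nn.top_k(-dists, k=k)
        return i + 1, ta.write(i, idx)

    _, result = tf.while_loop(lambda i, ta: i < num_chunks, body,
                              [tf.constant(0), result])
    return result.concat()                                    # (num_points, k)
```

With chunk_size = 2048 and 50000 points, the largest intermediate is a 2048 x 50000 float32 tensor (roughly 400 MB), and chunk_size can be lowered further if that is still too much. I haven't measured how much the while_loop overhead costs compared to the single big matmul.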
Some basic information about my implementation:
- Python 3.6
- TensorFlow 1.15
- batch size is already 1 at test time
Thanks in advance for any tips!
I am having a similar issue. Were you able to find a solution? If so, I would appreciate it if you could share it here. Thanks in advance.