It's because conv2d is used, so we need to add an extra 'dummy' dimension, as we only want to stride across one dimension.
An alternative would be to use conv1d, but under the hood that does the same thing (it calls conv2d) and would mean TF reshaping back and forth pointlessly between convolutions.
It's just an implementation choice. You could also use conv1d (the input would be 3-D) or plain batch matrix multiplication (the input would be 2-D) with a few extra transformations.
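To make the equivalence concrete, here is a minimal sketch (using numpy rather than TF, and not taken from the PointNet repo) showing that a conv2d with a 1x3 kernel sliding over a BxNx3x1 "image" computes exactly the same per-point linear map as a batched matrix multiplication on the raw BxNx3 cloud:

```python
import numpy as np

B, N, C_out = 2, 5, 8
rng = np.random.default_rng(0)
points = rng.standard_normal((B, N, 3))  # BxNx3 point cloud
W = rng.standard_normal((3, C_out))      # a 1x3 kernel with C_out filters

# conv2d view: add the dummy channel dim -> BxNx3x1, then slide the
# 1x3 kernel with 'VALID' padding; each window sees one point's xyz.
img = points[..., np.newaxis]            # BxNx3x1
conv_out = np.zeros((B, N, 1, C_out))
for b in range(B):
    for n in range(N):
        for c in range(C_out):
            conv_out[b, n, 0, c] = (img[b, n, :, 0] * W[:, c]).sum()

# batch-matmul view: same numbers, no dummy dimensions needed.
matmul_out = points @ W                  # BxNxC_out

assert np.allclose(conv_out[:, :, 0, :], matmul_out)
```

Either way, each point is mapped independently through the same weights, which is the shared per-point MLP the paper describes.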
In pointnet_cls.py, why have you expanded the dimension of the transformed input point cloud?
```python
def get_model(point_cloud, is_training, bn_decay=None):
    """ Classification PointNet, input is BxNx3, output Bx40 """
    batch_size = point_cloud.get_shape()[0].value
    num_point = point_cloud.get_shape()[1].value
    end_points = {}
```
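The expansion the question refers to can be sketched as follows (numpy stands in for TF here; this is an illustration, not code from the repo): conv2d expects a 4-D NHWC tensor, so the BxNx3 cloud gets a trailing size-1 channel dimension before the first convolution.

```python
import numpy as np

point_cloud = np.zeros((32, 1024, 3))          # BxNx3 input batch
# Equivalent of tf.expand_dims(point_cloud, -1): treat the cloud as a
# BxNx3x1 "image" of height N and width 3 with a single channel.
input_image = np.expand_dims(point_cloud, -1)
assert input_image.shape == (32, 1024, 3, 1)
```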