
about farthest point sampling #83

Closed
amiltonwong opened this issue Dec 17, 2019 · 8 comments

Comments

@amiltonwong

Hi, authors,

Farthest point sampling (FPS) is used in the official implementation of PointNet++ (pointnet2). However, I could not find an FPS implementation in your package. Does your package provide an FPS function?

Thanks!

@Leerw

Leerw commented Aug 13, 2020

Hi @erikwijmans,
I got a wrong result after applying FPS.
The data I used is ShapeNetCore.v1/02691156/52a1b6e8177805cf53a728ba6e36dfae/model.obj.
After sampling points from this model and applying FPS, there is a hole in the point cloud.
Please check it.

@erikwijmans
Owner

Can you please provide screenshots or something? Also note that the FPS algorithm used in PointNet++ is (a) approximate (point zero is always included in the output set) and (b) greedy, meaning that it may make some errors.
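For reference, here is a minimal NumPy sketch of that greedy scheme (an illustration only, not the CUDA kernel in this repo): the first sample is hard-coded to point zero, and every subsequent sample is simply the point farthest from everything selected so far.

import numpy as np

def fps_reference(points, n_samples):
    # points: (N, 3) array; returns the indices of the sampled points.
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    selected[0] = 0  # (a) approximate: always seed with point zero
    dist = np.full(n, np.inf)  # distance to the nearest already-selected point
    for i in range(1, n_samples):
        # Fold the most recently selected point into the nearest-distance table.
        d = np.sum((points - points[selected[i - 1]]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        # (b) greedy: take the point currently farthest from the selected set.
        selected[i] = int(np.argmax(dist))
    return selected

# Example: downsample 1024 random points to 512 indices.
pts = np.random.rand(1024, 3).astype(np.float32)
idx = fps_reference(pts, 512)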

@Leerw

Leerw commented Aug 14, 2020

[attachment: test_data.zip]
[screenshot: pn2_op_test]

You can get the test data from the attached test_data.zip, which includes the mesh (model.obj) and a point cloud (model.pcd).
I sampled 16384 points on this mesh to get a point cloud; afterwards, I applied your FPS method and got a point cloud with a hole in the middle of the airplane, as shown in the screenshot.

test code:

import numpy as np
import open3d as o3d
import torch
from pointnet2_ops.pointnet2_utils import furthest_point_sample, gather_operation

def resample(points, n):
    # points: (B, N, 3) CUDA tensor; returns the FPS-selected points as (B, 3, n).
    idx = furthest_point_sample(points, n)
    return gather_operation(points.transpose(1, 2).contiguous(), idx)

# Load the point cloud sampled from the ShapeNet mesh.
pcd = o3d.io.read_point_cloud("./ShapeNetCore.v1/02691156/52a1b6e8177805cf53a728ba6e36dfae/model.pcd")
points = torch.from_numpy(np.asarray(pcd.points).astype(np.float32)).cuda().unsqueeze(0)
print(points.shape, points.dtype)  # (1, 16384, 3) torch.float32

points = resample(points, 5000)

def vis(points):
    # points is (B, 3, n) after gather_operation; convert back to (n, 3) for Open3D.
    points = points.transpose(1, 2).contiguous().cpu().numpy()[0]
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    o3d.visualization.draw_geometries([pcd])

vis(points)

@erikwijmans
Owner

Can you try removing these two lines: https://github.com/erikwijmans/Pointnet2_PyTorch/blob/master/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling_gpu.cu#L100-L101

I filtered out near-zero points to avoid some numerical instabilities, but, if a model is very small, that may cause issues.
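To see why a very small model is affected, here is a rough Python illustration of that filter (the real check lives in the CUDA kernel linked above; the exact threshold there may differ): any point whose squared norm falls below the cutoff can never be chosen by FPS, so for a small, origin-centered model the center of the cloud gets carved out.

import numpy as np

EPS = 1e-3  # assumed near-zero cutoff, for illustration only

# A small, origin-centered cloud, standing in for a tiny model.
rng = np.random.default_rng(0)
points = rng.uniform(-0.05, 0.05, size=(16384, 3)).astype(np.float32)

sq_mag = np.sum(points ** 2, axis=1)
excluded = sq_mag <= EPS  # these points are skipped by the farthest-point search

print(f"points FPS can never select: {excluded.sum()} / {len(points)}")
# Every excluded point sits near the origin, which matches the kind of
# hole in the middle of the model reported above.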

@Leerw

Leerw commented Aug 14, 2020

Can you try removing these two lines: https://github.com/erikwijmans/Pointnet2_PyTorch/blob/master/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling_gpu.cu#L100-L101

I filtered out near-zero points to avoid some numerical instabilities, but, if a model is very small, that may cause issues.

Thanks, this fixed the issue.

@erikwijmans
Owner

Sounds good. I will consider removing that or making the limit tighter. IIRC, the instability was due to my specific downstream application at the time.

@pixar0407

Can you please provide screenshots or something? Also note that the FPS algorithm used in PointNet++ is (a) approximate (point zero is always included in the output set) and (b) greedy, meaning that it may make some errors.

Hi @erikwijmans,

Thank you for the quality code. I have a few questions.

  1. You mentioned the 'FPS algorithm used in PointNet++'. Do you mean that the algorithm you have implemented here (https://github.com/erikwijmans/Pointnet2_PyTorch/blob/master/pointnet2_ops_lib/pointnet2_ops/_ext-src/src/sampling_gpu.cu) also has those two properties, (a) and (b)?

  2. Can you elaborate on the meaning of 'greedy'?

  3. Efficiency of your FPS algorithm: I have tried two different versions of FPS, yours and the one in pytorch_cluster, and yours seems pretty fast under the same conditions (1024 points down to 512 points, etc.). Can you tell me why your code performs better? (A rough timing sketch of the comparison is below.)
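For question 3, a rough timing sketch like the following makes the comparison concrete (this assumes both pointnet2_ops and torch_cluster are installed and a CUDA device is available; exact numbers depend on hardware). Note the two APIs differ: furthest_point_sample takes a batched (B, N, 3) tensor and a sample count, while torch_cluster.fps takes flattened points with a batch index vector and a sampling ratio.

import time
import torch
from pointnet2_ops.pointnet2_utils import furthest_point_sample
from torch_cluster import fps

B, N, n_out = 32, 1024, 512
xyz = torch.rand(B, N, 3, device="cuda")

def timed(fn, iters=100):
    # Average wall-clock time per call, with CUDA synchronization around the loop.
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()
    return (time.time() - start) / iters

# pointnet2_ops: batched (B, N, 3) input, fixed number of output points.
t_pn2 = timed(lambda: furthest_point_sample(xyz, n_out))

# torch_cluster: flattened (B*N, 3) input plus a batch index vector, ratio-based.
flat = xyz.reshape(-1, 3)
batch = torch.arange(B, device="cuda").repeat_interleave(N)
t_cluster = timed(lambda: fps(flat, batch, ratio=n_out / N))

print(f"pointnet2_ops: {t_pn2 * 1e3:.3f} ms/iter, torch_cluster: {t_cluster * 1e3:.3f} ms/iter")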
