
Sparse voxel for pointset alongside features #182

Closed
Sentient07 opened this issue Nov 5, 2021 · 7 comments

@Sentient07

Hello.

I have (B, N, 3) point sets and (B, N, 1) pointwise features which I wish to voxelize. The goal of this voxelization is to be able to query the voxel grid for an (approximate) feature at any given point with minimal computation time. For example, given a new point whose feature I wish to compute, with a voxel representation I can interpolate from the neighbouring nodes of that point, which is easy to compute knowing the edge information. Alternatively, I could use KNN, but when N is large (typically 1 million), this operation is quite slow.

I can construct my voxels from the point set using torch.histogram, but what is a better way to fill those voxels with my features? I tried to use torch_sparse.coalesce, but when I query a known location, the features do not match. Can someone please help me out?

Thank you.
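For reference, the trilinear lookup from a voxel grid described above can be sketched with torch.nn.functional.grid_sample; the grid feat_voxels, its shape, and the [-1, 1] normalization of the query points below are illustrative assumptions:

import torch
import torch.nn.functional as F

# Hypothetical voxel grid of features (1, C, D, D, D) and M query points
# whose coordinates are already normalized to [-1, 1].
D, C, M = 32, 1, 10
feat_voxels = torch.randn(1, C, D, D, D)
query = torch.rand(M, 3) * 2 - 1

# grid_sample expects the sample locations as (N, D_out, H_out, W_out, 3);
# treat the M queries as a 1 x 1 x M "volume" of locations.
grid = query.view(1, 1, 1, M, 3)
out = F.grid_sample(feat_voxels, grid, mode='bilinear', align_corners=True)
query_feat = out.view(C, M).t()  # (M, C) trilinearly interpolated features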

@rusty1s
Owner

rusty1s commented Nov 5, 2021

Given that you know which points are inside your voxels (e.g., based on a given index for each point), you can utilize torch-scatter to convert individual point features to voxel features, e.g.:

scatter_mean(x, index, dim=0, dim_size=num_voxels)
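For example, a minimal sketch of that idea, assuming points already scaled to [0, 1) and a hypothetical num_voxels bins per axis (the flattening into a single index is just one possible convention):

import torch
from torch_scatter import scatter_mean

N, num_voxels = 1000, 32
pos = torch.rand(N, 3)   # points in [0, 1)^3
x = torch.randn(N, 1)    # one feature per point

# one voxel id per point: discretize each axis, then flatten to a single index
idx3 = (pos * num_voxels).long().clamp_(max=num_voxels - 1)
index = (idx3[:, 0] * num_voxels + idx3[:, 1]) * num_voxels + idx3[:, 2]

voxel_feat = scatter_mean(x, index, dim=0, dim_size=num_voxels ** 3)
voxel_feat = voxel_feat.view(num_voxels, num_voxels, num_voxels, -1)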

Hope this helps!

@Sentient07
Author

Hello @rusty1s, thanks for your very prompt response. Sorry, I didn't quite follow: what do you refer to as index in your example?

Is this the following?

sample = np.random.randn(100, 3)
feat = np.random.randn(100,)
H, edges = np.histogramdd(sample, bins=31, range=None)
index = np.nonzero(H)
out = torch.zeros((32, 32, 32))
out = scatter_mean(feat, index, dim=0, dim_size=32, out=out)

@Sentient07
Author

@rusty1s I think I have a decent understanding of your point, but I'm facing this error when using scatter_mean.


The traceback:

RuntimeError                              Traceback (most recent call last)
<ipython-input-15-99b837f548a5> in <module>
     10 
     11 print(indices.shape, point_feat.shape)
---> 12 scatter_mean(point_feat, indices, dim=0, dim_size=128)

/anaconda/anaconda3/envs/ramana/lib/python3.8/site-packages/torch_scatter/scatter.py in scatter_mean(src, index, dim, out, dim_size)
     39                  out: Optional[torch.Tensor] = None,
     40                  dim_size: Optional[int] = None) -> torch.Tensor:
---> 41     out = scatter_sum(src, index, dim, out, dim_size)
     42     dim_size = out.size(dim)
     43 

/anaconda/anaconda3/envs/ramana/lib/python3.8/site-packages/torch_scatter/scatter.py in scatter_sum(src, index, dim, out, dim_size)
      9                 out: Optional[torch.Tensor] = None,
     10                 dim_size: Optional[int] = None) -> torch.Tensor:
---> 11     index = broadcast(index, src, dim)
     12     if out is None:
     13         size = list(src.size())

/anaconda/anaconda3/envs/ramana/lib/python3.8/site-packages/torch_scatter/utils.py in broadcast(src, other, dim)
     10     for _ in range(src.dim(), other.dim()):
     11         src = src.unsqueeze(-1)
---> 12     src = src.expand_as(other)
     13     return src

RuntimeError: The expanded size of the tensor (1) must match the existing size (3) at non-singleton dimension 2.  Target sizes: [2, 400000, 1].  Tensor sizes: [2, 400000, 3]

I want the scattering (as a mean) to take place over a 3D voxel grid whose indices are in the indices variable, something like below. Can you please help me figure out where I'm going wrong? Many thanks!

voxel = torch.zeros(128, 128, 128)
voxel[indices] = point_feat  # but as a mean, not a plain assignment

@rusty1s
Owner

rusty1s commented Nov 8, 2021

The issue is that your current index is 3-dimensional, while you can only aggregate across a single dimension. Luckily, the torch-cluster package already contains a function to compute the one-dimensional cluster index for a set of points:

import torch
from torch_scatter import scatter_mean
from torch_cluster import grid_cluster

sample = torch.randn(100, 3)

# one-dimensional voxel index per point, for a grid with voxel size 0.5 per axis
cluster = grid_cluster(sample, size=torch.tensor([0.5, 0.5, 0.5]))

# mean position of all points that fall into the same voxel
clustered_pos = scatter_mean(sample, cluster, dim=0)
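Continuing the snippet above, the same cluster index also aggregates the pointwise features; feat here is a hypothetical (100, 1) feature tensor alongside sample:

feat = torch.randn(100, 1)                           # hypothetical pointwise features
clustered_feat = scatter_mean(feat, cluster, dim=0)  # mean feature per occupied voxel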

@Sentient07
Author

@rusty1s Thank you very much! I have a follow-up question. In addition to scatter_mean, I would also like to interpolate values at the grid locations where scatter_mean was not applied. How do I go about it?

min_coord = torch.min(points, dim=1)[0]
max_coord = torch.max(points, dim=1)[0]
voxel_size = 128

# scale points into [0, voxel_size) and discretize to per-axis voxel indices
voxel_points = (points - min_coord.unsqueeze(1)) * (voxel_size / (max_coord - min_coord + 1e-4)).unsqueeze(1)
indices = torch.floor(voxel_points).long()

# flatten the (x, y, z) indices into a single index per point and scatter the features
feat_voxels = torch.zeros((2, voxel_size, voxel_size, voxel_size)).cuda().view(2, -1)
scaled_ind = (indices * torch.LongTensor([voxel_size * voxel_size, voxel_size, 1]).cuda().unsqueeze(0).unsqueeze(0)).sum(dim=-1)

feat_voxels = scatter_mean(point_feat.squeeze(), scaled_ind, out=feat_voxels).view(2, voxel_size, voxel_size, voxel_size)

In this example, feat_voxels contains 0s at the locations where scatter_mean was not applied. I was wondering if the values at those locations could be interpolated based on the existing values written by scatter_mean. Thank you!

@rusty1s
Owner

rusty1s commented Nov 11, 2021

Mh, one way I can think of is to use a hierarchical version based on larger grid sizes: you would then only replace the coarse voxel information with more fine-grained information in case there exist any points in those finer voxels.
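A rough sketch of that idea (the shapes, resolutions, and helper below are assumptions, not a tested recipe): voxelize at a coarse and a fine resolution, upsample the coarse grid, and keep the fine value only where the fine voxel actually contains points:

import torch
import torch.nn.functional as F
from torch_scatter import scatter_mean

def voxelize(points, feat, res):
    # points: (N, 3) in [0, 1), feat: (N,) pointwise features
    idx = (points * res).long().clamp_(max=res - 1)
    flat = (idx[:, 0] * res + idx[:, 1]) * res + idx[:, 2]
    vox = scatter_mean(feat, flat, dim=0, dim_size=res ** 3)
    occupied = torch.zeros(res ** 3, dtype=torch.bool)
    occupied[flat] = True
    return vox.view(res, res, res), occupied.view(res, res, res)

points, feat = torch.rand(100000, 3), torch.randn(100000)
fine, fine_mask = voxelize(points, feat, 128)
coarse, _ = voxelize(points, feat, 32)

# upsample the coarse grid to the fine resolution and use it wherever the
# fine voxel contains no points at all
up = F.interpolate(coarse[None, None], size=(128, 128, 128),
                   mode='trilinear', align_corners=False)[0, 0]
filled = torch.where(fine_mask, fine, up)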

@github-actions

This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity. Is this issue already resolved?
