
INT_MAX error #44

Closed
L-Reichardt opened this issue Jul 17, 2022 · 7 comments

Comments

@L-Reichardt

Hello,

I am encountering the following error when using the model on larger point clouds. I have reduced the model size and the batch size, but I still get the following error.

index_0 = p2v_map.unsqueeze(-1).expand(-1, -1, k)[mask_mat] #[M, ]
RuntimeError: nonzero is not supported for tensors with more than INT_MAX elements,   file a support request

Do you know what causes this? I haven't been able to find anything useful online.

@X-Lai (Collaborator) commented Jul 17, 2022

It means that the number of query-key pairs is too large. You may consider setting a larger voxel_size.
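The scale of the problem can be sketched with a rough back-of-the-envelope calculation. This is an illustrative sketch, not the repo's code: the boolean mask over in-window query-key pairs has on the order of `n_windows * n_max**2` elements, and `torch.nonzero` rejects tensors with more than INT_MAX elements. The point counts below are hypothetical.

```python
# Hypothetical estimate of the query-key pair mask size versus the
# torch.nonzero limit; numbers are illustrative, not from the repo.
INT_MAX = 2**31 - 1

def mask_elements(n_windows, n_max):
    # n_max = maximum number of points falling into a single window;
    # the pair mask covers n_max x n_max entries per window.
    return n_windows * n_max * n_max

n_max_fine = 1200                # points per window at a small voxel_size
n_max_coarse = n_max_fine // 8   # doubling voxel_size -> ~8x fewer points (cubic)

print(mask_elements(2000, n_max_fine) > INT_MAX)    # True  -> nonzero raises
print(mask_elements(2000, n_max_coarse) > INT_MAX)  # False -> fits
```

Because the mask grows quadratically in the per-window point count, a modest increase in voxel_size can pull the total back under the limit.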

@L-Reichardt (Author) commented Jul 17, 2022

Thank you for your answer. I thought maybe it's caused by something else, because the error I get with larger grids is

 pointops_cuda.attention_step1_forward_cuda_v2(N_k, M, h, C, n_max, q, k, index0_offsets, index1, output)
RuntimeError: Caught an unknown exception!

For features I use an (N, 3) point cloud, and the same for coordinates.

My model settings are :

        dataset_dict = {
            'stem_transformer': True,
            'use_xyz': True,
            'sync_bn': False,
            'rel_query': True,
            'rel_key': True,
            'rel_value': True,
            'quant_size': 1.0, # 0.01
            'downsample_scale': 8,                          # Scale for Stratified window
            'num_layers': 1, # 2
            'patch_size': 1,
            'window_size': 8,
            'depths': [1, 1, 1, 1], # [2, 2, 6, 2]
            'channels': [32, 32, 32, 64],  # [48, 96, 192, 384]
            'num_heads': [4, 8, 16, 32], # [3, 6, 12, 24]
            'up_k': 3,
            'drop_path_rate': 0.3,
            'concat_xyz': False,
            'grid_size': 4.0, # 0.04
            'max_batch_points': 200000,
            'max_num_neighbors': 34, # 34
            'ratio': 0.25,                                  # Downsample ratio
            'k': 16, # 16
            'sigma': 1.0
            }

Basically, I reduced the model because the point clouds I work with are so large; I'd rather just get it running in a "minimum" configuration as an initial step.

@L-Reichardt (Author)

Update: the error disappears with 'stem_transformer' = False, so I guess there is an issue there.

@X-Lai (Collaborator) commented Jul 18, 2022

The error RuntimeError: Caught an unknown exception! occurs because you set the number of channels per head to 8. Here we only support 16 or 32.
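The constraint above can be checked directly from the config: the head dimension at each stage is `channels[i] / num_heads[i]`, and per the maintainer's comment it must be 16 or 32. A minimal sketch (the helper function is illustrative, not part of the repo):

```python
# Check each stage's channels-per-head against the values the CUDA
# kernel supports (16 or 32, per the maintainer's comment).
SUPPORTED_HEAD_DIMS = {16, 32}

def head_dims(channels, num_heads):
    # Per-stage head dimension = channels / heads.
    return [c // h for c, h in zip(channels, num_heads)]

# Config from this issue: head dims come out as [8, 4, 2, 2] -> unsupported.
issue_cfg = head_dims([32, 32, 32, 64], [4, 8, 16, 32])
# The commented-out defaults: [16, 16, 16, 16] -> supported.
default_cfg = head_dims([48, 96, 192, 384], [3, 6, 12, 24])

print(all(d in SUPPORTED_HEAD_DIMS for d in issue_cfg))    # False
print(all(d in SUPPORTED_HEAD_DIMS for d in default_cfg))  # True
```

Note that the reduced config fails at every stage, not just the first, so shrinking `channels` without also shrinking `num_heads` is what triggers the kernel error.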

@L-Reichardt (Author)

@X-Lai Thank you, that solved the error. However, now I get the n_max error. Do you have an overview of how and what parameters to set? The parameters are a little opaque to me; so far I have not been able to use the model with external point clouds or random tensors.

@X-Lai (Collaborator) commented Jul 22, 2022

Parameters like voxel_size, quant_size, grid_size, and max_batch_points can be very important.
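To illustrate why voxel_size in particular matters so much, here is a minimal voxel-grid downsampling sketch. This is an assumption about what voxel_size does (collapsing points that fall into the same voxel), not the repo's implementation; a coarser voxel size directly shrinks the point count the window attention has to handle.

```python
# Minimal voxel-grid downsample sketch (illustrative, not the repo's code):
# one representative point is kept per occupied voxel.
import random

def voxel_downsample(points, voxel_size):
    seen = set()
    kept = []
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        if key not in seen:          # keep one point per voxel
            seen.add(key)
            kept.append((x, y, z))
    return kept

random.seed(0)
cloud = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 3))
         for _ in range(50000)]

fine = voxel_downsample(cloud, 0.04)   # tiny voxels: almost every point survives
coarse = voxel_downsample(cloud, 0.5)  # coarse voxels: far fewer points remain
print(len(coarse) < len(fine))  # True
```

With a 0.5 m voxel on this 10x10x3 synthetic cloud, at most a few thousand voxels can be occupied, versus nearly all 50000 input points at 0.04 m; that reduction propagates into every downstream window and pair count.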

@L-Reichardt (Author) commented Jul 26, 2022

@X-Lai How exactly does quant_size relate to the voxel and grid sizes? What are the downsides of increasing its value?

I've needed to use a quant_size >= 0.1 to prevent the n_max error. Do I understand correctly that quant_size affects the sampling inside the transformer windows?
