Hello there @drprojects, @rjanvier, @loicland, @CharlesGaydon! It's very nice to see a well-documented, state-of-the-art architecture that is user-friendly to set up and run. Thanks for your work on the Superpoint Transformer.
We (@pyarelalchauhan, @xbais) are trying to train the architecture on a custom dataset collected in India. We have prepared the dataset as binary PLY files similar to those in the DALES Objects dataset (please see the header of one of our files, attached below).
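For reference, this is how we dump the header shown in the attachment (a minimal sketch using `plyfile`; the tile path is a placeholder):

```python
from plyfile import PlyData

# Placeholder path to one of our raw tiles
ply = PlyData.read("data/custom_data/raw/train/tile_0001.ply")

# Print each element (e.g. 'vertex') with its point count and properties,
# mirroring the header shown in the attachment above
for element in ply.elements:
    print(f"element {element.name} {element.count}")
    for prop in element.properties:
        print(f"  property {prop.name} ({prop.val_dtype})")
```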
We have generated the relevant configuration files and other Python files for our dataset, taking inspiration from the corresponding files for the DALES and S3DIS datasets provided in your repository. The changes we have made for our dataset live in these directories:

- `/configs/datamodule`: added our custom YAML file
- `configs/experiment`: added the relevant YAML files for our dataset
- `/data/`: added `custom_data/raw/train` and `custom_data/raw/test`
- `/src/datamodules`: added the relevant Python file for our dataset
- `/src/datasets/`: added the relevant `custom-data.py` and `custom-data_config.py` files (a simplified sketch of our reader follows this list)
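For context, the core of our `custom-data.py` reader boils down to the following (a simplified sketch, not our exact code; `sem_class` stands in for whatever label field your header defines):

```python
import numpy as np
import torch
from plyfile import PlyData

def read_custom_tile(path: str):
    """Load one DALES-style binary PLY tile into torch tensors.

    Assumes the 'vertex' element carries x, y, z, intensity and a
    per-point semantic label ('sem_class' is our placeholder name).
    """
    vertex = PlyData.read(path)["vertex"]
    pos = torch.stack(
        [torch.as_tensor(np.ascontiguousarray(vertex[k]), dtype=torch.float)
         for k in ("x", "y", "z")],
        dim=-1)
    intensity = torch.as_tensor(
        np.ascontiguousarray(vertex["intensity"]), dtype=torch.float)
    y = torch.as_tensor(
        np.ascontiguousarray(vertex["sem_class"]), dtype=torch.long)
    return pos, intensity, y
```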
We have read issues #32 (related to RANSAC) and #36 (in which you discuss the `voxel`, `knn`, `knn_r`, `pcp_regularization`, `pcp_spatial_weight`, and `pcp_cutoff` parameters), but we are still facing issues. It would be great if you could help us out here!
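For reference, here is our current understanding of these parameters, pieced together from #36 (please correct us if any of this is wrong; the values below are illustrative placeholders, not our actual configuration):

```python
# Our working notes on the partition parameters (our reading of #36;
# values are illustrative placeholders, not our actual configuration)
partition_params = {
    "voxel": 0.1,  # grid size (m) for voxelizing the raw cloud before partition
    "knn": 25,     # number of neighbors used for local geometric features
    "knn_r": 2.0,  # radius (m) capping the neighbor search
    # One entry per partition level; larger values -> coarser superpoints
    "pcp_regularization": [0.01, 0.1, 0.5],
    # Relative weight of xyz coordinates vs features in the partition energy
    "pcp_spatial_weight": [0.1, 0.01, 0.001],
    # Minimum superpoint size per level; smaller components get merged
    "pcp_cutoff": [10, 30, 100],
}
```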
👉 Regarding Errors and Warnings
We are getting the following errors and warnings, which we are unable to resolve at the moment:

- Warning in scikit-learn regression:
- NAG-related issue: `Cannot compute radius-based horizontal graph`:
- `ValueError: min_samples may not be larger than number of samples: n_samples = 2` (following your advice in RANSAC Error on Custom Dataset #32, we have already removed "elevation" from `partition_hf` and `point_hf`, but still could not get the training to start):
- `torch.cat(): expected a non-empty list of Tensors`
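To narrow things down, we also ran a quick sanity check over our raw tiles, since empty or near-empty tiles seemed like a plausible cause of both the `n_samples = 2` error and the empty `torch.cat()` (a rough sketch; paths and thresholds are placeholders):

```python
import glob
import numpy as np
from plyfile import PlyData

# Scan every raw tile for conditions that could break the partition step:
# almost-empty clouds, NaN/Inf coordinates, or a degenerate (flat) extent
for path in sorted(glob.glob("data/custom_data/raw/*/*.ply")):
    vertex = PlyData.read(path)["vertex"]
    xyz = np.stack([vertex["x"], vertex["y"], vertex["z"]], axis=-1)
    n = xyz.shape[0]
    extent = xyz.max(axis=0) - xyz.min(axis=0) if n else np.zeros(3)
    if n < 100 or not np.isfinite(xyz).all() or (extent == 0).any():
        print(f"suspicious tile: {path} (n={n}, extent={extent})")
```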
👉 Regarding Understanding the Configuration
Could you also explain the significance of the `pcp_regularization`, `pcp_spatial_weight`, and `pcp_cutoff` parameters in the `/configs/datamodule/custom_data.yaml` file?
We are currently using the following configuration values:
We have tried tweaking these, but cannot get beyond the processing stage for our dataset; tweaking these parameters gives one or more of the above-mentioned errors and warnings at different stages of processing. Kindly help.
PS: We have already ⭐'ed your repo 😉