Custom data set #82

Closed
IgnacioSan22 opened this issue May 31, 2022 · 10 comments

@IgnacioSan22

I'm trying to train the model on a completely different dataset. I'm struggling to set the values in the config file. Is there any documentation on how to set them correctly, or on their meaning and influence?

@thangvubk
Owner

Please check here for the config explanation.

@IgnacioSan22
Author

I only have two classes, with an unbalanced proportion of points, and regardless of how I set the config parameters, after 1 or 2 epochs of training the semantic branch predicts zero points for the second class. Do you know why this could be?

@theshapguy

I'm also facing a similar issue with unbalanced points: I have 4 classes and only 1 of them is being predicted.

I even tried using WeightedRandomSampler rather than DistributedSampler, but I'm still having the same issue.
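For reference, a minimal sketch of how WeightedRandomSampler can be wired up in plain PyTorch (the dataset and labels below are toy placeholders, not SoftGroup's actual dataloader):

import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy stand-in for a point-cloud dataset: class 0 dominates, classes 1-3 are rare.
sample_labels = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1, 2, 3])
dataset = TensorDataset(torch.randn(len(sample_labels), 6), sample_labels)

class_counts = torch.bincount(sample_labels).float()   # samples per class
class_weights = 1.0 / class_counts                     # inverse frequency
sample_weights = class_weights[sample_labels]          # one weight per sample

# replacement=True lets rare samples be drawn several times per epoch.
sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset),
                                replacement=True)
loader = DataLoader(dataset, batch_size=4, sampler=sampler)

One caveat: for semantic segmentation the imbalance lives per point inside each scan, so a per-scan sampler only helps if the rare classes are concentrated in a few scans; a point-wise loss weight (as discussed below) targets the imbalance more directly.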

@SijanNeupane49

SijanNeupane49 commented Jun 5, 2022

I am also having the same kind of issue: an unbalanced proportion of points in my dataset. I have 3 classes: leaf, stem, and node. For instance, in one of my 3D models (the 16th model) the number of points is as follows:

  1. Leaf = 377,131
  2. Stem = 17,811
  3. Node = 4,667

I also tried changing many parameters in the config file and updated the loss weights according to the class distribution ratio:

def point_wise_loss(self, semantic_scores, pt_offsets, semantic_labels, instance_labels,
                    pt_offset_labels):
    losses = {}
    # F.cross_entropy takes the keyword `weight` (not `weights`), and it must be
    # a float tensor on the same device as the scores.
    class_weights = torch.tensor([1.0, 17.0, 75.0], device=semantic_scores.device)
    semantic_loss = F.cross_entropy(
        semantic_scores, semantic_labels, weight=class_weights,
        ignore_index=self.ignore_label)
    losses['semantic_loss'] = semantic_loss

in model/softgroup.py (around line 144) to counter the class imbalance, but the semantic branch still predicts zero points for the second and third classes.
I have also posted a similar issue in #76 with screenshots of my results.

@SijanNeupane49

@IgnacioSan22 @theshapguy did you manage to solve these issues? If yes, please help me as well. Thanks in advance.

@SijanNeupane49

@thangvubk could you please suggest something regarding this issue? It would be an immense help. Thank you in advance.

@thangvubk
Owner

On a dataset with an imbalanced number of points per class, I have two suggestions:

  1. Use a weighted semantic loss. Follow this.
  2. Train longer. Usually, the classes with more points converge first, then the classes with fewer points.

@SijanNeupane49

@thangvubk Thank you for your quick reply. How did you calculate those semantic weights for STPLS3D? How can I do that for my custom dataset?
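For what it's worth, a common recipe is inverse class frequency, normalized so the majority class gets weight 1; applied to the counts posted above it gives weights of the same order as the [1, 17, 75] used earlier. A minimal NumPy sketch (a standard heuristic, not necessarily how the STPLS3D weights were derived):

import numpy as np

# Per-class point counts from the example above (leaf, stem, node).
# For a real dataset, accumulate label counts over all training scans.
class_counts = np.array([377131, 17811, 4667], dtype=np.float64)

# Inverse-frequency weights, normalized so the majority class gets weight 1.
weights = class_counts.max() / class_counts
print(weights.round(1))   # [ 1.  21.2 80.8]

A common variant takes the square root or log of these ratios so that very rare classes are not over-weighted.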

@zhongxiaj

Please check here for the config explanation.

Great work! I have some questions: how can I organize my own dataset? Is there a more detailed guideline (e.g., how to create my own .tsv (scannetv2-labels.combined.tsv), .aggregation.json, and .segs.json files)? Is there any code for this?

@xiaotiancai899

@zhongxiaj Hi! Have you done this? I also have the same question as you.
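For anyone with the same question: the ScanNet-style inputs are, per scan, a *_vh_clean_2.0.010000.segs.json file (a per-vertex over-segmentation) and a *.aggregation.json file (segments grouped into labeled instances), while scannetv2-labels.combined.tsv maps raw label names to class ids. A rough sketch of writing minimal versions of the two JSON files from per-point labels; the field names follow the public ScanNet format, but the label names, scene id, and the one-segment-per-instance shortcut are illustrative assumptions:

import json
import numpy as np

# Hypothetical per-point annotations for one scan with N points.
N = 1000
semantic_labels = np.random.randint(0, 3, N)    # e.g. 0=leaf, 1=stem, 2=node
instance_labels = np.random.randint(0, 20, N)   # instance id per point

scene = "myscene0000_00"  # hypothetical scene id

# *.segs.json holds a per-vertex over-segmentation; the simplest stand-in is
# one segment per ground-truth instance.
segs = {"sceneId": scene, "segIndices": instance_labels.tolist()}
with open(f"{scene}_vh_clean_2.0.010000.segs.json", "w") as f:
    json.dump(segs, f)

# *.aggregation.json groups segment ids into labeled instances.
label_names = {0: "leaf", 1: "stem", 2: "node"}
seg_groups = []
for i, inst in enumerate(np.unique(instance_labels)):
    majority = np.bincount(semantic_labels[instance_labels == inst]).argmax()
    seg_groups.append({
        "id": i,
        "objectId": i,
        "segments": [int(inst)],            # segment ids from segs.json above
        "label": label_names[int(majority)],
    })
with open(f"{scene}.aggregation.json", "w") as f:
    json.dump({"sceneId": scene, "segGroups": seg_groups}, f)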
