
Bug in training/evaluation: some data missed #39

Closed
DBobkov opened this issue Aug 22, 2017 · 8 comments

DBobkov commented Aug 22, 2017

Dear authors,

I have noticed that for training/evaluation (https://github.com/charlesq34/pointnet/blob/master/train.py#L187) you iterate only over the number of full batches. However, when the number of instances is not divisible by batch_size, some instances are never seen during training or evaluation. For example, for ModelNet40 the train set has 9840 instances and the test set has 2468. Neither is divisible by the default batch size of 32, which means you miss 16 instances during training and 4 during testing. This can have dramatic consequences for much larger batch sizes (which make sense on GPUs with a large amount of RAM).
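To make this concrete, here is a rough sketch of the pattern and one possible fix (paraphrased, not the exact lines in train.py; variable names such as `current_data` and `BATCH_SIZE` are guessed from the repository's style):

```python
import numpy as np

BATCH_SIZE = 32
current_data = np.zeros((9840, 1024, 3), dtype=np.float32)  # placeholder for the loaded point clouds

# Current pattern: integer division drops the remainder, so with 9840 instances
# and BATCH_SIZE = 32 the last 9840 - 307 * 32 = 16 instances are never visited.
file_size = current_data.shape[0]
num_batches = file_size // BATCH_SIZE
for batch_idx in range(num_batches):
    start_idx = batch_idx * BATCH_SIZE
    end_idx = (batch_idx + 1) * BATCH_SIZE
    batch_data = current_data[start_idx:end_idx]
    # ... feed batch_data to the network ...

# One possible fix: round the batch count up and clamp the final slice,
# so the last (smaller) batch is still processed.
num_batches = (file_size + BATCH_SIZE - 1) // BATCH_SIZE
for batch_idx in range(num_batches):
    start_idx = batch_idx * BATCH_SIZE
    end_idx = min((batch_idx + 1) * BATCH_SIZE, file_size)
    batch_data = current_data[start_idx:end_idx]  # last batch may have fewer than BATCH_SIZE items
```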

If you confirm this, I can create a pull request that fixes the problem.

Best
Dmytro Bobkov

@daerduoCarey (Collaborator)

Hi, Dmytro,

Thank you for your note. For the training procedure, missing some data points is okay since we randomly shuffle the data at the beginning of every epoch. The testing procedure in our code is only used on the validation set to tune the model hyperparameters. We use another program to calculate the accuracy across all data points in the testing set and report that number in the paper.

I think this is not a big issue because of the random shuffle we do at the beginning of every training epoch.
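To illustrate, a small sketch (not the repository code) of why the shuffle makes the dropped remainder mostly harmless: the instances that fall into the remainder differ every epoch, so over many epochs each instance is seen with high probability.

```python
import numpy as np

num_instances = 9840
BATCH_SIZE = 32
num_used = (num_instances // BATCH_SIZE) * BATCH_SIZE  # 9824 instances used per epoch

indices = np.arange(num_instances)
for epoch in range(3):
    np.random.shuffle(indices)          # new random order every epoch
    used = indices[:num_used]           # instances actually seen this epoch
    dropped = indices[num_used:]        # a different 16 instances each epoch
    print('epoch %d skips: %s' % (epoch, dropped))
```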

@charlesq34, you can comment on this. What do you think?

Thank you very much!

Bests,
Kaichun

DBobkov commented Aug 23, 2017

Hi Kaichun,

I see. I am wondering: what batch size did you use for training? For example, in my case, when training PointNet on the Stanford dataset with 1900 objects and a batch size of 1000, 900 objects are never seen during training in one epoch (~47% of the dataset). Or am I using too large a batch size? According to Keskar et al., large batch sizes can lead to poor performance.

While we are on the topic of the Stanford data: did you also observe low classification accuracy of PointNet on noisy, occluded datasets (e.g. at most 60% for Stanford objects, where parts of the objects are often missing)?

Best
Dmytro

@daerduoCarey (Collaborator)

We used batch sizes of 64, 128, and 256 for the 3D CAD model experiments. As for the scene semantic segmentation task, I defer to @charlesq34.

I guess that in your case you can use 1024 as the batch size; I think it's fine as long as there are no memory issues for you. If you really have 1,900 data points and you are using a batch size of 1,000, you are basically randomly sampling 1,000 of the 1,900 data points at every batch, so the concepts of epoch and batch are effectively the same here.
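A hypothetical sketch of what I mean: with 1,900 objects and a batch size of 1,000, each batch is essentially a fresh random subset, so you can just sample batches directly instead of thinking in epochs.

```python
import numpy as np

num_instances = 1900
BATCH_SIZE = 1000

for step in range(100):
    # Draw a fresh random batch every step; the epoch boundary carries little meaning here.
    batch_indices = np.random.choice(num_instances, BATCH_SIZE, replace=False)
    # ... gather these indices from the dataset and run one training step ...
```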

As for the Stanford dataset, which dataset is that? Are you talking about the building parser dataset?

Thanks.
Kaichun

DBobkov commented Aug 23, 2017

Dear Kaichun,

Got it, thank you.

Yes, the building parser dataset, with real point cloud data of indoor objects like chairs, tables, doors, etc.

Best,
Dmytro

@daerduoCarey (Collaborator)

We did one experiment using Blensor to simulate partial Kinect-style scans from ShapeNet 3D CAD models. Often the models are ~30-50% occluded, and Kinect-style noise is added. The experiment shows that PointNet still works pretty well for both the object classification and part segmentation tasks. The details can be found in the paper.

Thanks.
Kaichun

DBobkov commented Aug 24, 2017

Kaichun,

Yes, but:

  1. You do not provide any quantitative results for the Blensor-simulated scans in Fig. 3, especially for object classification.

  2. It is unclear how exactly you generated the data for Fig. 8 in the supplementary material. Does "one view of the point cloud" refer to the Blensor simulations? How exactly are the points dropped, at random from a uniform distribution? If so, this does not represent realistic occlusion, but rather subsampling.

  3. You do not provide any quantitative results for object classification on the Stanford building parser dataset. My training gives PointNet around 55-60% accuracy; I was wondering whether this is reasonable.

Because the discussion goes beyond the issue topic, I will close the issue after your answer.

@daerduoCarey (Collaborator)

  1. I think we provided the quantitative comparison for the part segmentation task; please check page 6, the "3D Object Part Segmentation" section, last paragraph: "Results show that we lose only 5.3% mean IoU." We may not have included the classification numbers, but I remember running one experiment showing that the performance does not drop much, maybe 3-5% in classification accuracy.
  2. Sorry for the confusion, but Fig. 8 is a totally different story from partial point cloud data. Fig. 8 uses the full point cloud, with points randomly and uniformly dropped out of the input. "One view point cloud" means the full point cloud (after the randomly selected points are dropped) with no rotation applied. In contrast, the 12-view setting means we rotate the point cloud 12 times, 30 degrees each time (see the first sketch below).
  3. Yes, you are right. We didn't use the Stanford dataset for object classification; we used it mainly for the scene semantic parsing task. I cannot say too much about the performance since I didn't try it personally, but I guess 60% sounds reasonable. Please make sure that you have pre-processed the data correctly and normalized all objects into unit cubes (see the second sketch below). It is also quite important to re-train the network on the partial data.
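Regarding point 2, roughly, the point dropout and 12-view rotation look like this sketch (paraphrased from memory, not the exact evaluation script; the up axis is assumed to be y here):

```python
import numpy as np

def random_point_dropout(points, drop_ratio):
    """Uniformly drop a fraction of points from a full (N, 3) point cloud."""
    n = points.shape[0]
    keep = np.random.choice(n, int(round(n * (1.0 - drop_ratio))), replace=False)
    return points[keep]

def twelve_views(points):
    """Yield 12 copies of the cloud rotated about the vertical axis in 30-degree steps."""
    for k in range(12):
        theta = np.deg2rad(30.0 * k)
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[  c, 0.0,   s],
                        [0.0, 1.0, 0.0],
                        [ -s, 0.0,   c]])  # rotation about the y axis (assumed up axis)
        yield points @ rot
```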
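And for point 3, a minimal sketch of one common way to normalize an object into a unit cube (the exact preprocessing in the repository may differ slightly, e.g. unit sphere instead of unit cube):

```python
import numpy as np

def normalize_to_unit_cube(points):
    """Center an (N, 3) point cloud and scale it so it fits inside a unit cube."""
    points = points - points.mean(axis=0)  # move the centroid to the origin
    half_extent = np.abs(points).max()     # largest coordinate magnitude along any axis
    return points / (2.0 * half_extent)    # all coordinates now lie in [-0.5, 0.5]
```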

Thanks.

Bests,
Kaichun

@DBobkov
Copy link
Author

DBobkov commented Aug 24, 2017

Kaichun,

Thank you, this was helpful!

Best,
Dmytro

DBobkov closed this as completed Aug 24, 2017