Question about the dataloader #2
@lailvlong Hey, thanks for your interest in our project! The short answer to your question is yes, but it doesn't violate the evaluation protocol. (Lines 110 to 112 in 238b1de)
You will see that we access the reference tensors (containing both the TRAIN and TEST splits) based on "data_corpus", which is generated in the constructor according to the input "config_path". Looking back at the main training script https://github.com/voidstrike/FPSG/blob/main/src/trainNetwork.py#L85-L90, you will see that the "config_path" values are indeed different. Moreover, note that the base classes and the novel classes are mutually exclusive: we use all data from the base classes to train the model and test it on the novel classes, so it is not a standard 80/20 split.
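The protocol described above can be sketched in a few lines. This is a toy illustration with made-up category names, not the FPSG code: the point is that the base and novel label sets must be disjoint, and that training uses all base-class data rather than a within-class 80/20 split.

```python
# Toy sketch of the episodic evaluation protocol (hypothetical categories,
# not the actual ModelNet split used by FPSG).
base_classes = {"airplane", "car", "chair"}   # all data used for training
novel_classes = {"lamp", "sofa"}              # all data reserved for testing

# The protocol is valid only if the two label sets do not overlap:
assert base_classes.isdisjoint(novel_classes)

# No per-class 80/20 split: the train/test boundary is the class boundary.
print(sorted(base_classes & novel_classes))  # -> []
```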
Thanks for your reply. (Lines 85 to 87 in 238b1de)
And "reference_path" contains the files of both the base and the novel categories. Thus, for the train dataloader, data from the novel categories is also loaded into "self.img_corpus" and "self.pc_corpus". (Lines 130 to 153 in 238b1de)
While iterating over the train dataloader, items in "self.img_corpus" and "self.pc_corpus" are fetched as "ans[xad]" and "ans[pcad]", including those from the novel categories. (Lines 110 to 126 in 238b1de)
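The failure mode being described can be reproduced with a minimal sketch (hypothetical corpus layout, not the real `modelnet.py` structure): when the corpus holds items from both splits, a plain `randperm` over its full length necessarily returns indices pointing at novel-class items.

```python
import torch

# Toy corpus whose items carry a split label; in FPSG the analogous
# containers would be self.img_corpus / self.pc_corpus (hypothetical layout).
corpus_labels = ["base", "base", "novel", "base", "novel"]

# A permutation over the WHOLE corpus visits every item, so novel-class
# entries inevitably land in training batches.
idx = torch.randperm(len(corpus_labels))
sampled = [corpus_labels[i] for i in idx.tolist()]
print("novel" in sampled)  # -> True, for every possible permutation
```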
When computing the intra_support loss during training, there are two options, "option 1" and "option 2". (Lines 63 to 73 in 238b1de)
The code selects "option 2", which uses "ans[xad]" and "ans[pcad]" and may therefore involve data from the novel categories. This is the point that puzzles me. In contrast, "option 1", which uses "ans[xs]" and "ans[pcs]", should have been adopted. Moreover, I have tried both options. On the ModelNet dataset, the performance of "option 1" (avg_cd = 7.x) is much worse than that of "option 2" (avg_cd = 2.x). It is very strange that the two options perform so differently; perhaps "option 2" has used the evaluation data for training.
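The contrast between the two options can be made concrete with a toy sketch (hypothetical values; the real tensors live in the episode batch "ans"). Only the sampling source differs: option 1 reuses the episode's support set, drawn from base classes only, while option 2 draws fresh (IMG, PC) pairs from the full corpus, which mixes base and novel classes.

```python
# Toy contrast of the two intra_support loss inputs discussed above.
def intra_support_inputs(option, ans):
    if option == 1:
        return ans["xs"], ans["pcs"]    # support set: base classes only
    return ans["xad"], ans["pcad"]      # corpus sample: base + novel classes

ans = {"xs": "support imgs", "pcs": "support pcs",
       "xad": "corpus imgs", "pcad": "corpus pcs"}
print(intra_support_inputs(2, ans))  # -> ('corpus imgs', 'corpus pcs')
```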
@lailvlong Oh my god, I see, that's definitely an information leak. The reason we adopted "option 2" is that we found "option 1" gave us almost the same point cloud during the test phase. We thought it was model collapse, because the model observes one and only one class per episode, so we tried some variants that sample random (IMG, PC) pairs from the dataset (so each episode contains more than one class). We simply used "randperm" and forgot that "self.img_corpus" and "self.pc_corpus" contain evaluation points. Thanks for your question; it shows that the problem was "solved" by the information leak rather than by bringing in more classes. Really sorry for the mistake. Last but not least, could you please try modifying the code so that it keeps random sampling from "self.img_corpus" and "self.pc_corpus" but avoids the information leak? This is entirely optional, so feel free to ignore it. Sorry again for the bug.
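One possible fix, sketched under assumed names (the real corpus layout in `modelnet.py` may differ): keep the random multi-class sampling, but permute only over indices whose category belongs to the base split, so novel-class items can never enter a training batch.

```python
import torch

# Hypothetical per-item category labels and base split (toy data).
labels = ["chair", "lamp", "car", "chair", "lamp"]
base_classes = {"chair", "car"}

# Precompute the base-only index pool once, e.g. in the dataset constructor:
base_idx = torch.tensor(
    [i for i, c in enumerate(labels) if c in base_classes])

# Sampling step: permute the pool, then map back to corpus indices.
# Every sampled index now refers to a base-class item.
perm = base_idx[torch.randperm(len(base_idx))]
print(all(labels[i] in base_classes for i in perm.tolist()))  # -> True
```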
@voidstrike Yes, I also found that, during the test phase, the generated point clouds of different query images are very similar within an episode. In my opinion, this is not caused by model collapse but by the support prototypes (a class-specific prior) dominating the generation. Reconstructing the shape of the query image is much harder than restoring the shape of the support prototype (the mean shape of the support point clouds), and restoring the prototype's shape also yields a low loss. In this way, the model prefers to ignore the query image and output the same result.
@lailvlong A good perspective! Thanks for your investigation and suggestions; considering more categories in one gradient step definitely sounds promising and makes sense to me. Unfortunately, I have already left the university and am no longer working in this area.
Hello! Thanks for sharing your nice project.
I have some uncertainties about the data loader.
FPSG/src/datasets/modelnet.py
Lines 134 to 136 in 238b1de
In my understanding, the above code loads the samples of all the categories as training data, including those reserved for evaluation. Am I misunderstanding something?