Question about the object classification and object localization #18

Open
dingjiansw101 opened this issue Jan 9, 2024 · 0 comments

dingjiansw101 commented Jan 9, 2024

Dear Authors,

Thanks for sharing your great dataset and model. I have a few questions.

  1. In the paper, you mention that "multiple objects can be used to answer certain questions," and I noticed that in the annotation files, "object_names" may contain multiple objects related to a question. However, in Figure 4 there is only one object label score, which confuses me. Why is only one object label predicted, and which of the objects in "object_names" does it correspond to?

  2. Regarding the Object Localization task, it seems that you predict scores for all candidate boxes, so all objects in "object_names" can be found, right? And for the evaluation of object localization, are all the objects in "object_names" involved?

  3. For the "how many" questions, sometimes, I find that the number of objects in "object_names" is not consistent with the predicted number in answers. Is this an annotation error?

  4. Did you train VoteNet and then fix its parameters before training the ScanQA model, or did you train VoteNet and the ScanQA model jointly? (The second sketch after this list shows what I mean by fixing the parameters.)
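
For question 3, this is roughly how I checked the counts. It is only a minimal sketch: the file name ScanQA_v1.0_train.json and the keys "question", "answers", "object_names", and "question_id" are my assumptions about the released annotation format, so please correct me if they differ.

```python
import json

# Assumed annotation file name and keys; adjust if the released format differs.
ANNOTATION_FILE = "ScanQA_v1.0_train.json"

# Map common number words appearing in answers to integers.
WORD_TO_NUM = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
}

def parse_count(answer):
    """Return an integer if the answer looks like a count, else None."""
    answer = answer.strip().lower()
    return int(answer) if answer.isdigit() else WORD_TO_NUM.get(answer)

with open(ANNOTATION_FILE) as f:
    annotations = json.load(f)

for ann in annotations:
    if not ann["question"].lower().startswith("how many"):
        continue
    counts = [c for c in (parse_count(a) for a in ann["answers"]) if c is not None]
    if not counts:
        continue
    # Compare the answered count with the number of entries in "object_names".
    if all(c != len(ann["object_names"]) for c in counts):
        print(ann.get("question_id"), ann["question"],
              ann["answers"], ann["object_names"])
```

The entries printed by this loop are the cases I am referring to; if the count is supposed to be derived differently (for example from an object-ID field rather than "object_names"), that would already answer my question.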
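
For question 4, by "fixed the parameters" I mean something like the following PyTorch-style sketch; the module and variable names (votenet, qa_head) are placeholders, not references to your actual code.

```python
import torch
from torch import nn

def build_optimizer(votenet: nn.Module, qa_head: nn.Module,
                    freeze_detector: bool, lr: float = 1e-4):
    """Illustrate the two training schemes I am asking about."""
    if freeze_detector:
        # Scheme A: VoteNet is pretrained separately and then fixed;
        # only the QA components are updated during ScanQA training.
        for param in votenet.parameters():
            param.requires_grad = False
        params = list(qa_head.parameters())
    else:
        # Scheme B: VoteNet and the QA components are trained jointly
        # (possibly after initializing VoteNet from a pretrained checkpoint).
        params = list(votenet.parameters()) + list(qa_head.parameters())
    return torch.optim.Adam(params, lr=lr)

# Tiny placeholder modules just to make the sketch runnable.
optimizer = build_optimizer(nn.Linear(8, 8), nn.Linear(8, 8), freeze_detector=True)
```

Which of these two schemes corresponds to the setting used for the reported results?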

Looking forward to your reply.

Best,
Jian Ding
