some code problems #1

Closed
zhouzhen0720 opened this issue Oct 19, 2023 · 5 comments

@zhouzhen0720

First of all, congratulations on being accepted to NeurIPS 2023!
Here are some issues I encountered while running the code:

Problem 1: train_net.py line 113, self.model.module.criterion.matcher.iter = self.iter, raises
AttributeError: 'RankDETR' object has no attribute 'module'

It seems the correct code should be: self.model.criterion.matcher.iter = self.iter

Problem 2: rank_transformer.py line 279, outputs_class_tmp = self.class_embed[layer_idx], raises
TypeError: 'NoneType' object is not subscriptable

The likely cause is line 167: "self.class_embed = None"

Looking forward to your reply.

@yifanpu001
Collaborator

Hi Zhouzhen, I appreciate your interest in our work.

For problem 1, I believe you are running the code with only one GPU. When we create the model with model = create_ddp_model(model, **cfg.train.ddp), it is wrapped with DistributedDataParallel when you are using 8 GPUs. The issue occurs when you are using only 1 GPU (see ./detectron2/detectron2/engine/defaults.py, line 70 for details). To fix this issue, I will add a condition that determines whether ".module" is needed.
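As a rough illustration, such a condition could look like the sketch below (unwrap_model is a hypothetical helper name used only for this comment, not code from the repository):

```python
import torch
from torch.nn.parallel import DistributedDataParallel

def unwrap_model(model: torch.nn.Module) -> torch.nn.Module:
    # With multiple GPUs, create_ddp_model wraps the detector in DDP and the real
    # module lives under `.module`; with a single GPU it is the plain module.
    return model.module if isinstance(model, DistributedDataParallel) else model

# Hypothetical usage at train_net.py line 113:
# unwrap_model(self.model).criterion.matcher.iter = self.iter
```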

Regarding problem 2, I haven't encountered the same situation. Although line 167 of rank_transformer.py sets "self.class_embed = None" in the init method of RankDetrTransformerDecoder, it is later overwritten by a Linear head in rank_detr.py line 148. Could you please provide more details so we can reproduce this error?

@zhouzhen0720
Author

@yifanpu001 Thanks for your reply.
I think problem 2 occurs because I used the configuration file h_deformable_detr_r50_50ep.py, where as_two_stage is set to False. As a result, class_embed is not passed to the decoder. I noticed that this error does not occur with the configuration files that use the two-stage setting.

Additionally, I have another question. In line 280 of rank_transformer.py, Sl is not included as a learnable embedding, whereas it is included in lines 264-269 of rank_detr.py. Is this difference intentional?

@yifanpu001
Collaborator

Hi, we designed it this way. Inside the QRL, we obtain the confidence score from the classification head without adding any additional learnable logit bias. We have clarified it on page 4 of our paper.
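For what it's worth, a minimal sketch of that idea (taking the confidence directly from the classification head, with no learnable logit bias added) might look like this; all names and shapes below are assumptions for illustration, not the actual RankDETR code:

```python
import torch

def query_confidence(class_logits: torch.Tensor) -> torch.Tensor:
    # class_logits: (batch, num_queries, num_classes), raw classification logits.
    # A query's confidence is its highest class probability; nothing learnable is added.
    return class_logits.sigmoid().max(dim=-1).values  # (batch, num_queries)

def rank_queries(query_embed: torch.Tensor, class_logits: torch.Tensor):
    # Reorder queries so the most confident ones come first.
    scores = query_confidence(class_logits)
    order = scores.argsort(dim=-1, descending=True)
    ranked = torch.gather(query_embed, 1, order.unsqueeze(-1).expand_as(query_embed))
    return ranked, order
```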

@zhouzhen0720
Author

Thanks a lot!

@yhj-1

yhj-1 commented Jul 21, 2024

Although "self. class_imbed" will be overridden by a Linear header in line 148 of rank_detr. py, its condition is that it must be "as two stages=True". I am training the "rank_detr_r50_50ep. py" configuration file and do not need "as_two_stage". This problem is still unresolved
