
Some question about paper #9

Closed
jyang68sh opened this issue Jul 15, 2022 · 4 comments


@jyang68sh

Hi,

After carefully reading the paper, I am not sure whether I have understood it correctly.

The paper proposes a loss that helps detect the abnormal (outlier) class.

steps:
1. Formulate D_in and D_out; D_in should not overlap with D_out.
2. Train the model with D_in.
3. Retrain the model with D_out.

Question: what do you mean by "fine-tune only the final classification block using the loss in (2)"?

Thanks!

@yyliu01
Collaborator

yyliu01 commented Jul 15, 2022

Hi,

In step 2), the model training is done by other works, which is why we must load their pre-trained checkpoint.

All we do is step 3), but we are not re-training the entire model, only fine-tuning the last classification block, as shown here. The reason we use the phrase "fine-tuning" is that we also partially load the weights of that classification block here.

Also note that in step 3) we fine-tune the final block with both D_in and D_out, as the input data contains both driving scenes and the synthetic OOD regions. You can find this in Fig. 2 on page 6.
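To make the setup concrete, here is a minimal PyTorch sketch of that idea. This is not the repository's actual code; `model`, `classifier_head_name`, and `pal_loss` are placeholder names I am assuming for illustration.

```python
import torch

def build_finetune_optimizer(model, classifier_head_name="classifier", lr=1e-4):
    # Freeze everything except the final classification block
    # (parameters whose names start with the assumed head prefix).
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(classifier_head_name)
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=lr)

def finetune_step(model, optimizer, images, labels, pal_loss):
    # `labels` mark the synthetic OOD pixels with the extra-class index,
    # while inlier pixels keep their usual class ids.
    optimizer.zero_grad()
    logits = model(images)           # (B, K+1, H, W): K inlier classes + abstention channel
    loss = pal_loss(logits, labels)  # the loss in (2), here an assumed callable
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point is just that the pre-trained backbone stays frozen and only the last block receives gradients from the loss in (2).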

Regards,
Yuyuan

@yyliu01
Collaborator

yyliu01 commented Jul 16, 2022

Closing the issue, but feel free to reopen.

@yyliu01 yyliu01 closed this as completed Jul 16, 2022
@jyang68sh
Author

jyang68sh commented Jul 18, 2022

> The minimisation of the loss in (3) will abstain from classifying outlier pixels into one of the inlier classes, where a pixel is estimated to be an outlier with a_ω

Hi @yyliu01
From what I understood, we want to minimise the loss so that it does not classify outlier pixels into one of the inlier classes. But according to the paper, the minimisation of the PAL loss seems to be different.

Could you please explain? Thanks!

@yyliu01
Collaborator

yyliu01 commented Jul 27, 2022

@jyang68sh Sorry, I didn't see your post here; please reopen the issue if you feel the question is not well answered.

Your understanding is totally correct: the confidence of the extra (abstention) channel is added back to the inlier class after being divided by the "reward" value. In this part, PAL guides this "reward" via an energy-based (i.e., EB) regularisation.
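For intuition, here is a rough sketch of that abstention-with-reward form. This is my own illustrative code, not the repository's implementation, and the energy-based regularisation that shapes the pixel-wise reward `a_w` is omitted.

```python
import torch
import torch.nn.functional as F

def abstention_loss(logits, targets, a_w):
    """Sketch of the abstention-with-reward idea (not the exact PAL loss).

    logits:  (B, K+1, H, W) -- K inlier classes plus one abstention channel
    targets: (B, H, W)      -- inlier class ids in [0, K-1]
    a_w:     (B, H, W)      -- assumed pixel-wise reward (the paper ties it to free energy)
    """
    probs = F.softmax(logits, dim=1)
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # probability of the labelled class
    p_abstain = probs[:, -1]                                   # probability of the abstention channel
    # The abstention mass, divided by the reward, is "added back" to the true class,
    # so minimising the loss can either raise p_true or abstain (cheaply where a_w is small).
    return -torch.log(p_true + p_abstain / a_w).mean()
```

So minimising this form is consistent with what you describe: for outlier pixels the model is encouraged to put mass on the abstention channel rather than on any inlier class.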

In case the functions in the paper are still confusing, please feel free to send me an email or re-open the issue.

Regards,
Yuyuan
