Some question about paper #9
Hi,

In step 2), training the model is done by prior works, which is why we must load the pre-trained checkpoint. We only perform step 3), and we do not re-train the entire model; we only fine-tune the last classification block, as shown here. We use the phrase "fine-tuning" because we also partially load the weights of that classification block here. Note that in step 3) we fine-tune the final block with both D_IN and D_OUT, as the input data contains both driving scenes and the synthetic OOD regions. You can find this in Fig. 2 on page 6.

Regards,
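For illustration only, selecting which parameters to fine-tune by name might look like the following minimal sketch. This is a framework-free toy, not the authors' code: the `classifier.` prefix and the parameter names are assumptions, and the released repository may organise its modules differently.

```python
# Minimal sketch: mark only the final classification block as trainable,
# keeping the pre-trained backbone frozen. The "classifier." prefix is an
# assumed naming convention, not taken from the authors' repository.

def select_trainable(param_names, trainable_prefix="classifier."):
    """Map each parameter name to True if it should receive gradient
    updates during fine-tuning, else False (frozen)."""
    return {name: name.startswith(trainable_prefix) for name in param_names}

# Toy parameter list mimicking a segmentation network: a frozen backbone
# plus a final classification block that gets fine-tuned.
params = [
    "backbone.layer1.weight",
    "backbone.layer4.weight",
    "classifier.conv.weight",
    "classifier.conv.bias",
]

flags = select_trainable(params)
trainable = [name for name, keep in flags.items() if keep]
print(trainable)  # only the classification-block parameters remain trainable
```

In a real framework this selection would typically translate into disabling gradients for the frozen parameters and passing only the trainable ones to the optimizer.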
Closing the issue, but feel free to reopen.
> The minimisation of the loss in (3) will abstain from classifying outlier

Hi @yyliu01, could you please explain this part? Thanks!
@jyang68sh Sorry, I didn't see your post here; please reopen the issue if you feel the question is not well answered.

Your understanding is totally correct: the confidence of the extra channel is added back to the inliers after dividing by the "reward" value. In this part, PAL guides this "reward" via an energy-based (i.e., EB) regularisation.

If the functions in the paper are still unclear, please feel free to send me an email or re-open the issue.

Regards,
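For reference, energy-based OOD regularisation usually builds on the free-energy score, E(x) = -T · log Σ_k exp(logit_k / T), so that confident inlier pixels receive low energy and uncertain or OOD-like pixels receive high energy. Below is a minimal pure-Python sketch of that score only; the temperature `T` and the example logits are illustrative assumptions, and this is not the paper's exact abstention/reward formulation.

```python
import math

def energy_score(logits, T=1.0):
    """Free-energy score E(x) = -T * log(sum_k exp(logit_k / T)).
    Lower energy -> confident inlier; higher energy -> more OOD-like.
    Uses the max-shift trick for numerical stability."""
    m = max(logits)
    return -T * (m / T + math.log(sum(math.exp((l - m) / T) for l in logits)))

# A confident prediction (one dominant logit) yields lower energy than a
# flat, uncertain prediction over the same number of classes.
confident = [10.0, 0.0, 0.0]
uncertain = [1.0, 1.0, 1.0]
print(energy_score(confident) < energy_score(uncertain))  # prints True
```

Thresholding such an energy score per pixel is one common way to decide when a model should abstain, i.e. flag a region as an outlier instead of assigning an inlier class.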
Hi,
So, after carefully reading the paper, I am not sure I understood it correctly.
The paper proposes a loss that helps identify abnormal (out-of-distribution) classes.
steps:
1. Formulate D_IN and D_OUT; D_IN should not overlap with D_OUT.
2. Train the model with D_IN.
3. Retrain the model with D_OUT.
Question: what do you mean by "fine-tune only the final classification block using the loss in (2)"?
Thanks!