Question concerning OOD detection #7

Open
romain-martin opened this issue Feb 1, 2024 · 2 comments

@romain-martin
First of all, thank you for your work.
The method is promising and your article is very interesting, so I tried to use it in two ways:

  • determining whether a detected object is a False Positive
  • determining the absence of an object in an image

I'm using the .pt weights you kindly provided, and I tried to implement the ATD and CTW methods.
However, the results were really bad, leading me to think I missed something. For my first use case the prompt was only:
"A photo of a person with a {}" ("A photo of a person without a {}") with "hat", "cap", "helmet" as the class names.
Using ATD, everything is considered OOD; using CTW, almost everything is considered ID.
I have some questions regarding your paper:
Do you have a reference or a paper explaining where Eq. 4 comes from? Regarding the CTW method, Eq. 4 should be over 0.5 for the classification to be OOD, right?
Also, where does Eq. 8 come from?
As for Eq. 6, to compute p_ij, this is a kind of softmax, right? Just with a temperature parameter added?
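Writing out how I read Eq. 6 (notation mine: f_i is the image feature, g_j the "yes" text feature of class j, C the number of classes):

$$p_{ij} = \frac{\exp(\langle f_i, g_j \rangle / \tau)}{\sum_{k=1}^{C} \exp(\langle f_i, g_k \rangle / \tau)}$$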
If so, wouldn't the ATD method be unusable when you have only one class and just want to discard false positives, since p_ij is then equal to 1?
The first thing that came to my mind was to find the index of the maximum value in logits and check logits[index] > logits_no[index] to decide ID vs. OOD. However, I suppose that is mathematically incorrect, since you didn't mention it in your paper, and the tests I ran also led to bad results.

Here are the functions I wrote for ATD and CTW from what I understood of your paper; they are a bit raw as they are a work in progress. I used the code in the "handcrafted" folder; from what I understood, that is the one to use when dealing with custom prompts rather than the learned ones.
Both of them take the logits and logits_no computed this way:
logits = F.normalize(feat, dim=-1, p=2) @ fc_yes.T
logits_no = F.normalize(feat, dim=-1, p=2) @ fc_no.T
As well as a tau parameter, which I set to 1 for now.
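One thing I'm not sure about (my assumption, not something from the repo): cosine similarity needs both sides unit-normalized, so if fc_yes / fc_no are not already normalized, the logits would be:

import torch.nn.functional as F

img = F.normalize(feat, dim=-1, p=2)                 # unit-normalize image features
logits = img @ F.normalize(fc_yes, dim=-1, p=2).T    # cosine similarity with "yes" text features
logits_no = img @ F.normalize(fc_no, dim=-1, p=2).T  # cosine similarity with "no" text features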

import math

def CTW(logits_yes, logits_no, tau):
    # Competing-to-win: classify with the "yes" softmax, then flag the input
    # as OOD if the winning class's "no" probability exceeds 0.5.
    yes = logits_yes[0].detach().tolist()
    no = logits_no[0].detach().tolist()
    # p_ij: temperature-scaled softmax over the "yes" logits (Eq. 6)
    denominator = sum(math.exp(y / tau) for y in yes)
    pij = [math.exp(y / tau) / denominator for y in yes]
    # p^no_ij: pairwise "no" vs "yes" probability for each class (Eq. 4)
    pijno = [math.exp(n / tau) / (math.exp(y / tau) + math.exp(n / tau))
             for y, n in zip(yes, no)]
    index = pij.index(max(pij))
    bestood = pijno[index]
    # ID if the winning class is more "yes" than "no"
    return (index, 1 - bestood > bestood)

def ATD(logits_yes, logits_no, tau):
    # Agreeing-to-differ: aggregate an OOD probability over all classes,
    # then flag the input as ID only if some class probability beats it.
    yes = logits_yes[0].detach().tolist()
    no = logits_no[0].detach().tolist()
    pijno = [math.exp(n / tau) / (math.exp(y / tau) + math.exp(n / tau))
             for y, n in zip(yes, no)]
    denominator = sum(math.exp(y / tau) for y in yes)
    pij = [math.exp(y / tau) / denominator for y in yes]
    index = pij.index(max(pij))
    # p_OOD = 1 - sum_j (1 - p^no_ij) * p_ij  (Eq. 8)
    ood = 1.0 - sum((1 - pno) * py for pno, py in zip(pijno, pij))
    # res = 1 (ID) if any class probability exceeds the OOD probability
    res = int(any(py > ood for py in pij))
    return (index, res)
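They are called like this:

index, is_id = CTW(logits, logits_no, tau=1.0)  # predicted class index and ID/OOD flag
index, is_id = ATD(logits, logits_no, tau=1.0)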

The return value is 1 if the input is ID and 0 otherwise.
The model is in eval mode, and I use the process_test function returned by load_model() to preprocess the images I load with PIL's Image.open().
So I don't know if I did something wrong or if I "just" need to retrain the model.
Thanks for your help!

@SiLangWHL
Collaborator

Hi,

Sorry for the late reply.

  • "I'm using the .pt weights you kindly provided," The provided weights are for CLIPN with learnable prompts. So these models didn't know the meaning of negative keywords. That means you should train CLIPN (hand-crafted prompts) by yourself.
  • "In this case, wouldn't the ATD method be unusable when you only have one class and just want to discard the FP as pij is equal to 1?" It seems right. CLIPN works based on finding the best ID score or adjusting ID scores. When deploying it to binary classification task, saying no probability is functionally equal to the (1 - ID probability).
  • "Do you have a reference or a paper explaining where Eq.4 comes from? " Referring to Figure 4, equation 4 is used to teach the CLIP model to dis-match the image with its positive text. The motivation is opposite to the original contrastive loss in the CLIP paper.

@romain-martin
Author

Hello,
Thanks for your answer.

OK, I thought the weights already knew the meaning of the negative keywords. Just to be sure, as I may have misunderstood the difference between hand-crafted prompts (1) and learnable ones (2):
(1) means CLIPN has two text encoders, one of them used for the negative prompts, so that the ATD or CTW strategy can enhance the original CLIP's ability to perform zero-shot classification, i.e. it can assess whether an image matches none of the given categories.
(2) has the same purpose, except that the negative weights have been learned, so the text encoders are no longer needed; the weights are already embedded in the model?
Correct me if I'm wrong, but in that case, isn't (1) more general and therefore better suited for zero-shot classification?

  • Is it planned to release a model with hand-crafted prompts? If not, I can train one myself using run.sh in the handcrafted directory, using CC3M for example to learn how to say "no". Or would it be better to train using images from my use case as positive prompts and CC3M or ImageNet as negative prompts? The goal is mainly to determine whether an image is a False Positive (i.e. does not belong to a list of predetermined categories). How long did it take you to train on 4 GPUs?

  • As for Eq.4, I understand the purpose but from where comes this equation? (3) Look like a kind of cross Entropy Loss, but as for (4) it looks as a softmax function to get the probability of being a no, right?

Thank you again
