
No pseudo/hypothetical labels found in the code. #8

Closed
SupeRuier opened this issue Aug 2, 2021 · 1 comment
SupeRuier commented Aug 2, 2021

Hi Jordan Ash,

In your paper, the gradient embedding is computed from the loss between the network output and the hypothetical labels (which are themselves inferred from the network output).

However, in your code, I couldn't find anything related to pseudo/hypothetical labels.

In the file badge_sampling.py, it seems that you directly use the true labels to guide your selection. If so, this would be an unfair comparison.

    gradEmbedding = self.get_grad_embedding(self.X[idxs_unlabeled], self.Y.numpy()[idxs_unlabeled]).numpy()

I'm not sure if I'm missing something. Could you show where the hypothetical labels are used in your code?

Thanks,
Rui

SupeRuier (Author) commented

Sorry, I found it. It's in the file strategy.py:

    batchProbs = F.softmax(cout, dim=1).data.cpu().numpy()
    maxInds = np.argmax(batchProbs,1)
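For context, here is a minimal NumPy sketch of how these hypothetical labels feed the gradient embedding in a BADGE-style setup. All names below are illustrative, not the repository's actual API: the pseudo-label is the argmax of the softmax, so the last-layer gradient can be formed without any true labels.

```python
import numpy as np

def grad_embedding(logits, penult):
    """Sketch of a BADGE-style gradient embedding (illustrative names).

    logits: (n, c) network outputs; penult: (n, d) penultimate features.
    The hypothetical label for each point is the argmax of the softmax,
    so no ground-truth labels are needed.
    """
    # numerically stable softmax over classes
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    pseudo = probs.argmax(axis=1)                 # hypothetical labels
    one_hot = np.eye(logits.shape[1])[pseudo]
    # d(cross-entropy)/d(logits) under the pseudo-label is (probs - one_hot);
    # the last-layer gradient embedding is its outer product with the features.
    g = (probs - one_hot)[:, :, None] * penult[:, None, :]
    return g.reshape(len(logits), -1)             # (n, c * d)
```

Because the pseudo-label is the predicted class, the gradient component for that class is negative (probs minus one), which is exactly why no true labels enter the selection step.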

Sorry for interrupting.
