Hi Jordan Ash,
In your paper, the gradient embedding is computed from the loss between the network output and hypothetical labels (inferred from the network output itself).
However, in your code, I didn't find anything about pseudo/hypothetical labels.
In the file badge_sampling.py, it seems that you directly use the true labels to guide your selection. If so, this would be an unfair comparison.
badge_sampling.py
gradEmbedding = self.get_grad_embedding(self.X[idxs_unlabeled], self.Y.numpy()[idxs_unlabeled]).numpy()
I'm not sure if I missed something. Could you show how you use the hypothetical labels in your code?
Thanks, Rui
Sorry, I found it here in the file strategy.py.
strategy.py
batchProbs = F.softmax(cout, dim=1).data.cpu().numpy()
maxInds = np.argmax(batchProbs, 1)
Sorry for interrupting.
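For anyone else reading this thread: the pseudo-labels are the argmax of the softmax probabilities, and the gradient embedding is the cross-entropy gradient with respect to the last linear layer, taken at that pseudo-label. A minimal NumPy sketch of that computation (the function names and shapes here are illustrative, not taken from the repo's API):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def grad_embedding(logits, penult):
    """BADGE-style gradient embedding.

    logits: (n, c) network outputs; penult: (n, d) penultimate-layer features.
    The label used is the *hypothetical* label argmax(p), never the true label.
    """
    probs = softmax(logits)
    yhat = probs.argmax(axis=1)          # pseudo/hypothetical labels
    n, c = probs.shape
    d = penult.shape[1]
    emb = np.zeros((n, c * d))
    for i in range(n):
        g = probs[i].copy()
        g[yhat[i]] -= 1.0                # dL/dlogits of cross-entropy at yhat
        emb[i] = np.outer(g, penult[i]).ravel()
    return emb
```

Because `g` sums to zero for every example (the probabilities sum to one and exactly one entry has 1 subtracted), each row of the embedding is orthogonal to the all-ones direction within each feature slot, and confidently predicted points get small-norm embeddings.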