What does `target_gradient[0].shape == torch.Size([50257, 768])` mean in `attack.py`? When I use the text-classification setting of GPT, this condition is not met, and I don't understand the purpose of this equality check.
Thanks for reaching out! If I remember correctly, TAG assigns different weights to different layers of a language model, with layers closer to the output receiving larger weights.
Now consider the case of GPT-2. As you may know, the first layer of the model is the word embedding layer, which translates a token from a vocabulary of size 50257 into an embedding of length 768. The last layer of GPT-2 is a fully connected layer that reuses the weights of the first layer to translate an internal state of size 768 back into an output token from the 50257-token vocabulary.
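As a minimal sketch (not the repository's actual code), this kind of weight tying can be reproduced in plain PyTorch: the embedding matrix and the output projection point at the same tensor, which is why autograd produces a single gradient array for both layers.

```python
import torch.nn as nn

VOCAB, HIDDEN = 50257, 768

# First layer: token id -> embedding of length 768.
wte = nn.Embedding(VOCAB, HIDDEN)

# Last layer: hidden state of size 768 -> logits over 50257 tokens.
# nn.Linear stores its weight as (out_features, in_features) = (50257, 768),
# so it can share the exact same tensor as the embedding.
lm_head = nn.Linear(HIDDEN, VOCAB, bias=False)
lm_head.weight = wte.weight  # weight tying: one parameter, two layers

assert lm_head.weight is wte.weight
assert tuple(lm_head.weight.shape) == (VOCAB, HIDDEN)
```

Because the two modules hold one and the same parameter tensor, PyTorch computes exactly one gradient array for it, and where that array lands in the gradient list is the crux of the question.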
Since the two layers share the same weight parameter $w$, a problem arises: which array in the gradients computed by PyTorch is the gradient of $w$? There is no definitive answer.
In some cases it is the first array of the gradients; in others it is the last. This depends on the versions of PyTorch and the transformers library in use. I did observe both cases in my experiments on different platforms, but I can't remember the precise condition.
This matters because in one case $w$ will be assigned the smallest weight in TAG, while in the other case it will be the other way around. But the idea in TAG is consistent: assign $w$ the largest weight.
Thus, we need to make sure that the gradient of $w$ is indeed the last array in the computed gradients. That is why I added the check and the conditional swap you mention. I have also left some hints in the comments around this check:
```python
# if the first layer is transformer.wte.weight, it should
```
Back to your problem: if the condition is not met, so be it. It means that the order of your computed gradients is already the expected one, and the following code can simply be skipped.
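A hypothetical sketch of such a guard (the function name and the list-based swap are assumptions for illustration, not the actual `attack.py` code). Plain-Python stand-ins with a `.shape` attribute are used so it runs without PyTorch:

```python
from types import SimpleNamespace

VOCAB, HIDDEN = 50257, 768

def reorder_gradients(target_gradient):
    """If the tied embedding/output gradient appears first, move it to the
    end, so TAG's layer weighting assigns it the largest weight. If the
    condition is not met, the list is already in the expected order and is
    returned unchanged."""
    if target_gradient and tuple(target_gradient[0].shape) == (VOCAB, HIDDEN):
        return target_gradient[1:] + [target_gradient[0]]
    return target_gradient  # nothing to do: skip the swap

# Stand-in gradient arrays: only .shape matters for the check.
g_tied = SimpleNamespace(shape=(VOCAB, HIDDEN))   # gradient of the shared w
g_other = SimpleNamespace(shape=(HIDDEN,))        # some other layer's gradient

# Tied gradient first -> moved last; already last -> untouched.
assert reorder_gradients([g_tied, g_other]) == [g_other, g_tied]
assert reorder_gradients([g_other, g_tied]) == [g_other, g_tied]
```

Either way, after the guard the gradient of $w$ sits at the end of the list, which is what TAG's weighting scheme expects.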