
The meaning of target_gradient[0].shape == torch.Size([50257, 768]) in attack.py #1

Closed
ywxsuperstar opened this issue May 22, 2024 · 1 comment


@ywxsuperstar

What does target_gradient[0].shape == torch.Size([50257, 768]) mean in attack.py? When I use GPT in the text-classification setting, this condition is not met, and I don't understand the purpose of this equality check.

@SamuelGong
Owner

SamuelGong commented May 22, 2024

Thanks for reaching out! If I remember correctly, TAG assigns different weights to different layers of a language model, with layers closer to the output receiving larger weights.

Now consider the case of GPT-2. As you may know, the first layer of the model is the word embedding layer, which maps a token from a vocabulary of size 50257 to an embedding of length 768. The last layer of GPT-2 is a fully connected layer that reuses the weights of the first layer to map an internal state of size 768 back to an output token over the same 50257-token vocabulary.
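For concreteness, here is a minimal sketch (using the Hugging Face transformers library, not code from this repo) showing that the two layers share a single tensor:

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

wte = model.transformer.wte.weight   # input embedding matrix
lm_head = model.lm_head.weight       # output projection matrix

print(wte.shape)                             # torch.Size([50257, 768])
print(wte.data_ptr() == lm_head.data_ptr())  # True: the weights are tied
```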

Since the two layers share the same weight parameter $w$, a question arises: which array among the gradients computed by PyTorch is the gradient of $w$? There is actually no definitive answer.

In some cases it is the first array of the gradients, while in other cases it is the last. This depends on the version of PyTorch or the transformers library in use. I observed both cases in my experiments on different platforms, but I can't recall the precise condition.
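A quick way to see which position $w$'s gradient ends up in on your platform is to print the gradient shapes. A minimal sketch, assuming a standard GPT2LMHeadModel and a dummy language-modeling loss:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

batch = tokenizer("a short probe sentence", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
grads = torch.autograd.grad(loss, list(model.parameters()))

# Whether the [50257, 768] array appears at index 0 or at the end
# depends on the parameter ordering of the library versions in use.
print(grads[0].shape, grads[-1].shape)
```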

This matters because in one case $w$ would be assigned the smallest weight in TAG, while in the other case it would be the other way around. In TAG, I believe the intent is consistent: $w$ should receive the largest weight.
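To illustrate the intuition, here is a hedged sketch of the weighting idea (not TAG's exact formula): the position of each gradient array in the list determines its coefficient in the reconstruction loss.

```python
def weighted_grad_distance(dummy_grads, target_grads):
    """Toy layer-weighted gradient distance: later arrays weigh more."""
    n = len(target_grads)
    total = 0.0
    for i, (dg, tg) in enumerate(zip(dummy_grads, target_grads)):
        layer_weight = (i + 1) / n  # grows toward the output layers
        total = total + layer_weight * ((dg - tg) ** 2).sum()
    return total
```

Under such a scheme, $w$'s gradient gets the smallest coefficient if it sits at index 0 and the largest if it sits at the end, which is exactly the discrepancy the check guards against.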

Thus, we need to make sure that the gradient of $w$ is indeed the last array in the computed gradients. That is why I have the mentioned check and conditional branch. I have also left some hints in the comments around this check:

`# if the first layer is transformer.wte.weight, it should`

Back to your problem: if the condition is not met, that is fine. It means the order of your computed gradients is already the expected one, so the code guarded by this check is simply skipped.
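In other words, the logic is roughly the following (a sketch of the idea; the exact code is in attack.py, and the helper name here is hypothetical):

```python
import torch

def normalize_grad_order(target_gradient, vocab_size=50257, hidden_size=768):
    """Ensure the tied embedding gradient is the last array in the list."""
    if target_gradient[0].shape == torch.Size([vocab_size, hidden_size]):
        # The first array is the gradient of transformer.wte.weight;
        # move it to the back so it receives the largest TAG weight.
        target_gradient = list(target_gradient[1:]) + [target_gradient[0]]
    # Otherwise the order is already the expected one; nothing to do.
    return target_gradient
```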
