Example for LLMGradientAttribution is missing. #1237
Comments
@saxenarohit thanks for reminding us. We will add it soon.
Hi aobo, when you have a moment, could you tell me which model layer I need to pass as a parameter to LayerIntegratedGradients?
hi @Dongximing, it should be the embedding layer of your model. Since a token is discrete, its backpropagated gradient stops at its embedding. For Llama 2, it would be something like the following:

```python
emb_layer = model.get_submodule("model.embed_tokens")
lig = LayerIntegratedGradients(model, emb_layer)
```
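For background on what LayerIntegratedGradients computes at that embedding layer, the integrated-gradients formula itself can be illustrated on a toy differentiable function. The function `F` and everything below are made up for illustration and do not use Captum:

```python
# Toy illustration of integrated gradients (the algorithm behind
# LayerIntegratedGradients). All names here are illustrative.

def F(x1, x2):
    # A simple differentiable function standing in for a model output.
    return x1 * x2 + x1 ** 2

def grad_F(x1, x2):
    # Analytic partial derivatives of F.
    return (x2 + 2 * x1, x1)

def integrated_gradients(x, baseline, steps=1000):
    # IG_i = (x_i - x'_i) * integral over alpha in [0, 1] of
    #        dF/dx_i evaluated at x' + alpha * (x - x'),
    # approximated here with a midpoint Riemann sum.
    ig = [0.0, 0.0]
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_F(*point)
        for i in range(2):
            ig[i] += (x[i] - baseline[i]) * g[i] / steps
    return ig

x, baseline = [2.0, 3.0], [0.0, 0.0]
attr = integrated_gradients(x, baseline)
# Completeness axiom: the attributions sum to F(x) - F(baseline).
print(attr, sum(attr), F(*x) - F(*baseline))
```

The completeness check at the end is the main sanity test: per-input attributions account for the whole change in the output relative to the baseline, which is what Captum does per token at the embedding layer.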
Thanks, I saw the result and analyzed the code; the final attributions are computed on log_softmax. Does that mean that, for contributions like -10, 20, -20, token_1 and token_2 are both important? Or do we need abs() to evaluate the importance of tokens?
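On the sign question: in gradient-based attribution the sign carries direction (positive scores push the target token's log-probability up, negative scores push it down), while the magnitude carries strength. So ranking by |attribution| is a reasonable way to find "important" tokens, and the sign is then read separately as supporting vs. opposing. A tiny sketch using the hypothetical scores from the comment above:

```python
# Hypothetical per-token attribution scores (illustrative only,
# taken from the -10, 20, -20 example in the comment above).
scores = {"token_1": -10.0, "token_2": 20.0, "token_3": -20.0}

# Rank by magnitude: strongly positive and strongly negative tokens
# are both influential; the sign tells you the direction of influence.
ranked = sorted(scores, key=lambda t: abs(scores[t]), reverse=True)
print(ranked)  # token_2 and token_3 outrank token_1 by |score|
```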
📚 Documentation
This is in reference to the tutorial page below.
https://captum.ai/tutorials/Llama2_LLM_Attribution
I could not find the example for LLMGradientAttribution for LLAMA2.
Any help on this will be appreciated.
Thanks
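Since the thread never shows the full wiring, here is a sketch of how LLMGradientAttribution might be combined with LayerIntegratedGradients for a Llama 2 model, assembled from the comments above. It assumes the Captum 0.7 LLM-attribution API; the prompt, target string, and `skip_tokens` value are placeholders, and a loaded `model` and `tokenizer` are assumed to exist. This is not runnable as-is and is not the official tutorial code; check the tutorial page for the authoritative version.

```python
# Sketch only: assumes a loaded Llama 2 `model` and `tokenizer`
# (e.g. via Hugging Face transformers) and Captum >= 0.7.
from captum.attr import (
    LLMGradientAttribution,
    LayerIntegratedGradients,
    TextTokenInput,
)

# As noted in the comments, gradients for discrete tokens stop
# at the embedding layer, so attribute with respect to it.
emb_layer = model.get_submodule("model.embed_tokens")
lig = LayerIntegratedGradients(model, emb_layer)

# Wrap the layer-attribution method for text-in / text-out use.
llm_attr = LLMGradientAttribution(lig, tokenizer)

inp = TextTokenInput(
    "The capital of France is",  # placeholder prompt
    tokenizer,
    skip_tokens=[1],  # placeholder: skip the BOS token id for your tokenizer
)
attr_res = llm_attr.attribute(inp, target="Paris")  # placeholder target
attr_res.plot_token_attr(show=True)
```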