Classification words importance #17
Comments
Good idea, and thanks for pointing me to that code. I've been experimenting with gradient-based sensitivity analysis on the attention weights (rather than the inputs), but the results weren't particularly interpretable. I may revisit this with some of the newer fine-tuned models. I agree that this would be a useful analysis for the inputs, though it may overload the visualization. I'll think more about this. Thanks.
I've been wondering how to do this as well, since I want to visualize which words were most important to the classification. One idea I've had (if I have 12 encoders in BERT and am only fine-tuning the 12th layer) is to take the output of the 11th layer together with the Wq, Wk, and Wv weights of the fine-tuned 12th layer and calculate the attention scores manually (see the sketch below). Would that be the correct way to think about it? I would essentially get the value score for each token in the input.
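For concreteness, here is a minimal sketch of that idea, assuming the Hugging Face `BertModel` layout; the layer indexing, the attribute names like `encoder.layer[11].attention.self`, and the single-head simplification are all assumptions, not something confirmed in this thread:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

inputs = tokenizer("an example sentence", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states[11] is the input to the 12th (last) encoder layer.
layer_input = outputs.hidden_states[11]           # (1, seq_len, 768)
attn = model.encoder.layer[11].attention.self     # 12th layer's self-attention

q = attn.query(layer_input)                       # X @ Wq + bq
k = attn.key(layer_input)                         # X @ Wk + bk
v = attn.value(layer_input)                       # X @ Wv + bv

# Single-head simplification: the real module splits q/k/v into 12 heads
# and scales by sqrt(64) per head rather than sqrt(768).
scores = torch.softmax(q @ k.transpose(-1, -2) / q.size(-1) ** 0.5, dim=-1)
context = scores @ v                              # value-weighted output per token
```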
Hi @jessevig, thanks for your great work. As @lspataro and @Azharo mentioned, I want to ask about some more details. Consider the two examples below, one for text classification and one for NLI [screenshots omitted]. We want to get results like these, though perhaps not as complex a visualization as @jessevig showed; I think this work may need to be done in the last layer of BERT. Furthermore, I would like to see the process.
Hi there, is anyone working on this topic? I am looking for a way to identify the most important words in a sentence classification task as well.
+1
+1
same question here :)
Hi, I implemented some gradient-based algorithms; you can check them out here: https://github.com/koren-v/Interpret
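For reference, the core of a simple gradient-based importance score is short. This is a generic saliency sketch (gradient norm over the input embeddings) assuming a Hugging Face `BertForSequenceClassification` model; it is not the linked library's API:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("an example sentence", return_tensors="pt")

# Run the model on explicit embeddings so we can take gradients w.r.t. them.
embeddings = model.bert.embeddings.word_embeddings(inputs["input_ids"])
embeddings.retain_grad()
logits = model(inputs_embeds=embeddings,
               attention_mask=inputs["attention_mask"]).logits

# Backpropagate from the predicted class's logit.
logits[0, logits[0].argmax()].backward()

# One saliency score per token: the L2 norm of the gradient.
scores = embeddings.grad.norm(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, scores.tolist()):
    print(f"{token}\t{score:.4f}")
```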
I can definitely recommend Captum as well; they have an example using BERT: https://captum.ai/tutorials/Bert_SQUAD_Interpret
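That tutorial targets question answering; for a classification head the same layer-attribution pattern looks roughly like this. A sketch, assuming `BertForSequenceClassification` and an all-pad-token baseline, which are my simplifications rather than the tutorial's exact setup:

```python
import torch
from captum.attr import LayerIntegratedGradients
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

def forward_func(input_ids, attention_mask):
    return model(input_ids, attention_mask=attention_mask).logits

inputs = tokenizer("an example sentence", return_tensors="pt")
# Baseline: all pad tokens (the tutorial keeps [CLS]/[SEP] in place,
# which is the more careful choice).
baseline = torch.full_like(inputs["input_ids"], tokenizer.pad_token_id)

lig = LayerIntegratedGradients(forward_func, model.bert.embeddings)
attributions = lig.attribute(inputs["input_ids"],
                             baselines=baseline,
                             additional_forward_args=(inputs["attention_mask"],),
                             target=1)  # index of the class to explain

token_scores = attributions.sum(dim=-1).squeeze(0)  # one score per token
```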
+1
This package does word importance directly with Hugging Face Transformers, using Captum: https://github.com/cdpierse/transformers-interpret
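For anyone landing here, usage is just a few lines, based on the linked repository's README at the time of writing; the SST-2 model name below is only an example checkpoint:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # example model
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("I love this movie")  # list of (token, score) pairs
explainer.visualize()  # renders an HTML heatmap of the attributions
```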
Closing this issue as I feel it's out of scope for BertViz given the other available libraries. |
Yeah, same question.
I tried to use the code from this link, https://github.com/cdpierse/transformers-interpret, for a multi-class classification task with my pretrained BERT model. I trained the model and saved it using model.save(); then, when I tried to load the model from that path, I got this error:
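Without the error text it's hard to say, but transformers-interpret expects a model that loads through the Transformers API. A hedged sketch of the usual save/load pattern, in case the Keras-style model.save() is the issue; the path and num_labels below are placeholders:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hedged sketch, not a diagnosis of the unposted error: persist with
# save_pretrained/from_pretrained rather than a Keras-style model.save().
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # num_labels=3 is a placeholder
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# ... fine-tune the model here ...

model.save_pretrained("my-multiclass-model")    # placeholder path
tokenizer.save_pretrained("my-multiclass-model")

# Reload later, e.g. for transformers-interpret:
model = AutoModelForSequenceClassification.from_pretrained("my-multiclass-model")
tokenizer = AutoTokenizer.from_pretrained("my-multiclass-model")
```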
Is there any way to use BertViz to visualize the importance of the different words with respect to a given prediction in a classification task (BertClassifier)?
Similar to this: https://docs.fast.ai/text.interpret.html#interpret
Thank you