
How to programmatically extract attribution scores per token? #160

Closed · 1 task done
MoritzLaurer opened this issue Jan 14, 2023 · 5 comments · Fixed by #157

Labels: question (Further information is requested)

@MoritzLaurer

Checklist

  • I've searched the project's issues.

❓ Question

How do I programmatically extract the per-token scores to have them in a list or dictionary, mapped to each token?

I understand how to show the scores per token visually, but I don't know how to extract them from the "out" object for further downstream processing.

import inseq

model = inseq.load_model("google/flan-t5-base", attribution_method="discretized_integrated_gradients")
out = model.attribute(
    input_texts=["We were attacked by hackers. Was there a cyber attack?", "We were not attacked by hackers. Was there a cyber attack?"],
)
out.sequence_attributions[0]
out.show()
MoritzLaurer added the question label on Jan 14, 2023
@gsarti (Member) commented Jan 14, 2023

Hi @MoritzLaurer, thank you for your interest! This part is still quite undocumented, but we hope to add more details in the docs soon!

At the end of the Getting started section in the docs we show an example of the attribution output, which I reproduce here:

>>> print(out)
FeatureAttributionOutput({
    sequence_attributions: list with 1 elements of type GradientFeatureAttributionSequenceOutput: [
        GradientFeatureAttributionSequenceOutput({
            source: list with 13 elements of type TokenWithId:[
                '▁Hello', '▁world', ',', '▁here', '\'', 's', '▁the', '▁In', 'se', 'q', '▁library', '!', '</s>'
            ],
            target: list with 12 elements of type TokenWithId:[
                '▁Bonjour', '▁le', '▁monde', ',', '▁voici', '▁la', '▁bibliothèque', '▁Ins', 'e', 'q', '!', '</s>'
            ],
            source_attributions: torch.float32 tensor of shape [13, 12, 512] on CPU,
            ...
        })
    ],
    step_attributions: None,
    info: {
        ...
    }
})

As you can see, the source sequence contains 13 tokens and the target contains 12, while the attribution computed with a gradient-based method is a 3D tensor of shape [src_len, tgt_len, hidden_size]. When you call out.show() to visualize the attribution output, the out.aggregate() method is called before the scores are visualized; it in turn uses the default Aggregator associated with the output class (the out._aggregator property).

For gradient methods, the default aggregator is a SequenceAttributionAggregator that squeezes the last hidden_size dimension to return the 2D tensor that is finally passed on for visualization (⚠️ Note: the default squeezing behaviour will change with #157 to become the L2 norm of the per-token vector, which was shown to be better in terms of faithfulness by Bastings et al. (2022), inter alia).
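
Concretely, the squeeze amounts to something like this (illustrative tensor values, not the library internals):

import torch

# Illustrative raw gradient attributions for the example above: [src_len, tgt_len, hidden_size]
raw_attributions = torch.randn(13, 12, 512)

# Collapse the hidden_size dimension per (source, target) token pair.
# With the L2 norm squeeze planned in #157, this yields the 2D [src_len, tgt_len]
# matrix that out.show() ends up visualizing.
token_scores = raw_attributions.norm(p=2, dim=-1)  # shape [13, 12]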

To obtain the same output and pair it with the tokens, assuming a gradient method that returns a 3D tensor, you could do something like:

import inseq

model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "saliency")

# Produces a FeatureAttributionOutput containing 1 GradientFeatureAttributionSequenceOutput
out = model.attribute(<YOUR_INPUT>)

# The source and, if present, target attributions have shapes of [src_len, tgt_len] and [tgt_len, tgt_len] 
# respectively after this step
aggregated_attribution = out.sequence_attributions[0].aggregate()

# Creating a mapping of [src_token, tgt_token] -> attribution score
score_map = {}
for src_idx, src_tok in enumerate(aggregated_attribution.source):
    for tgt_idx, tgt_tok in enumerate(aggregated_attribution.target):
        score_map[(src_tok.token, tgt_tok.token)] = aggregated_attribution.source_attributions[src_idx, tgt_idx].item()

print(score_map)
{('▁Hello', '▁Bonjour'): 0.8095492720603943,
 ('▁Hello', '▁le'): 0.5914772152900696,
 ('▁Hello', '▁monde'): 0.655048131942749,
 ('▁Hello', ','): 0.6247086524963379,
 ('▁Hello', '▁voici'): 0.7142019271850586,
 ('▁Hello', '▁la'): 0.623748779296875,
 ('▁Hello', '▁bibliothèque'): 0.3409218192100525,
 ('▁Hello', '▁Ins'): 0.28728920221328735,
 ('▁Hello', 'e'): 0.18802204728126526,
 ('▁Hello', 'q'): 0.13516321778297424,
 ('▁Hello', '!'): 0.792391300201416,
 ('▁Hello', '</s>'): 0.7535314559936523,
 ('▁world', '▁Bonjour'): 0.39373481273651123,
 ('▁world', '▁le'): 0.3593481779098511,
...

Hope this helps! I'd be curious to hear any ideas you might have on what a better API for accessing such scores could look like!

@MoritzLaurer (Author)

Great, that works, thanks! (Intuitively I would probably enable people to get this back as a pandas DataFrame for downstream analysis, but that would probably add another dependency.)

@gsarti (Member) commented Jan 16, 2023

I am not sure we want pandas as a dependency, since this would be its only use case. Would a list of dicts in record format also work in your opinion? Every dict would have src_token_x as key and, as value, a dict of tgt_token_x: src_x_to_tgt_x saliency scores. The user could then feed this to pd.DataFrame() to produce a dataframe matching the format of the original attribution tensor.
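
Roughly something like this, reusing the aggregated_attribution object from my earlier snippet (just a sketch of the idea, not a final API):

# One dict per source token: {src_token: {tgt_token: saliency score}}
records = [
    {
        src_tok.token: {
            tgt_tok.token: aggregated_attribution.source_attributions[src_idx, tgt_idx].item()
            for tgt_idx, tgt_tok in enumerate(aggregated_attribution.target)
        }
    }
    for src_idx, src_tok in enumerate(aggregated_attribution.source)
]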

@MoritzLaurer (Author)

Yeah, I think that makes sense: a format that enables easy transformation to a DataFrame, e.g. with df = pd.DataFrame(output_dic), but without the new dependency.
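
Something like this would already cover my use case, assuming the records sketch from the previous comment (pandas stays a user-side import):

import pandas as pd

# Merge the per-source-token dicts and build a DataFrame:
# columns are source tokens, rows are target tokens, cells are saliency scores.
merged = {src_tok: tgt_scores for record in records for src_tok, tgt_scores in record.items()}
df = pd.DataFrame(merged)

(One caveat: keying by the token string means repeated tokens within a sequence would overwrite each other, so token positions may need to be part of the key.)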

gsarti mentioned this issue and linked a pull request that will close it, Jan 18, 2023
@gsarti (Member) commented Jan 18, 2023

Extracting scores and converting them to pandas format will be made easier by get_scores_dicts, introduced in #157.
