# attention

> Visualizing attention weights is the easiest and most popular approach to interpret a model's decisions and to gain insights about its internals. Although it is wrong to equate attention with explanation, it can offer plausible and meaningful interpretations. (Abnar and Zuidema, 2020)

Attention is the key mechanism of the transformer architecture that powers GPT and other LLMs. This project exposes the attention weights of an LLM run, aggregated into a matrix.
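As a sketch of what "aggregated into a matrix" can mean (this is an illustration, not necessarily this project's exact aggregation): an LLM returns one attention tensor per layer, and a simple way to collapse them is to average over layers and heads. The shapes below mirror what Hugging Face models return with `output_attentions=True`, using random stand-in tensors so the example is self-contained.

```python
import torch

# Hypothetical stand-ins for the per-layer attention tensors an LLM
# returns (e.g. Hugging Face models with output_attentions=True yield
# one (batch, heads, seq, seq) tensor per layer).
layers, batch, heads, seq = 12, 1, 8, 6
attentions = [
    torch.softmax(torch.rand(batch, heads, seq, seq), dim=-1)
    for _ in range(layers)
]

# One simple aggregation: mean over layers and heads, yielding a single
# seq x seq matrix. (Other schemes exist, e.g. attention rollout as in
# Abnar and Zuidema, 2020.)
aggregated = torch.stack(attentions).mean(dim=(0, 2))[0]
print(aggregated.shape)  # torch.Size([6, 6])
```

Each row of the aggregated matrix is still a probability distribution over tokens, since an average of row-stochastic matrices is row-stochastic.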

Here's an example of what the matrix output of this project will look like:

*(figure: example attention matrix)*

Why model attention as a matrix? Given an attention matrix `m`, you can represent a range of text as a focus vector `f` and compute `torch.matmul(f, m)` to get the attention vector for that range.
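A minimal sketch of that multiplication, assuming a row-normalized `seq x seq` attention matrix and a focus vector that spreads unit weight over the selected token range (the specific shapes and weighting are illustrative, not taken from this project's code):

```python
import torch

# Hypothetical aggregated attention matrix for a 5-token sequence,
# with rows normalized to sum to 1.
seq_len = 5
m = torch.rand(seq_len, seq_len)
m = m / m.sum(dim=-1, keepdim=True)

# Focus vector selecting tokens 1-2, with total weight 1 over the range.
f = torch.zeros(seq_len)
f[1:3] = 0.5

# Attention vector for the focused range: one weight per token,
# showing where the selected tokens attend.
attn = torch.matmul(f, m)
print(attn.shape)  # torch.Size([5])
```

Because `f` sums to 1 and each row of `m` sums to 1, the resulting attention vector is itself a probability distribution over the tokens.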

When you run the Flask app, you get an interactive demo that visualizes the attention weights for any text you select:

*(figure: attention highlighting in the interactive demo)*

## How to Run

```shell
$ poetry install
$ poetry run flask --app attention run
```

Once it's running, you can access the demo at http://127.0.0.1:5000/static/index.html.
