MarkusWenzel/xai-proteins

Insights into the inner workings of transformer models for protein function prediction

About

Finetuning pretrained universal protein language models on downstream tasks provides large benefits in protein function prediction. At the same time, these neural networks are notorious for having millions and sometimes billions of trainable parameters, which makes it very difficult to interpret the decision-making logic or strategy of such complex models.

Consequently, explainable machine learning is starting to gain traction in the field of proteomics as well. We are exploring how explainability methods can help to shed light on the inner workings of transformers for protein function prediction.

Attribution methods, such as integrated gradients, make it possible to identify the features in the input space that the model focuses on, because these features turn out to be relevant for its final classification decision. We extended integrated gradients so that latent representations inside the transformer can be inspected as well (separately for each head and layer).
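For illustration, here is a minimal sketch of this kind of layer-level attribution, using Captum's LayerIntegratedGradients with a BERT-style protein language model from HuggingFace. The model name, layer choice, target class and example sequence are placeholders, and the snippet is not the repository's actual attribution code, which additionally resolves attributions per attention head.

```python
# Minimal sketch of layer-wise integrated gradients on a protein language model.
# Assumptions: a BERT-style HuggingFace model and Captum; the repository's actual
# models, attribution code, and per-head analysis differ in detail.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from captum.attr import LayerIntegratedGradients

model_name = "Rostlab/prot_bert"  # placeholder; the classification head is untrained here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.eval()

def forward_logits(input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

# ProtBert-style tokenizers expect amino acids separated by spaces.
sequence = " ".join("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
enc = tokenizer(sequence, return_tensors="pt")

# Attribute the class-1 logit to the output of the first encoder layer.
# The paper inspects latent representations separately per head and layer;
# this sketch shows a single layer only.
layer = model.bert.encoder.layer[0].output
lig = LayerIntegratedGradients(forward_logits, layer)
attributions = lig.attribute(
    inputs=enc["input_ids"],
    additional_forward_args=(enc["attention_mask"],),
    target=1,
    n_steps=25,
)

# One relevance score per token: sum attributions over the hidden dimension.
token_relevance = attributions.sum(dim=-1).squeeze(0)
print(token_relevance.shape)  # (sequence length incl. special tokens,)
```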

To find out whether the identified relevant sequence regions match expectations informed by biological or chemical knowledge, we combined this method with a subsequent statistical analysis across proteins, in which we correlated the obtained relevance with annotations of interest from sequence databases. In this way, we identified heads inside the transformer architecture that are specialized for specific protein function prediction tasks.
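As a hedged sketch of one way such a relevance-vs-annotation comparison could look (the paper's exact statistical procedure may differ), per-residue relevance scores can be pooled across proteins and compared between annotated and non-annotated residues:

```python
# Sketch: relate per-residue relevance scores to binary sequence annotations
# (e.g. active sites or binding regions) across a set of proteins.
# relevance[i] and annotation[i] are assumed to be 1D arrays of equal length
# for protein i; this is illustrative, not the paper's exact statistical test.
import numpy as np
from scipy.stats import mannwhitneyu

def relevance_vs_annotation(relevance, annotation):
    """Test whether relevance is higher inside annotated residues than outside."""
    inside = np.concatenate([r[a == 1] for r, a in zip(relevance, annotation)])
    outside = np.concatenate([r[a == 0] for r, a in zip(relevance, annotation)])
    stat, p_value = mannwhitneyu(inside, outside, alternative="greater")
    return inside.mean() - outside.mean(), p_value

# Toy example with two hypothetical "proteins" and made-up relevance scores.
rng = np.random.default_rng(0)
relevance = [rng.normal(size=120), rng.normal(size=80)]
annotation = [np.zeros(120, dtype=int), np.zeros(80, dtype=int)]
annotation[0][10:25] = 1  # hypothetical annotated region in protein 0
annotation[1][30:40] = 1  # hypothetical annotated region in protein 1

effect, p = relevance_vs_annotation(relevance, annotation)
print(f"mean relevance difference: {effect:.3f}, p-value: {p:.3g}")
```

Heads and layers for which such a comparison is consistently significant across proteins would then be candidates for being specialized on the annotation of interest.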

The two folders of this repository are dedicated to the explainability analyses for Gene Ontology (GO) term and Enzyme Commission (EC) number prediction, respectively (see the GO and EC README files).

Publication

You can find more information in our article:

Markus Wenzel, Erik Grüner, Nils Strodthoff (2024). Insights into the inner workings of transformer models for protein function prediction, Bioinformatics, btae031.

@article{10.1093/bioinformatics/btae031,
  author  = {Wenzel, Markus and Grüner, Erik and Strodthoff, Nils},
  title   = "{Insights into the inner workings of transformer models for protein function prediction}",
  journal = {Bioinformatics},
  pages   = {btae031},
  year    = {2024},
  month   = {01},
  issn    = {1367-4811},
  doi     = {10.1093/bioinformatics/btae031},
  url     = {https://doi.org/10.1093/bioinformatics/btae031}
}

Related works

If you are interested in this topic, you are welcome to have a look at our related papers:

Datasets

EC and GO data were preprocessed as detailed at https://github.com/nstrodt/UDSMProt using https://github.com/nstrodt/UDSMProt/blob/master/code/create_datasets.sh, resulting in six files for EC40 and EC50 at levels L0, L1, and L2, and in two files for GO "2016" (a.k.a. "temporalsplit") and GO "CAFA3". The preprocessed data can be accessed here (EC) and here (GO).

Authors

Markus Wenzel, Erik Grüner, Nils Strodthoff (2024)
