LAYA β€” Layer-wise Attention Aggregator

Open In Colab

Minimal reference implementation of LAYA, an interpretable output head that assigns input-conditioned attention weights to hidden layers. This example trains a simple MLP on Fashion-MNIST and visualizes global and class-wise attention profiles.

πŸš€ Usage

Click the badge above or open: LAYA.ipynb

The notebook:

  1. trains LAYA on Fashion-MNIST,
  2. evaluates accuracy and macro-F1,
  3. extracts layer-wise attention weights,
  4. plots global and class-wise attention patterns.
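Step 2 above reports macro-F1, the unweighted mean of per-class F1 scores. A minimal NumPy sketch of that metric (illustrative only; the notebook itself may use a library implementation such as scikit-learn's):

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1s.append(f1)
    return float(np.mean(f1s))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(round(macro_f1(y_true, y_pred, 3), 4))  # → 0.6556
```

Macro averaging treats all ten Fashion-MNIST classes equally, so it surfaces weak classes that plain accuracy can hide.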

🧠 What is LAYA?

LAYA aggregates all hidden representations $h_i$ using input-conditioned attention scores $\alpha_i(x)$, forming $z = \sum_i \alpha_i(x)\, h_i$ and producing:

  • depth-aware predictions,
  • intrinsic, per-sample interpretability without post-hoc methods.
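The aggregation above can be sketched in a few lines of NumPy. This is a hand-rolled illustration, not the repository's code: the scorer `w_score`, the shapes, and the linear head `w_out` are all hypothetical stand-ins for whatever the actual model learns.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical shapes: L hidden layers, hidden size D, batch B, C classes.
L, D, B, C = 3, 8, 4, 10
hidden = [rng.normal(size=(B, D)) for _ in range(L)]  # h_i, one per layer

# Input-conditioned scores: a (hypothetical) learned scorer maps each h_i
# to a scalar logit, so the attention depends on the sample x.
w_score = rng.normal(size=(D,))
logits = np.stack([h @ w_score for h in hidden], axis=1)  # (B, L)
alpha = softmax(logits)                                   # alpha_i(x), per-sample

# Weighted aggregation: z = sum_i alpha_i(x) * h_i
H = np.stack(hidden, axis=1)               # (B, L, D)
z = (alpha[:, :, None] * H).sum(axis=1)    # (B, D)

# A linear classification head on the aggregated representation.
w_out = rng.normal(size=(D, C))
pred = z @ w_out                            # (B, C) class scores

print(alpha.sum(axis=1))  # each sample's layer weights sum to 1
```

Because `alpha` is produced per sample, inspecting it directly shows which depths the model relied on for each input, which is what the notebook's global and class-wise attention plots visualize.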
