Implementation of PatchSAE as presented in "Sparse autoencoders reveal selective remapping of visual concepts during adaptation"

dynamical-inference/patchsae

PatchSAE: Sparse Autoencoders Reveal Selective Remapping of Visual Concepts During Adaptation

Website & Demo | Paper (OpenReview) | Hugging Face Demo

🛠 Getting Started

Set up your environment with these simple steps:

# Create and activate environment
conda create --name patchsae python=3.12
conda activate patchsae

# Install dependencies
pip install -r requirements.txt

# Always set PYTHONPATH before running any scripts
cd patchsae
PYTHONPATH=./ python src/demo/app.py
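The `PYTHONPATH=./` prefix puts the repo root on Python's module search path, so scripts can import modules relative to the repo root (e.g. `src.demo`). A minimal illustration of the equivalent in-code effect (a sketch, not part of the repo):

```python
import os
import sys

# PYTHONPATH=./ prepends the current directory to Python's module search
# path; the same effect can be achieved from inside a script:
repo_root = os.path.abspath(".")  # run from the patchsae/ directory
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

# Imports such as `from src.demo import ...` now resolve from the repo root.
print(repo_root in sys.path)  # → True
```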

🎮 Interactive Demo

Online demo on Hugging Face 🤗: Website & Demo

Explore our pre-computed images and SAE latents without any installation!

💡 The demo may experience slowdowns due to network constraints. For optimal performance, consider disabling your VPN if you encounter any delays.

Demo interface

Local Demo: Try Your Own Images

Want to experiment with your own images? Follow these steps:

1. Setup Local Demo

First, download the necessary files using gdown:

# Activate environment first (see Getting Started)

# Download necessary files (35MB + 513MB)
gdown --id 1NJzF8PriKz_mopBY4l8_44R0FVi2uw2g  # out.zip
gdown --id 1reuDjXsiMkntf1JJPLC5a3CcWuJ6Ji3Z  # data.zip

# Extract files
unzip data.zip
unzip out.zip

💡 Need gdown? Install it with: conda install conda-forge::gdown

Your folder structure should look like:

patchsae/
├── configs/
├── data/      # From data.zip
├── out/       # From out.zip
├── src/
│   └── demo/
│       └── app.py
├── tasks/
├── requirements.txt
└── ... (other files)
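To confirm the extraction worked, a small sanity check over the tree above (a sketch; the expected paths are taken from the structure shown):

```python
from pathlib import Path

# Expected entries, taken from the folder tree above.
EXPECTED = ["configs", "data", "out", "src/demo/app.py", "tasks", "requirements.txt"]

def missing_entries(root="."):
    """Return the expected paths that are absent under `root`."""
    return [p for p in EXPECTED if not (Path(root) / p).exists()]

# From the patchsae/ directory, an empty list means the layout is complete:
# print(missing_entries())
```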

2. Launch the Demo

PYTHONPATH=./ python src/demo/app.py

โš ๏ธ Note:

  • First run will download datasets from HuggingFace automatically (About 30GB in total)
  • Demo runs on CPU by default
  • Access the interface at http://127.0.0.1:7860 (or the URL shown in terminal)
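Once the server is running, a quick reachability check can confirm the interface is up (a hypothetical helper, not part of the repo; the default URL matches the note above):

```python
from urllib.error import URLError
from urllib.request import urlopen

def demo_is_up(url="http://127.0.0.1:7860", timeout=5):
    """Return True if the local demo answers with HTTP 200, else False."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```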

📊 PatchSAE Training and Analysis

๐Ÿ“ Status Updates

  • Jan 13, 2025: Training & analysis code work properly. Known issue: a minor error in per-class data loading when using ImageNet.
  • Jan 09, 2025: Analysis code works. Updated training with evaluation during training, fixed optimizer bug.
  • Jan 07, 2025: Added analysis code. Reproducibility tests completed (trained on ImageNet, tested on Oxford-Flowers).
  • Jan 06, 2025: Training code updated. Reproducibility testing in progress.
  • Jan 02, 2025: Training code incomplete in this version. Updates coming soon.

📜 License & Credits

Reference Implementations

License Notice

Our code is distributed under the MIT license; see the LICENSE file for details. The NOTICE file lists the licenses for all third-party code included in this repository. Please include the contents of both the LICENSE and NOTICE files in all redistributions of this code.


Citation

If you find our code or models useful in your work, please cite our paper:

@inproceedings{
  lim2025sparse,
  title={Sparse autoencoders reveal selective remapping of visual concepts during adaptation},
  author={Hyesu Lim and Jinho Choi and Jaegul Choo and Steffen Schneider},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=imT03YXlG2}
}