Set up your environment with these simple steps:
```bash
# Create and activate the environment
conda create --name patchsae python=3.12
conda activate patchsae

# Install dependencies
pip install -r requirements.txt

# Always set PYTHONPATH before running any scripts
cd patchsae
PYTHONPATH=./ python src/demo/app.py
```
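Why `PYTHONPATH=./` matters: the scripts appear to use absolute imports rooted at the repository (e.g. `import src...`), which only resolve if the repo root is on Python's module search path. A minimal sketch of what the variable changes (nothing project-specific is assumed beyond the folder layout shown below):

```python
# Illustration of why PYTHONPATH=./ is needed: when running
# src/demo/app.py directly, Python adds src/demo/ (the script's own
# folder) to sys.path, not the repo root, so absolute imports such as
# `from src.demo import ...` fail unless the root is on the path.
import sys

for p in sys.path:
    print(p or "(cwd)")
# With PYTHONPATH=./ set from the repo root, the root appears in this
# list and `import src` succeeds.
```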
Explore our pre-computed images and SAE latents without any installation!
💡 The demo may slow down due to network constraints. If you hit delays, try disabling your VPN.
Want to experiment with your own images? Follow these steps:
First, download the necessary files. You can download them using `gdown` as follows:
```bash
# Activate the environment first (see Getting Started)

# Download the necessary files (35 MB + 513 MB)
gdown --id 1NJzF8PriKz_mopBY4l8_44R0FVi2uw2g  # out.zip
gdown --id 1reuDjXsiMkntf1JJPLC5a3CcWuJ6Ji3Z  # data.zip

# Extract the files
unzip data.zip
unzip out.zip
```
💡 Need `gdown`? Install it with: `conda install conda-forge::gdown`
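If you prefer to stay in Python, the same download-and-extract step can be done with `gdown`'s Python API and the standard library. A minimal sketch using the file IDs above:

```python
# Download out.zip and data.zip by their Google Drive IDs, then extract
# both into the current directory (run from the patchsae/ repo root).
import zipfile

import gdown

FILES = {
    "out.zip": "1NJzF8PriKz_mopBY4l8_44R0FVi2uw2g",
    "data.zip": "1reuDjXsiMkntf1JJPLC5a3CcWuJ6Ji3Z",
}

for name, file_id in FILES.items():
    gdown.download(id=file_id, output=name)
    with zipfile.ZipFile(name) as zf:
        zf.extractall(".")
```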
Your folder structure should look like:
```
patchsae/
├── configs/
├── data/              # From data.zip
├── out/               # From out.zip
├── src/
│   └── demo/
│       └── app.py
├── tasks/
├── requirements.txt
└── ... (other files)
```
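Before launching the demo, you can sanity-check that the archives were extracted into the expected places. A small sketch (the path list mirrors the tree above):

```python
# Verify the expected layout from the tree above; run from patchsae/.
from pathlib import Path

expected = ["configs", "data", "out", "src/demo/app.py", "tasks", "requirements.txt"]
missing = [p for p in expected if not Path(p).exists()]
if missing:
    raise SystemExit(f"Missing after unzip: {missing}")
print("Folder structure looks correct.")
```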
Then run the demo:

```bash
PYTHONPATH=./ python src/demo/app.py
```
- On first run, the demo automatically downloads datasets from HuggingFace (about 30 GB in total)
- The demo runs on CPU by default
- Access the interface at http://127.0.0.1:7860 (or the URL shown in the terminal)
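Since the first run pulls roughly 30 GB of datasets, it may help to redirect the HuggingFace cache to a disk with enough space before launching. A short sketch: `HF_HOME` is the standard HuggingFace cache-location variable; the path below is a placeholder:

```python
# Point the HuggingFace cache at a roomy disk *before* the first run
# (must be set before any HuggingFace library is imported).
# Shell equivalent: export HF_HOME=/mnt/big_disk/hf_cache
import os

os.environ["HF_HOME"] = "/mnt/big_disk/hf_cache"  # hypothetical path
```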
- Training Instructions: See tasks/README.md
- Analysis Notebooks:
- Jan 13, 2025: Training & analysis code work properly. Known issue: minor error when loading data by class with ImageNet.
- Jan 09, 2025: Analysis code works. Training updated with evaluation during training; fixed an optimizer bug.
- Jan 07, 2025: Added analysis code. Reproducibility tests completed (trained on ImageNet, tested on Oxford-Flowers).
- Jan 06, 2025: Training code updated. Reproducibility testing in progress.
- Jan 02, 2025: Training code incomplete in this version. Updates coming soon.
- SAE for ViT
- SAELens
- Differentiable and Fast Geometric Median in NumPy and PyTorch
- Self-regulating Prompts: Foundational Model Adaptation without Forgetting [ICCV 2023]
  - Used in: `configs/` and `src/models/`
- MaPLe: Multi-modal Prompt Learning [CVPR 2023]
  - Used in: `configs/models/maple/...yaml` and `data/clip/maple/imagenet/model.pth.tar-2`
Our code is distributed under an MIT license; please see the LICENSE file for details. The NOTICE file lists the licenses of all third-party code included in this repository. Please include the contents of the LICENSE and NOTICE files in all redistributions of this code.
If you find our code or models useful in your work, please cite our paper:
```bibtex
@inproceedings{lim2025sparse,
  title={Sparse autoencoders reveal selective remapping of visual concepts during adaptation},
  author={Hyesu Lim and Jinho Choi and Jaegul Choo and Steffen Schneider},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=imT03YXlG2}
}
```