This project is currently taken down. My apologies.
This repository has been archived by the owner on Nov 29, 2024. It is now read-only.
PaulPauls/llama3_interpretability_sae
About
A complete end-to-end pipeline for LLM interpretability with sparse autoencoders (SAEs) using Llama 3.2, written in pure PyTorch and fully reproducible.
Releases: No releases published