SII-OpenMOSS
OpenMOSS presents a collection of our research on large language models (LLMs), supported by SII, Fudan, and Mosi.
Repositories
- ReAttention
[ICLR 2025] ReAttention, a training-free approach to breaking the maximum context-length limit in length extrapolation
- Thus-Spake-Long-Context-LLM
A survey of long-context LLMs from four perspectives: architecture, infrastructure, training, and evaluation
- Language-Model-SAEs
For the OpenMOSS Mechanistic Interpretability Team's sparse autoencoder (SAE) research.
- Ultra-Innerthought
Ultra-Innerthought is a bilingual (Chinese and English) open-domain R1/o1-style SFT dataset.
- TransformerLens (forked from TransformerLensOrg/TransformerLens)
A library for mechanistic interpretability of GPT-style language models
- LongSafety