@mit-han-lab

MIT HAN Lab

Efficient AI Computing. PI: Song Han

Pinned

  1. streaming-llm Public

    [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (see the conceptual sketch after this list)

    Python 6.9k 382

  2. llm-awq Public

    [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

    Python 3k 247

  3. efficientvit Public

    Efficient vision foundation models for high-resolution generation and perception.

    Python 2.8k 218

  4. bevfusion Public archive

    [ICRA'23] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation

    Python 2.6k 465

  5. temporal-shift-module Public

    [ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding

    Python 2.1k 421

  6. once-for-all Public

    [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment

    Python 1.9k 341
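The streaming-llm entry above is built around "attention sinks": the KV cache keeps the first few tokens plus a sliding window of the most recent tokens, so memory stays bounded while the model streams over arbitrarily long input. Below is a minimal conceptual sketch of that cache-eviction idea in plain PyTorch. The `SinkKVCache` class, its parameters, and the tensor layout are illustrative assumptions, not the streaming-llm repository's actual API.

```python
import torch

class SinkKVCache:
    """Conceptual sketch (not the streaming-llm API): keep `sink_size` initial
    tokens plus a sliding window of the most recent `window_size` tokens."""

    def __init__(self, sink_size: int = 4, window_size: int = 1020):
        self.sink_size = sink_size
        self.window_size = window_size
        self.keys = None    # (batch, heads, seq, head_dim)
        self.values = None

    def append(self, k: torch.Tensor, v: torch.Tensor):
        # k, v: (batch, heads, new_tokens, head_dim) for the current step
        if self.keys is None:
            self.keys, self.values = k, v
        else:
            self.keys = torch.cat([self.keys, k], dim=2)
            self.values = torch.cat([self.values, v], dim=2)

        # Evict the middle of the sequence once the cache exceeds sink + window,
        # preserving the initial "sink" tokens and the most recent tokens.
        limit = self.sink_size + self.window_size
        if self.keys.size(2) > limit:
            self.keys = torch.cat(
                [self.keys[:, :, : self.sink_size],
                 self.keys[:, :, -self.window_size:]], dim=2)
            self.values = torch.cat(
                [self.values[:, :, : self.sink_size],
                 self.values[:, :, -self.window_size:]], dim=2)
        return self.keys, self.values
```

In this sketch the cache size never exceeds sink_size + window_size tokens, which is the property that lets a streaming model run over inputs far longer than its training context without the KV cache growing unboundedly.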

Repositories

Showing 10 of 60 repositories
  • nunchaku Public

    [ICLR 2025 Spotlight] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models

    Cuda 1,550 Apache-2.0 87 42 4 Updated Apr 27, 2025
  • vila-u Public

    [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation

    Python 306 MIT 8 18 0 Updated Apr 25, 2025
  • efficientvit Public

    Efficient vision foundation models for high-resolution generation and perception.

    Python 2,827 Apache-2.0 218 103 0 Updated Apr 24, 2025
  • Python 815 Apache-2.0 13 82 1 Updated Apr 22, 2025
  • llm-awq Public

    [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

    Python 2,961 MIT 246 154 8 Updated Apr 14, 2025
  • x-attention Public

    XAttention: Block Sparse Attention with Antidiagonal Scoring

    Python 141 6 2 1 Updated Mar 29, 2025
  • deepcompressor Public

    Model Compression Toolbox for Large Language Models and Diffusion Models

    Python 446 Apache-2.0 33 46 1 Updated Mar 28, 2025
  • omniserve Public

    [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention

    C++ 647 Apache-2.0 42 40 4 Updated Mar 5, 2025
  • torchsparse Public

    [MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.

    Cuda 1,324 MIT 157 36 2 Updated Feb 24, 2025
  • torchquantum Public

    A PyTorch-based framework for quantum-classical simulation, quantum machine learning, quantum neural networks, and parameterized quantum circuits, with support for easy deployment on real quantum computers.

    Jupyter Notebook 1,453 MIT 218 61 (4 issues need help) 10 Updated Feb 21, 2025