Guangxuan-Xiao
Attention is all we need


Hi there 👋

Pinned

  1. mit-han-lab/streaming-llm (Public)

    [ICLR 2024] Efficient Streaming Language Models with Attention Sinks

    Python · 6.3k stars · 354 forks

  2. mit-han-lab/smoothquant (Public)

    [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models

    Python · 1.1k stars · 117 forks

  3. mit-han-lab/fastcomposer (Public)

    FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention

    Python · 605 stars · 34 forks

  4. mit-han-lab/offsite-tuning (Public)

    Offsite-Tuning: Transfer Learning without Full Model

    Python · 361 stars · 36 forks

  5. torch-int (Public)

    Integer operators on GPUs for PyTorch.

    Python · 146 stars · 49 forks

  6. thunlp/NeuBA (Public)

    Python · 20 stars · 4 forks