
Hi there 👋

I am a Ph.D. candidate at the SMILE Lab of Northeastern University (Boston, USA). Before that, I spent seven wonderful years at Zhejiang University (Hangzhou, China), where I received my B.E. and M.S. degrees.

I am interested in a variety of topics in computer vision and machine learning. My research centers on efficient deep learning (a.k.a. model compression), spanning from the common image classification task (GReg, Awesome-PaI, TPP) to neural style transfer (Collaborative-Distillation), single image super-resolution (ASSL, SRP), and 3D novel view synthesis (R2L, MobileR2L).

I do my best to make my research easily reproducible.

🔥 NEWS: [NeurIPS'23] We are excited to present SnapFusion, a super-efficient mobile diffusion model that performs text-to-image generation in less than 2s 🚀 on mobile devices! [Arxiv] [Webpage]
🔥 NEWS: [CVPR'23] Check out our blazing-fast 🚀 neural rendering model for mobile devices: MobileR2L (the lightweight version of R2L), which renders 1008x756 images at 56 fps on an iPhone 13. [Arxiv] [Code]
🔥 NEWS: [ICLR'23] Check out the very first trainability-preserving filter pruning method: TPP. [Arxiv] [Code]
🔥 NEWS: Check out our preprint that deciphers the confusing benchmark situation in neural network (filter) pruning. [Arxiv] [Code]
✨ NEWS: Check out our investigation of what makes a "good" data augmentation in knowledge distillation, in NeurIPS 2022. [Webpage] [Code]
✨ NEWS: Check out our efficient NeRF project via distillation, in ECCV 2022: [R2L]

GitHub stats

Pinned

  1. Efficient-Deep-Learning (Public)

     Collection of recent methods on (deep) neural network compression and acceleration.

     895 stars · 128 forks

  2. snap-research/R2L (Public)

     [ECCV 2022] R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis

     Python · 182 stars · 22 forks

  3. Collaborative-Distillation (Public)

     [CVPR'20] Collaborative Distillation for Ultra-Resolution Universal Style Transfer (PyTorch)

     Python · 185 stars · 23 forks

  4. Regularization-Pruning (Public)

     [ICLR'21] Neural Pruning via Growing Regularization (PyTorch)

     Python · 77 stars · 22 forks

  5. Good-DA-in-KD (Public)

     [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective

     Python · 34 stars · 2 forks

  6. ASSL (Public)

     [NeurIPS'21 Spotlight] Aligned Structured Sparsity Learning for Efficient Image Super-Resolution (PyTorch)

     Python · 58 stars · 7 forks