EfficientDNNs

A collection of recent methods on DNN compression and acceleration. Methods for efficient DNNs fall mainly into five categories:

  • neural architecture re-designing or searching
    • maintain accuracy at lower cost (e.g., #Params, #FLOPs): MobileNet, ShuffleNet, etc.
    • maintain cost with higher accuracy: Inception, ResNeXt, Xception, etc.
  • pruning (including structured and unstructured; see the pruning sketch after this list)
  • quantization
  • matrix decomposition
  • knowledge distillation

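To make the pruning category concrete, here is a minimal NumPy sketch of unstructured magnitude pruning (the function name `magnitude_prune` and the 50% sparsity default are illustrative assumptions, not taken from any specific paper in this list):

```python
import numpy as np

def magnitude_prune(weight: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude entries of `weight` (unstructured pruning).

    `sparsity` is the fraction of entries to remove.
    """
    # Pick a threshold so that roughly `sparsity` of |weight| falls below it.
    threshold = np.quantile(np.abs(weight), sparsity)
    mask = np.abs(weight) >= threshold
    return weight * mask

# Example: prune half of the weights of a random 4x4 layer.
w = np.random.randn(4, 4)
w_pruned = magnitude_prune(w, sparsity=0.5)
print("non-zeros before:", np.count_nonzero(w), "after:", np.count_nonzero(w_pruned))
```

Structured pruning works analogously but removes whole filters, channels, or rows instead of individual weights, which translates more directly into speedups on standard hardware.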
About abbreviations: in the lists below, o stands for oral, w for workshop, s for spotlight, and b for best paper.

Papers

Papers-Adversarial Attacks

Papers-Interpretability

Papers-Knowledge Distillation

People (in alphabetical order)

Venues

Lightweight DNN Engines/APIs

Related Repos and Websites

News
