- Neural Architecture Search: A Survey
- Efficient Progressive Neural Architecture Search
- Complementary Binary Quantization for Joint Multiple Indexing
- Efficient DNN Neuron Pruning by Minimizing Layer-wise Nonlinear Reconstruction Error
- Accelerating Convolutional Networks via Global and Dynamic Filter Pruning
- Progressive Blockwise Knowledge Distillation for Neural Network Acceleration
- Where to Prune: Using LSTM to Guide End-to-end Pruning
- Structured Probabilistic Pruning for Convolutional Neural Network Acceleration
- Distilling the Knowledge in a Neural Network
- Do Deep Convolutional Nets Really Need to be Deep and Convolutional?
- Variational Dropout Sparsifies Deep Neural Networks
- Like What You Like: Knowledge Distill via Neuron Selectivity Transfer
- Learning Intrinsic Sparse Structures within Long Short-Term Memory
- PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning
- NISP: Pruning Networks using Neuron Importance Score Propagation
- Learning Global Additive Explanations for Neural Nets Using Model Distillation
- Regularized Evolution for Image Classifier Architecture Search
- Efficient Sparse-Winograd Convolutional Neural Networks
- Attention-Based Guided Structured Sparsity of Deep Neural Networks
- Born-Again Neural Networks
- A Neurobiological Evaluation Metric for Neural Network Model Search
- HAQ: Hardware-Aware Automated Quantization with Mixed Precision
- Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation
- Towards Federated Learning at Scale: System Design
- The State of Sparsity in Deep Neural Networks
- Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search
- MFAS: Multimodal Fusion Architecture Search
- Luck Matters: Understanding Training Dynamics of Deep ReLU Networks
- Filter Grafting for Deep Neural Networks
- Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks
- Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration
- DropNet: Reducing Neural Network Complexity via Iterative Pruning
- Deep Neural Networks for Object Detection
- Coreset-Based Neural Network Compression
- A theory of learning from different domains
- "Learning-Compression" Algorithms for Neural Net Pruning
- Compressing Neural Networks with the Hashing Trick
- Context-aware Deep Feature Compression for High-speed Visual Tracking
- Contrastive Representation Distillation
- Low-rank Compression of Neural Nets: Learning the Rank of Each Layer
- ChamNet: Towards Efficient Network Design through Platform-Aware Model Adaptation
- Learning to generate chairs with convolutional neural networks
- Dynamic Model Pruning with Feedback
- Structured Multi-Hashing for Model Compression
- Ensemble Distribution Distillation
- Discrete Model Compression with Resource Constraint for Deep Neural Networks
- Multi-Dimensional Pruning: A Unified Framework for Model Compression
- Online Knowledge Distillation via Collaborative Learning
- The Knowledge Within: Methods for Data-Free Model Compression
- Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing
- A Diversity-Penalizing Ensemble Training Method for Deep Learning
- Knowledge Consistency between Neural Networks and Beyond
- Structured Compression by Weight Encryption for Unstructured Pruning and Quantization
- Few Sample Knowledge Distillation for Efficient Network Compression
- GAN Compression: Efficient Architectures for Interactive Conditional GANs
- Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression
- HRank: Filter Pruning using High-Rank Feature Map
- Neural Network Pruning with Residual-Connections and Limited-Data
- ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions
- Discrimination-aware Channel Pruning for Deep Neural Networks
- Frequency-Domain Dynamic Pruning for Convolutional Neural Networks
- Variational Mutual Information Distillation for Transfer Learning
- Bayesian Dark Knowledge
- CNNpack: Packing Convolutional Neural Networks in the Frequency Domain
- Sobolev Training for Neural Networks
- Learning the Structure of Deep Architectures Using l1 Regularization
- Heterogeneous Knowledge Distillation using Information Flow Modeling
- Picking Winning Tickets Before Training by Preserving Gradient Flow
- SNAS: Stochastic Neural Architecture Search
- A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers
- Towards Evolutionary Compression
- APQ: Joint Search for Network Architecture, Pruning and Quantization Policy
- MineGAN: effective knowledge transfer from GANs to target domains with few images
- Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
- Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-based Approach
- Distilling Knowledge from Graph Convolutional Networks
- Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN
- Distilling Cross-Task Knowledge via Relationship Matching
- A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning
- Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion
- Revisiting Knowledge Distillation via Label Smoothing Regularization
- Regularizing Class-wise Predictions via Self-knowledge Distillation
- Data-Driven Sparse Structure Selection for Deep Neural Networks
- Less is More: Towards Compact CNNs
- Towards Effective Low-bitwidth Convolutional Neural Networks