Use a meta-network to learn the importance and correlation of neural network weights
Using Teacher Assistants to Improve Knowledge Distillation: https://arxiv.org/pdf/1902.03393.pdf
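For reference, a minimal sketch of the Hinton-style distillation loss that the teacher-assistant (TAKD) pipeline applies at each teacher → assistant → student stage; the function name and the `T`/`alpha` defaults are illustrative assumptions, not taken from the linked paper's code:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target KL term plus hard-label cross-entropy. In TAKD the
    same loss is applied twice: teacher -> assistant, then assistant
    -> student, so each gap in model capacity stays small."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```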
Compressed CNNs for airplane classification in satellite images (APoZ-based parameter pruning, INT8 weight quantization)
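A minimal sketch of the APoZ (Average Percentage of Zeros) score this kind of channel pruning is based on: channels whose post-ReLU activations are zero most of the time contribute little and are pruned first. The helper name and the 0.9 threshold are illustrative, not from the linked repo:

```python
import torch

@torch.no_grad()
def channel_apoz(features):
    """APoZ per output channel for a post-ReLU activation tensor of
    shape (N, C, H, W): the fraction of zero entries in each channel."""
    return (features == 0).float().mean(dim=(0, 2, 3))  # (C,) in [0, 1]

# illustrative use on stand-in activations from a calibration batch
acts = torch.relu(torch.randn(64, 32, 8, 8))
scores = channel_apoz(acts)
keep = torch.nonzero(scores < 0.9).squeeze(1)  # indices of surviving channels
```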
Code for our WACV 2021 paper "Exploiting the Redundancy in Convolutional Filters for Parameter Reduction"
Bayesian Optimization-Based Global Optimal Rank Selection for Compression of Convolutional Neural Networks, IEEE Access
Code for testing DCT plus Sparse (DCTpS) networks
Neural Network Pruning Using Dependency Measures
[ICML 2018] "Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions"
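The Deep k-Means paper re-trains the network under a progressively harder cluster assignment; the sketch below shows only the basic hard-assignment weight-sharing step that k-means compression builds on (names and defaults are illustrative):

```python
import torch

def kmeans_share(weight, k=16, iters=10):
    """Toy weight-sharing step: cluster a layer's weights with 1-D
    k-means and snap each weight to its nearest centroid, so the layer
    stores only k distinct values plus per-weight indices."""
    flat = weight.detach().flatten()
    centroids = torch.linspace(flat.min().item(), flat.max().item(), k)
    for _ in range(iters):
        # assign each weight to its nearest centroid, then recenter
        assign = torch.argmin((flat[:, None] - centroids[None, :]).abs(), dim=1)
        for j in range(k):
            mask = assign == j
            if mask.any():
                centroids[j] = flat[mask].mean()
    return centroids[assign].view_as(weight), assign.view_as(weight)
```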
[ICLR 2022] "Audio Lottery: Speech Recognition Made Ultra-Lightweight, Noise-Robust, and Transferable", by Shaojin Ding, Tianlong Chen, Zhangyang Wang
ESPN: Extreme Sparse Pruned Network
Tools and libraries to run neural networks in Minecraft ⛏️
Compact representations of convolutional neural networks via weight pruning and quantization
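Minimal sketches of the two operations named above, assuming symmetric per-tensor INT8 quantization and global magnitude pruning; both helper names are illustrative:

```python
import torch

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: store int8 codes plus a
    single float scale; dequantize as q * scale."""
    scale = w.abs().max().item() / 127.0 or 1e-8  # guard all-zero tensors
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(w.numel() * sparsity)
    if k == 0:
        return w.clone()
    thresh = w.abs().flatten().kthvalue(k).values
    return torch.where(w.abs() > thresh, w, torch.zeros_like(w))
```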
Code for reproducing the results in the NNCodec ICML Workshop paper; also includes a demo prepared for the Neural Compression Workshop (NCW).
[ICLR 2023] Pruning Deep Neural Networks from a Sparsity Perspective
Neural network compression with SVD
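The idea behind SVD compression: factor a layer's weight matrix as W ≈ U_r S_r V_rᵀ and replace one dense layer with two thinner ones, saving parameters whenever r(m + n) < mn. A minimal PyTorch sketch (the function name and rank choice are illustrative):

```python
import torch
import torch.nn as nn

def svd_compress_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace a dense layer with weight W (out x in) by two thinner
    layers: V_r^T (r x in) followed by U_r S_r (out x r)."""
    W = layer.weight.data  # (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = Vh[:rank].clone()                # (r, in)
    second.weight.data = (U[:, :rank] * S[:rank]).clone()  # (out, r)
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)
```

For ranks close to min(out, in) the composed layers reproduce the original output almost exactly; the compression/accuracy trade-off is controlled by how aggressively the rank is truncated.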
Official PyTorch implementation of "Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming" (ICML'23)
An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.