Network Compression and Acceleration Papers

Survey papers

  1. Efficient processing of deep neural networks: A tutorial and survey. Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, and Joel Emer. paper
  2. A survey of model compression and acceleration for deep neural networks. Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. paper
  3. An analysis of deep neural network models for practical applications. Alfredo Canziani, Adam Paszke, and Eugenio Culurciello. paper
  4. Deep convolutional neural networks for image classification: A comprehensive review. Waseem Rawat and Zenghui Wang. paper

Knowledge distillation

  1. Distilling the knowledge in a neural network. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. paper
  2. Fitnets: Hints for thin deep nets. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio.
  3. Harnessing deep neural networks with logic rules. Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing.
  4. Do deep nets really need to be deep? Jimmy Ba and Rich Caruana.
  5. Do deep convolutional nets really need to be deep and convolutional? Gregor Urban, Krzysztof J Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Rich Caruana, Abdelrahman Mohamed, Matthai Philipose, and Matt Richardson.
  6. Transferring knowledge from a rnn to a dnn. William Chan, Nan Rosemary Ke, and Ian Lane.
  7. Face model compression by distilling knowledge from neurons. Ping Luo, Zhenyao Zhu, Ziwei Liu, Xiaogang Wang, Xiaoou Tang, et al.
  8. Like what you like: Knowledge distill via neuron selectivity transfer. Zehao Huang and Naiyan Wang.
  9. Darkrank: Accelerating deep metric learning via cross sample similarities transfer. Yuntao Chen, Naiyan Wang, and Zhaoxiang Zhang.
  10. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. Sergey Zagoruyko and Nikos Komodakis.
  11. Accelerating convolutional neural networks with dominant convolutional kernel and knowledge pre-regression. Zhenyang Wang, Zhidong Deng, and Shiyao Wang.
  12. Rocket launching: A universal and efficient framework for training well-performing light net. Guorui Zhou, Ying Fan, Runpeng Cui, Weijie Bian, Xiaoqiang Zhu, and Kun Gai.
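
For orientation, the sketch below shows the soft-target distillation loss popularized by entry 1 (Hinton et al.) in PyTorch-style code. It is a minimal sketch only: the temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not values taken from any particular paper above.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation: KL divergence between temperature-softened
    teacher and student distributions, mixed with the usual hard-label loss."""
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                   # rescale gradients for the softened targets
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```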

Network pruning

  1. Optimal brain damage. Yann LeCun, John S Denker, and Sara A Solla.
  2. Second order derivatives for network pruning: Optimal brain surgeon. Babak Hassibi, David G Stork, et al.
  3. Learning both weights and connections for efficient neural network. Song Han, Jeff Pool, John Tran, and William Dally.
  4. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. Song Han, Huizi Mao, and William J Dally.
  5. Dsd: Dense-sparse-dense training for deep neural networks. Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, et al.
  6. Data-free parameter pruning for deep neural networks. Suraj Srinivas and R Venkatesh Babu.
  7. Dynamic network surgery for efficient dnns. Yiwen Guo, Anbang Yao, and Yurong Chen.
  8. Faster cnns with direct sparse convolutions and guided pruning. Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, and Pradeep Dubey.
  9. Pruning convolutional neural networks for resource efficient transfer learning. Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz.
  10. Pruning filters for efficient convnets. Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf.
  11. Structured pruning of deep convolutional neural networks. Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung.
  12. A simple yet effective method to prune dense layers of neural networks. Mohammad Babaeizadeh, Paris Smaragdis, and Roy H Campbell.
  13. Designing energy-efficient convolutional neural networks using energy-aware pruning. Tien-Ju Yang, Yu-Hsin Chen, and Vivienne Sze.
  14. Bayesian compression for deep learning. Christos Louizos, Karen Ullrich, and Max Welling.
  15. Weight uncertainty in neural networks. Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra.
  16. An entropy-based pruning method for cnn compression. Jian-Hao Luo and Jianxin Wu.
  17. Exploring the regularity of sparse structure in convolutional neural networks. Huizi Mao, Song Han, Jeff Pool, Wenshuo Li, Xingyu Liu, Yu Wang, and William J Dally.
  18. Fast convnets using group-wise brain damage. Vadim Lebedev and Victor Lempitsky.
  19. Thinet: A filter level pruning method for deep neural network compression. Jian-Hao Luo, Jianxin Wu, and Weiyao Lin.
  20. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang.
  21. Accelerating deep learning with shrinkage and recall. Shuai Zheng, Abhinav Vishnu, and Chris Ding.
  22. Prune the convolutional neural networks with sparse shrink. Xin Li and Changsong Liu.
  23. Neuron pruning for compressing deep networks using maxout architectures. Fernando Moya Rueda, Rene Grzeszick, and Gernot A Fink.
  24. Fine-pruning: Joint fine-tuning and compression of a convolutional network with bayesian optimization. Frederick Tung, Srikanth Muralidharan, and Greg Mori.
  25. Structured bayesian pruning via log-normal multiplicative noise. Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov.
  26. Towards evolutional compression. Yunhe Wang, Chang Xu, Jiayan Qiu, Chao Xu, and Dacheng Tao.
  27. Lazy evaluation of convolutional filters. Sam Leroux, Steven Bohez, Cedric De Boom, Elias De Coninck, Tim Verbelen, Bert Vankeirsbilck, Pieter Simoens, and Bart Dhoedt.
  28. Sparsely-connected neural networks: Towards efficient vlsi implementation of deep neural networks. Arash Ardakani, Carlo Condo, and Warren J Gross.
  29. Net-trim: A layer-wise convex pruning of deep neural networks. Alireza Aghasi, Nam Nguyen, and Justin Romberg.
  30. Learning with confident examples: Rank pruning for robust classification with noisy labels. Curtis G. Northcutt, Tailin Wu, and Isaac L. Chuang.
  31. Compact deep convolutional neural networks with coarse pruning. Sajid Anwar and Wonyong Sung.
  32. Towards thinner convolutional neural networks through gradually global pruning. Zhengtao Wang, Ce Zhu, Zhiqiang Xia, Qi Guo, and Yipeng Liu.
  33. The incredible shrinking neural network: New perspectives on learning representations through the lens of pruning. Nikolas Wolfe, Aditya Sharma, Lukas Drude, and Bhiksha Raj.
  34. Training skinny deep neural networks with iterative hard thresholding methods. Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, and Shuicheng Yan.
  35. Reducing the model order of deep neural networks using information theory. Ming Tu, Visar Berisha, Yu Cao, and Jae Sun Seo.
  36. Learning efficient convolutional networks through network slimming. Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang.
  37. Channel pruning for accelerating very deep neural networks. Yihui He, Xiangyu Zhang, and Jian Sun.
  38. Incomplete dot products for dynamic computation scaling in neural network inference. Bradley McDanel, Surat Teerapittayanon, and H. T. Kung.
  39. To prune, or not to prune: exploring the efficacy of pruning for model compression. Michael Zhu and Suyog Gupta.
  40. Data-driven sparse structure selection for deep neural networks. Zehao Huang and Naiyan Wang.
  41. Pruning convnets online for efficient specialist models. Jia Guo and Miodrag Potkonjak.
  42. Sparse convolutional neural networks. Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky.
  43. Group sparse regularization for deep neural networks. Simone Scardapane, Danilo Comminiello, Amir Hussain, and Aurelio Uncini.
  44. The power of sparsity in convolutional neural networks. Soravit Changpinyo, Mark Sandler, and Andrey Zhmoginov.
  45. Spatially-sparse convolutional neural networks. Benjamin Graham.
  46. Shakeout: A new approach to regularized deep neural network training. Guoliang Kang, Jun Li, and Dacheng Tao.
  47. Sparse activity and sparse connectivity in supervised learning. Markus Thom and Gunther Palm.
  48. Learning structured sparsity in deep neural networks. Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li.
  49. Perforatedcnns: Acceleration through elimination of redundant convolutions. Mikhail Figurnov, Aizhan Ibraimova, Dmitry P Vetrov, and Pushmeet Kohli.
  50. Training compressed fully-connected networks with a density-diversity penalty. Shengjie Wang, Haoran Cai, Jeff Bilmes, and William Noble.
  51. Stochasticnet: Forming deep neural networks via stochastic connectivity. Mohammad Javad Shafiee, Parthipan Siva, and Alexander Wong.
  52. Deep roots: Improving cnn efficiency with hierarchical filter groups. Yani Ioannou, Duncan Robertson, Roberto Cipolla, and Antonio Criminisi.
  53. Less is more: Towards compact cnns. Hao Zhou, Jose M Alvarez, and Fatih Porikli.
  54. More is less: A more complicated network with less inference complexity. Xuanyi Dong, Junshi Huang, Yi Yang, and Shuicheng Yan.
  55. Memory bounded deep convolutional networks. Maxwell D Collins and Pushmeet Kohli.
  56. Combined group and exclusive sparsity for deep neural networks. Jaehong Yoon and Sung Ju Hwang.
  57. On compressing deep models by low rank and sparse decomposition. Xiyu Yu, Tongliang Liu, Xinchao Wang, and Dacheng Tao.
  58. Speeding up convolutional neural networks by exploiting the sparsity of rectifier units. Shaohuai Shi and Xiaowen Chu.
  59. Alternating direction method of multipliers for sparse convolutional neural networks. Farkhondeh Kiaee, Christian Gagné, and Mahdieh Abbasi.
  60. Training sparse neural networks. Suraj Srinivas, Akshayvarun Subramanya, and R. Venkatesh Babu.
  61. Dyvedeep: Dynamic variable effort deep neural networks. Sanjay Ganapathy, Swagath Venkataramani, Balaraman Ravindran, and Anand Raghunathan.
  62. Freezeout: Accelerate training by progressively freezing layers. Andrew Brock, Theodore Lim, J. M. Ritchie, and Nick Weston.
  63. Convolutional neural networks at constrained time cost. Kaiming He and Jian Sun.
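
As a rough illustration of the magnitude-based pruning that several entries above build on (e.g. entries 3 and 4), the sketch below zeroes out the globally smallest weights of a PyTorch model. The sparsity target, the weight-only selection, and the mask handling are illustrative assumptions, and a fine-tuning pass would normally follow.

```python
import torch

def magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights globally across the model."""
    weights = [p for name, p in model.named_parameters() if name.endswith("weight")]
    all_vals = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(sparsity * all_vals.numel()))
    threshold = all_vals.kthvalue(k).values          # global magnitude cutoff
    masks = []
    with torch.no_grad():
        for w in weights:
            mask = (w.abs() > threshold).to(w.dtype)
            w.mul_(mask)                             # prune in place
            masks.append(mask)                       # re-apply after each optimizer step
    return masks
```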

Network quantization

  1. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. Kyuyeon Hwang and Wonyong Sung.
  2. Fixed point quantization of deep convolutional networks. Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy.
  3. Binaryconnect: Training deep neural networks with binary weights during propagations. Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David.
  4. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio.
  5. Quantized neural networks: Training neural networks with low precision weights and activations. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio.
  6. Xnor-net: Imagenet classification using binary convolutional neural networks. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi.
  7. Bitwise neural networks. Minje Kim and Paris Smaragdis.
  8. Training quantized nets: A deeper understanding. Hao Li, Soham De, Zheng Xu, Christoph Studer, Hanan Samet, and Tom Goldstein.
  9. Shiftcnn: Generalized low-precision architecture for inference of convolutional neural networks. Denis A. Gudovskiy and Luca Rigazio.
  10. Gated xnor networks: Deep neural networks with ternary weights and activations under a unified discretization framework. Lei Deng, Peng Jiao, Jing Pei, Zhenzhi Wu, and Guoqi Li.
  11. The high-dimensional geometry of binary neural networks. Alexander G Anderson and Cory P Berg.
  12. Compressing deep convolutional networks using vector quantization. Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev.
  13. Compression of deep neural networks on the fly. Guillaume Soulie, Vincent Gripon, and Maelys Robert.
  14. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou.
  15. Ternary weight networks. Fengfu Li, Bo Zhang, and Bin Liu.
  16. Trained ternary quantization. Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally.
  17. Incremental network quantization: Towards lossless cnns with low-precision weights. Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen.
  18. Quantized convolutional neural networks for mobile devices. Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng.
  19. Compressing neural networks with the hashing trick. Wenlin Chen, James Wilson, Stephen Tyree, Kilian Weinberger, and Yixin Chen.
  20. Scalable and sustainable deep learning via randomized hashing. Ryan Spring and Anshumali Shrivastava.
  21. Functional hashing for compressing neural networks. Lei Shi, Shikun Feng, et al.
  22. Compressing convolutional neural networks. Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen.
  23. Compressing convolutional neural networks in the frequency domain. Wenlin Chen, James Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen.
  24. Neural networks with few multiplications. Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio.
  25. Training binary multilayer neural networks for image classification using expectation backpropagation. Zhiyong Cheng, Daniel Soudry, Zexi Mao, and Zhenzhong Lan.
  26. Improving the speed of neural networks on cpus. Vincent Vanhoucke, Andrew Senior, and Mark Z Mao.
  27. Soft weight-sharing for neural network compression. Karen Ullrich, Edward Meeds, and Max Welling.
  28. Towards the limit of network quantization. Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee.
  29. Tensorizing neural networks. Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov.
  30. Deep learning with limited numerical precision. Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan.
  31. Finite precision error analysis of neural network hardware implementations. Jordan L Holi and J-N Hwang.
  32. Deep learning with low precision by halfwave gaussian quantization. Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos.
  33. Deep quantization: Encoding convolutional activations with deep generative model. Zhaofan Qiu, Ting Yao, and Tao Mei.
  34. Weighted-entropy-based quantization for deep neural networks. Eunhyeok Park, Junwhan Ahn, and Sungjoo Yoo.
  35. Extremely low bit neural network: Squeeze the last bit out with admm. Cong Leng, Hao Li, Shenghuo Zhu, and Rong Jin.
  36. Learning accurate low-bit deep neural networks with stochastic quantization. Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Jun Zhu, and Hang Su.
  37. Adaptive weight compression for memory-efficient neural networks. Jong Hwan Ko, Duckhwan Kim, Taesik Na, Jaeha Kung, and Saibal Mukhopadhyay.
  38. Balanced quantization: An effective and efficient approach to quantized neural networks. Shu-Chang Zhou, Yu-Zhi Wang, He Wen, Qin-Yao He, and Yu-Heng Zou.
  39. Energy-efficient convnets through approximate computing. Bert Moons, Bert De Brabandere, Luc Van Gool, and Marian Verhelst.
  40. Binarized convolutional neural networks with separable filters for efficient hardware acceleration. Jeng Hau Lin, Tianwei Xing, Ritchie Zhao, Zhiru Zhang, Mani Srivastava, Zhuowen Tu, and Rajesh K. Gupta.
  41. Performance guaranteed network acceleration via high-order residual quantization. Zefan Li, Bingbing Ni, Wenjun Zhang, Xiaokang Yang, and Wen Gao.
  42. Loss-aware binarization of deep networks. Lu Hou, Quanming Yao, and James T Kwok.
  43. Bitnet: Bit-regularized deep neural networks. Aswin Raghavan, Mohamed Amer, and Sek Chai.
  44. Analytical guarantees on numerical precision of deep neural networks. Charbel Sakr, Yongjune Kim, and Naresh Shanbhag.
  45. Mixed low-precision deep learning inference using dynamic fixed point. Naveen Mellempudi, Abhisek Kundu, Dipankar Das, Dheevatsa Mudigere, and Bharat Kaul.
  46. Understanding the impact of precision quantization on the accuracy and energy of neural networks. Soheil Hashemi, Nicholas Anthony, Hokchhay Tann, R. Iris Bahar, and Sherief Reda.
  47. Soft-to-hard vector quantization for end-to-end learned compression of images and neural networks. Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, and Luc Van Gool.
  48. Espresso: Efficient forward propagation for bcnns. Fabrizio Pedersoli, George Tzanetakis, and Andrea Tagliasacchi.
  49. Intra-layer nonuniform quantization of convolutional neural network. Fangxuan Sun, Jun Lin, and Zhongfeng Wang.
  50. Scalable compression of deep neural networks. Xing Wang and Jie Liang.
  51. Embedded binarized neural networks. Bradley McDanel, Surat Teerapittayanon, and H. T. Kung.
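
To give a concrete feel for the binary end of this section, here is a minimal sketch of XNOR-Net-style weight binarization (entry 6): a weight tensor is approximated by alpha * sign(W), with one scaling factor alpha (the mean absolute value) per output channel. Training details such as the straight-through estimator are only noted in a comment.

```python
import torch

def binarize_weights(w):
    """XNOR-Net-style binarization of a conv/fc weight tensor.
    Assumes w has at least two dimensions, with output channels first."""
    reduce_dims = tuple(range(1, w.dim()))
    alpha = w.abs().mean(dim=reduce_dims, keepdim=True)   # per-output-channel scale
    return alpha * torch.sign(w)

# During training, full-precision weights are kept and updated while the
# binarized copies are used in the forward/backward pass (straight-through
# estimator); the sketch above shows only the quantization step itself.
```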

Low rank approximation

  1. Predicting parameters in deep learning. Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al.
  2. Learning separable filters. Roberto Rigamonti, Amos Sironi, Vincent Lepetit, and Pascal Fua.
  3. Speeding up convolutional neural networks with low rank expansions. Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman.
  4. Exploiting linear structure within convolutional networks for efficient evaluation. Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus.
  5. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky.
  6. Efficient and accurate approximations of nonlinear convolutional networks. Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun.
  7. Compression of fully-connected layer in neural network by kronecker product. Shuchang Zhou and Jia-Nan Wu.
  8. Restructuring of deep neural network acoustic models with singular value decomposition. Jian Xue, Jinyu Li, and Yifan Gong.
  9. An exploration of parameter redundancy in deep networks with circulant projections. Yu Cheng, Felix X Yu, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shi-Fu Chang.
  10. Convolutional neural networks with low-rank regularization. Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al.
  11. Deep fried convnets. Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang.
  12. Training cnns with low-rank filters for efficient image classification. Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi.
  13. Factorized convolutional neural networks. Min Wang, Baoyuan Liu, and Hassan Foroosh.
  14. Compression of deep convolutional neural networks for fast and low power mobile applications. Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin.
  15. Accelerating convolutional neural networks for mobile applications. Peisong Wang and Jian Cheng.
  16. Decomposeme: Simplifying convnets for end-to-end learning. Jose Alvarez and Lars Petersson.
  17. Structured transforms for small-footprint deep learning. Vikas Sindhwani, Tara Sainath, and Sanjiv Kumar.
  18. Design of efficient convolutional layers using single intrachannel convolution, topological subdivisioning and spatial bottleneck structure. Min Wang, Baoyuan Liu, and Hassan Foroosh.
  19. Low-rank matrix factorization for deep neural network training with high-dimensional output targets. Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran.
  20. Low precision neural networks using subband decomposition. Sek Chai, Aswin Raghavan, David Zhang, Mohamed Amer, and Tim Shields.
  21. Beyond filters: Compact feature map for portable deep model. Yunhe Wang, Chang Xu, Chao Xu, and Dacheng Tao.
  22. Theoretical properties for neural networks with weight matrices of low displacement rank. Liang Zhao, Siyu Liao, Yanzhi Wang, Jian Tang, and Bo Yuan.
  23. Coordinating filters for faster deep neural networks. Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li.
  24. Ultimate tensorization: compressing convolutional and fc layers alike. Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, and Dmitry Vetrov.
  25. Simplifying deep neural networks for neuromorphic architectures. Jaeyong Chung and Taehwan Shin.
  26. Network sketching: Exploiting binary structure in deep cnns. Yiwen Guo, Anbang Yao, Hao Zhao, and Yurong Chen.
  27. Improving efficiency in convolutional neural network with multilinear filters. Dat Thanh Tran, Alexandros Iosifidis, and Moncef Gabbouj.
  28. Analysis and design of convolutional networks via hierarchical tensor decompositions. Nadav Cohen, Or Sharir, Yoav Levine, Ronen Tamari, David Yakira, and Amnon Shashua.
  29. Structured convolution matrices for energy-efficient deep learning. Rathinakumar Appuswamy, Tapan Nayak, John Arthur, Steven Esser, Paul Merolla, Jeffrey Mckinstry, Timothy Melano, Myron Flickner, and Dharmendra Modha.
  30. Circnn: Accelerating and compressing deep neural networks using block-circulant weight matrices. Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, Youwei Zhuo, Chao Wang, Xuehai Qian, Yu Bai, and Geng Yuan.
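
As a simple example of the low-rank idea, the sketch below factorizes a fully connected weight matrix with a truncated SVD, in the spirit of the SVD restructuring work above (e.g. entry 8). The choice of `rank` is a free parameter, and the factors are usually fine-tuned afterwards.

```python
import torch

def low_rank_factorize(weight, rank):
    """Truncated-SVD factorization of an (out_features, in_features) weight
    matrix into two thin factors such that weight ≈ left @ right."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    left = U[:, :rank] * S[:rank]      # (out_features, rank)
    right = Vh[:rank, :]               # (rank, in_features)
    return left, right

# Replacing y = W x by y = left @ (right @ x) cuts the multiply count from
# out*in down to rank*(out + in).
```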

Dynamic computation

  1. Dyvedeep: Dynamic variable effort deep neural networks. Sanjay Ganapathy, Swagath Venkataramani, Balaraman Ravindran, and Anand Raghunathan.
  2. Spatially adaptive computation time for residual networks. Michael Figurnov, Maxwell D Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Ruslan Salakhutdinov.
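
The sketch below is only a loose illustration of the dynamic-computation idea behind these papers: spend less computation on easy inputs by exiting early when an intermediate prediction is confident. The names `blocks`, `classifiers`, and `threshold` are hypothetical and not taken from either paper.

```python
import torch

@torch.no_grad()
def early_exit_inference(blocks, classifiers, x, threshold=0.9):
    """Confidence-gated early exit: run the network block by block and stop
    once an intermediate classifier is confident enough. `blocks` and
    `classifiers` are equal-length lists of modules, where classifiers[i]
    maps the features after blocks[i] to class logits."""
    for block, head in zip(blocks, classifiers):
        x = block(x)
        probs = torch.softmax(head(x), dim=1)
        confidence, prediction = probs.max(dim=1)
        if bool((confidence > threshold).all()):   # whole batch is confident
            return prediction
    return prediction                              # fall through to the last head
```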

Compact network design

  1. Mobilenets: Efficient convolutional neural networks for mobile vision applications. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam.
  2. Flattened convolutional neural networks for feedforward acceleration. Jonghoon Jin, Aysegul Dundar, and Eugenio Culurciello.
  3. Lcnn: Lookup-based convolutional neural network. Hessam Bagherinezhad, Mohammad Rastegari, and Ali Farhadi.
  4. Local binary convolutional neural networks. Felix Juefei-Xu, Vishnu Naresh Boddeti, and Marios Savvides.
  5. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5MB model size. Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer.
  6. A compact dnn: Approaching googlenet-level accuracy of classification and domain adaptation. Chunpeng Wu, Wei Wen, Tariq Afzal, Yongmei Zhang, Yiran Chen, and Hai Li.
  7. Shufflenet: An extremely efficient convolutional neural network for mobile devices. Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun.
  8. Deep simnets. Nadav Cohen, Or Sharir, and Amnon Shashua.
  9. Densely connected convolutional networks. Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten.
  10. Genetic cnn. Lingxi Xie and Alan Yuille.
  11. Sep-nets: Small and effective pattern networks. Zhe Li, Xiaoyu Wang, Xutao Lv, and Tianbao Yang.
  12. Learning the structure of deep convolutional networks. Jiashi Feng and Trevor Darrell.
  13. Convolutional neural fabrics. Shreyas Saxena and Jakob Verbeek.
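
A representative compact building block is the depthwise separable convolution used by MobileNets (entry 1): a per-channel 3x3 convolution followed by a 1x1 pointwise convolution. The sketch below is illustrative; the BatchNorm/ReLU placement follows common practice rather than any single paper above.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))
```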

Others

  1. Group equivariant convolutional networks. Taco S Cohen and Max Welling.
  2. Doubly convolutional neural networks. Shuangfei Zhai, Yu Cheng, Weining Lu, and Zhongfei Zhang.
  3. Understanding and improving convolutional neural networks via concatenated rectified linear units. Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee.
  4. Multi-bias non-linear activation in deep neural networks. Hongyang Li, Wanli Ouyang, and Xiaogang Wang.
  5. Exploiting cyclic symmetry in convolutional neural networks. Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu.
  6. Building correlations between filters in convolutional neural networks. H. Wang, P. Chen, and S Kwong.
  7. Fast training of convolutional networks through ffts. Michael Mathieu, Mikael Henaff, and Yann LeCun.
  8. Fast algorithms for convolutional neural networks. Andrew Lavin and Scott Gray.
  9. Fast convolutional nets with fbfft: A gpu performance evaluation. Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun.
  10. Eie: efficient inference engine on compressed deep neural network. Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally.
  11. Cnnpack: packing convolutional neural networks in the frequency domain. Yunhe Wang, Chang Xu, Shan You, Dacheng Tao, and Chao Xu.
  12. Low-memory gemm-based convolution algorithms for deep neural networks. Andrew Anderson, Aravind Vasudevan, Cormac Keane, and David Gregg.
  13. Convolution in convolution for network in network. Y. Pang, M. Sun, X. Jiang, and X. Li.


