DeepHash

Papers related to deep hashing. A minimal Hamming-distance retrieval sketch follows the list below.

Single-Modal Deep Hashing Methods

(AAAI 2014) Supervised Hashing via Image Representation Learning paper code

(CVPR 2015) Simultaneous Feature Learning and Hash Coding with Deep Neural Networks paper

(TIP 2015) Bit-Scalable Deep Hashing With Regularized Similarity Learning for Image Retrieval and Person Re-Identification paper code

(IJCAI 2015) Convolutional Neural Networks for Text Hashing paper

(CVPR 2015) Deep Semantic Ranking Based Hashing for Multi-Label Image Retrieval paper code

(AAAI 2015) Deep Hashing for Compact Binary Codes Learning paper

(CVPR 2015) Deep Learning of Binary Hash Codes for Fast Image Retrieval paper code

(IJCAI 2016) Feature Learning based Deep Supervised Hashing with Pairwise Labels paper code

(AAAI 2016) Deep Hashing Network for Efficient Similarity Retrieval paper code

(AAAI 2016) Deep Quantization Network for Efficient Image Retrieval paper code

(CVPR 2016) Deep Supervised Hashing for Fast Image Retrieval paper code

(SIGIR 2017) Deep Semantic Hashing with Generative Adversarial Networks paper

(CVPR 2017) Deep Visual-Semantic Quantization for Efficient Image Retrieval paper code

(ICCV 2017) HashNet: Deep Learning to Hash by Continuation paper code

(CVPR 2018) HashGAN: Deep Learning to Hash with Pair Conditional Wasserstein GAN paper

(CVPR 2018) Deep Cauchy Hashing for Hamming Space Retrieval paper code

(CVPR 2018) Hashing as Tie-Aware Learning to Rank paper code

(ECCV 2018) Hashing with Binary Matrix Pursuit paper code

(TPAMI 2019) Hashing with Mutual Information paper code
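
Most of the supervised methods above train a CNN that maps an image to a short binary code and then rank the database by Hamming distance to the query code. The sketch below illustrates only that retrieval step, assuming the codes have already been produced by a trained hash network; the function names and the 48-bit setting are illustrative, not taken from any single paper.

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    # Hamming distance = number of bit positions where the codes differ.
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")  # nearest codes first

# Toy usage with random 48-bit codes; in practice the codes come from
# thresholding (sign) the hash layer of a trained network.
rng = np.random.default_rng(0)
db_codes = rng.integers(0, 2, size=(10000, 48), dtype=np.uint8)
query = rng.integers(0, 2, size=48, dtype=np.uint8)
print(hamming_rank(query, db_codes)[:10])  # indices of the top-10 neighbors
```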

PIM

Papers related to processing-in-memory (PIM). Short illustrative sketches follow the ReRAM and SRAM paper lists below.

ReRAM

Simulator

(TCAD 2012) NVSim: A Circuit-Level Performance, Energy, and Area Model for Emerging Nonvolatile Memory

(DATE 2016) MNSIM: Simulation platform for memristor-based neuromorphic computing system

(TCAD 2018) NeuroSim: A circuit-level macro model for benchmarking neuro-inspired architectures in online learning

Paper

(ISCA 2016) ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars

(ISCA 2016) PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory

(HPCA 2017) PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning

(DAC 2018) SNrram: An Efficient Sparse Neural Network Computation Architecture Based on Resistive Random-Access Memory

(ICCAD 2018) DL-RSIM: A simulation framework to enable reliable ReRAM-based accelerators for deep learning

(ASPDAC 2019) Learning the sparsity for ReRAM: mapping and pruning sparse neural network for ReRAM based accelerator

(ASPDAC 2019) CompRRAE: RRAM-based convolutional neural network accelerator with reduced computations through a runtime activation estimation

(VLSI 2019) SubMac: Exploiting the subword-based computation in RRAM-based CNN accelerator for energy saving and speedup

(Yiran Chen, 2019) ReBNN: in-situ acceleration of binarized neural networks in ReRAM using complementary resistive cell

(ISCA 2019) Sparse ReRAM engine: joint exploration of activation and weight sparsity in compressed neural networks

(TCAD 2020) SemiMap: A Semi-Folded Convolution Mapping for Speed-Overhead Balance on Crossbars

(Nature 2020) Fully hardware-implemented memristor convolutional neural network

(ISCA 2020) GaaS-X: Graph Analytics Accelerator Supporting Sparse Data Representation using Crossbar Architectures

(TCAD 2020) Long Live TIME: Improving Lifetime and Security for NVM-Based Training-in-Memory Systems
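
Accelerators such as ISAAC and PRIME map matrix-vector products onto ReRAM crossbars: weights are programmed as cell conductances, inputs are applied as word-line voltages, and each bit-line current accumulates one dot product. The code below is a rough functional model of a single crossbar tile under idealized assumptions (no ADC/DAC limits, IR drop, or device variation); all names and parameter values are illustrative.

```python
import numpy as np

def crossbar_mvm(weights, inputs, levels=16, g_min=1e-6, g_max=1e-4):
    """Idealized analog matrix-vector multiply on one ReRAM crossbar tile.

    weights: (rows, cols) matrix, mapped linearly onto cell conductances
             with `levels` programmable states between g_min and g_max (S).
    inputs:  (rows,) word-line voltages.
    Returns the (cols,) bit-line currents.
    """
    w_min, w_max = weights.min(), weights.max()
    # Quantize each weight to one of the available conductance levels.
    q = np.round((weights - w_min) / (w_max - w_min) * (levels - 1))
    conductance = g_min + q / (levels - 1) * (g_max - g_min)
    # Kirchhoff's current law: each bit-line sums V_i * G_ij over its rows.
    return inputs @ conductance

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128))   # one 128x128 tile of weights
x = rng.random(128)                   # normalized input voltages
print(crossbar_mvm(W, x).shape)       # (128,) column currents
```

Real designs also need a scheme for signed weights, e.g. differential column pairs or an offset column, which this sketch omits.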

SRAM

(ISCA 2018) Neural Cache: Bit-Serial In-Cache Acceleration of Deep Neural Networks

(HPCA 2019) Bit Prudent In-Cache Acceleration of Deep Convolutional Neural Networks
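
Neural Cache and its follow-ups compute multiply-accumulates bit-serially inside the cache arrays, combining operand bit planes with bitwise AND and weighted accumulation. The snippet below is a small software analogue of that bit-serial dot product, written only to illustrate the arithmetic; it is not a performance or architecture model.

```python
def bit_serial_dot(a, b, bits=8):
    """Dot product of two unsigned-integer vectors, computed one bit
    plane at a time, as in-cache bit-serial accelerators do in hardware."""
    acc = 0
    for i in range(bits):            # bit plane i of every element of a
        for j in range(bits):        # bit plane j of every element of b
            # AND the two bit planes element-wise, count the ones, and
            # accumulate with the corresponding power-of-two weight.
            ones = sum(((x >> i) & 1) & ((y >> j) & 1) for x, y in zip(a, b))
            acc += ones << (i + j)
    return acc

a, b = [3, 5, 7], [2, 4, 6]
assert bit_serial_dot(a, b) == sum(x * y for x, y in zip(a, b))  # 68
```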


QuantPaper

Papers related to neural network quantization. A minimal uniform-quantization sketch follows the list below.

Basic Network

(AlexNet) ImageNet Classification with Deep Convolutional Neural Networks

(VGG) Very Deep Convolutional Networks for Large-Scale Image Recognition / homepage

(GoogLeNet) Going deeper with convolutions

(ResNet) Deep Residual Learning for Image Recognition / slides

(ShuffleNet) ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

(MobileNet V1) MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

(MobileNet V2) MobileNetV2: Inverted Residuals and Linear Bottlenecks

(ICCV 2019) HAWQ: Hessian AWare Quantization of Neural Networks With Mixed-Precision

(ICCV 2019) Unsupervised Neural Quantization for Compressed-Domain Similarity Search

(ICCV 2019) Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks

(ICCV 2019) Learning Filter Basis for Convolutional Neural Network Compression

(CVPR 2019) Quantization Networks

(CVPR 2019) Fully Quantized Network for Object Detection

(CVPR 2019) HAQ: Hardware-Aware Automated Quantization with Mixed Precision (Song Han)

(CVPR 2019) Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network using Truncated Gaussian Approximation

(CVPR 2019) SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity through Low-Bit Quantization

(CVPR 2018) Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference (Google)

(ECCV 2018) Quantization Mimic: Towards Very Tiny CNN for Object Detection

(ECCV 2018) LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks

(ECCV 2018) Value-aware Quantization for Training and Inference of Neural Networks

NIPS, ICLR, etc. LINK

(ICLR 2019) Smart Ternary Quantization

(ICLR 2019) Relaxed Quantization for Discretized Neural Networks

(ICLR 2019) ProxQuant: Quantized Neural Networks via Proximal Operators

(ICLR 2019) Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets

(ICLR 2019) From Hard to Soft: Understanding Deep Network Nonlinearities via Vector Quantization and Statistical Inference

(ICLR 2019) Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm

(ICLR 2019) Analysis of Quantized Models

(ICLR 2019) Defensive Quantization: When Efficiency Meets Robustness (Song Han)

(ICLR 2018) Mixed Precision Training of Convolutional Neural Networks using Integer Operations (Intel)

(NIPS 2019) Focused Quantization for Sparse CNNs

(NIPS 2019) A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off

(NIPS 2019) Post training 4-bit quantization of convolutional networks for rapid-deployment

(NIPS 2019) Generalization Error Analysis of Quantized Compressive Learning

(NIPS 2019) Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization

(NIPS 2019) Double Quantization for Communication-Efficient Distributed Optimization

(ICML 2019) Improving Neural Network Quantization without Retraining using Outlier Channel Splitting

(AAAI 2019) Multi-Precision Quantized Neural Networks via Encoding Decomposition of {-1,+1}

(AAAI 2019) Efficient Quantization for Compact Neural Networks with Binary Weights and Low Bitwidth Activations

(AAAI 2019) Deep Neural Network Quantization via Layer-Wise Optimization Using Limited Training Data

(IJCAI 2019) KCNN: Kernel-wise Quantization to Remarkably Decrease Multiplications in Convolutional Neural Network

(IJCAI 2019) Binarized Neural Networks for Resource-Efficient Hashing with Minimizing Quantization Loss

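The schemes in this list differ mainly in how they map floating-point tensors onto a small set of integer levels. As a concrete reference point, here is a minimal sketch of asymmetric uniform (affine) quantization, the flavor used for integer-arithmetic-only inference; the function names and the 8-bit setting are illustrative choices, not taken from any single paper.

```python
import numpy as np

def affine_quantize(x, num_bits=8):
    """Asymmetric uniform quantization: x ~= scale * (q - zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min = min(float(x.min()), 0.0)   # quantization range must include zero
    x_max = max(float(x.max()), 0.0)   # so that zero maps to an exact integer
    scale = (x_max - x_min) / (qmax - qmin) or 1.0
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = affine_quantize(x)
print(np.abs(x - dequantize(q, scale, zp)).max())  # small reconstruction error
```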
