
Papers - Summer 2018

Collection of papers for Stanford ML Group's summer 2018 reading group. Titles for each week are listed below, and more information on each paper can be found in the README file of each week's subdirectory. Discussions focused on numbered papers—bulleted papers were optional reading.

I. ResNets

  1. Deep Residual Learning for Image Recognition
  2. Identity Mappings in Deep Residual Networks
  3. Wide Residual Networks
  4. Densely Connected Convolutional Networks
  5. Aggregated Residual Transformations for Deep Neural Networks
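
For orientation, here is a minimal PyTorch sketch of the basic residual block that these papers build on and refine (an illustrative `BasicBlock`, not the authors' code):

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual block: output = ReLU(F(x) + x), following He et al. (2015)."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x  # skip connection carries the input forward unchanged
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)  # add the skip, then the final nonlinearity
```

Paper 2 studies a pre-activation variant that moves each BN/ReLU before its convolution so the skip path stays a pure identity.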

II. Normalization

  1. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
  2. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks
  3. Layer Normalization
  4. Instance Normalization: The Missing Ingredient for Fast Stylization
  5. Group Normalization
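
These methods differ mainly in which axes the statistics are computed over. A rough NumPy sketch (learnable scale and shift omitted), assuming activations of shape (N, C, H, W):

```python
import numpy as np

def normalize(x, axes, eps=1e-5):
    """Subtract the mean and divide by the std over the given axes."""
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(8, 32, 16, 16)            # (batch N, channels C, height H, width W)
batch_norm    = normalize(x, axes=(0, 2, 3))  # per channel, across the batch
layer_norm    = normalize(x, axes=(1, 2, 3))  # per sample, across all channels
instance_norm = normalize(x, axes=(2, 3))     # per sample and per channel
# Group norm: split C into groups, then normalize within each (sample, group).
g = x.reshape(8, 4, 8, 16, 16)                # 4 groups of 8 channels
group_norm = normalize(g, axes=(2, 3, 4)).reshape(x.shape)
```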

III. R-CNN

  1. Rich feature hierarchies for accurate object detection and semantic segmentation
  2. Fast R-CNN
  3. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
  4. Mask R-CNN
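
One ingredient shared across the whole R-CNN line is non-maximum suppression over scored region proposals. A minimal NumPy sketch, assuming boxes in (x1, y1, x2, y2) format:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the best box, drop overlaps, repeat."""
    order = scores.argsort()[::-1]  # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with the remaining candidates.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]  # discard heavily overlapping boxes
    return keep
```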

IV. Transfer Learning

  1. Representation Learning: A Review and New Perspectives
  2. Why Does Unsupervised Pre-training Help Deep Learning?
  3. Taskonomy: Disentangling Task Transfer Learning
  4. How transferable are features in deep neural networks?
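
Paper 4's findings motivate the standard fine-tuning recipe: reuse pretrained features and retrain only a new task head. A minimal sketch, assuming torchvision and a hypothetical 10-class target task:

```python
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)        # ImageNet-pretrained backbone
for param in model.parameters():
    param.requires_grad = False                 # freeze the transferred features
model.fc = nn.Linear(model.fc.in_features, 10)  # new head, trained from scratch
# Only model.fc.parameters() now receive gradients; pass those to the optimizer.
```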

V. Segmentation

  1. Fully Convolutional Networks for Semantic Segmentation
  2. U-Net: Convolutional Networks for Biomedical Image Segmentation
  3. Feature Pyramid Networks for Object Detection
  4. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
  5. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
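
Central to the DeepLab papers is atrous (dilated) convolution, which enlarges the receptive field without downsampling. In PyTorch this is just the `dilation` argument:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 65, 65)

# Standard 3x3 conv: each output looks at a 3x3 input window.
conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)

# Atrous 3x3 conv with rate 2: the same nine weights, but the taps are
# spread over a 5x5 window. padding=dilation preserves the spatial size.
atrous = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)

assert conv(x).shape == atrous(x).shape == x.shape
```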

VI. Adversarial Examples

  1. Intriguing properties of neural networks
  2. Explaining and Harnessing Adversarial Examples
  3. Distilling the Knowledge in a Neural Network
  4. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
  5. Practical Black-Box Attacks against Machine Learning
  6. Adversarial examples in the physical world
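
Paper 2 introduces the fast gradient sign method (FGSM). A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast gradient sign method from Goodfellow et al. (2014)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()  # also accumulates grads in model params; zero them before training
    # Step in the direction that increases the loss, bounded in L-infinity norm.
    return (x + eps * x.grad.sign()).detach()
```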

VII. Generative Models

  1. Auto-Encoding Variational Bayes
  2. Generative Adversarial Networks
  3. NICE: Non-linear Independent Components Estimation
  4. Pixel Recurrent Neural Networks
  5. Conditional Image Generation with PixelCNN Decoders
  6. Glow: Generative Flow with Invertible 1x1 Convolutions
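
A key device in paper 1 is the reparameterization trick, which makes sampling differentiable. A minimal PyTorch sketch, together with the Gaussian KL term from the ELBO:

```python
import torch

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) so gradients flow to mu and log_var."""
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)  # the noise is the only non-differentiable part
    return mu + eps * std

def kl_divergence(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, 1)), the regularizer in the VAE objective."""
    return -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
```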

VIII. Neural Architecture Search

  1. Neural Architecture Search with Reinforcement Learning
  2. Population Based Training of Neural Networks
  3. Learning Transferable Architectures for Scalable Image Recognition
  4. Efficient Neural Architecture Search via Parameter Sharing
  5. Neural Architecture Optimization
  6. Efficient Neural Architecture Search with Network Morphism
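
All of these papers attack the same loop: sample an architecture from a search space, evaluate it, and use the result to guide further sampling. A toy random-search baseline, with a hypothetical `train_and_evaluate` stubbed out for illustration:

```python
import random

SEARCH_SPACE = {"depth": [2, 4, 8], "width": [16, 32, 64], "kernel": [3, 5]}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    # Hypothetical stand-in: in practice, train a child network built from
    # `arch` and return its validation accuracy.
    return random.random()

best_arch, best_acc = None, 0.0
for _ in range(20):
    arch = sample_architecture()
    acc = train_and_evaluate(arch)
    if acc > best_acc:
        best_arch, best_acc = arch, acc
# The papers above replace this blind loop with RL controllers, gradient-based
# optimization, weight sharing, or network morphisms to cut the search cost.
```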

IX. Uncertainty

  1. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
  2. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
  3. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
  4. Learning Confidence for Out-of-Distribution Detection in Neural Networks
  5. Leveraging uncertainty information from deep neural networks for disease detection
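
Paper 1's test-time recipe is simple to sketch: leave dropout active and average several stochastic forward passes. A minimal PyTorch sketch (note that `model.train()` also affects batch norm layers, which would need separate handling):

```python
import torch

def mc_dropout_predict(model, x, num_samples=20):
    """Monte Carlo dropout: average stochastic forward passes at test time."""
    model.train()  # keeps dropout active during inference
    with torch.no_grad():
        preds = torch.stack([model(x).softmax(dim=-1) for _ in range(num_samples)])
    return preds.mean(dim=0), preds.var(dim=0)  # predictive mean and variance
```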

X. Attention

  1. Attention and Augmented Recurrent Neural Networks (blog post)
  2. Neural Machine Translation by Jointly Learning to Align and Translate
  3. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
  4. Effective Approaches to Attention-based Neural Machine Translation
  5. Attention Is All You Need
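
The core operation of paper 5 is scaled dot-product attention. A minimal PyTorch sketch:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = scores.softmax(dim=-1)  # one distribution over keys per query
    return weights @ v, weights
```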

XI. Efficient Neural Networks

  1. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
  2. Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
  3. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
  4. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
  5. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
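
The building block behind MobileNets (and, in grouped form, ShuffleNet) is the depthwise separable convolution. A minimal PyTorch sketch:

```python
import torch.nn as nn

# Depthwise separable convolution: a per-channel spatial conv followed by a
# 1x1 conv that mixes channels. For 3x3 kernels this cuts multiply-adds
# roughly 8-9x versus a standard convolution.
def depthwise_separable(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                  padding=1, groups=in_ch, bias=False),    # depthwise: groups=in_ch
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),  # pointwise 1x1
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```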
