Repository for the course "Optimizing Artificial Intelligence" at IMT Atlantique

Course organisation / Syllabus

Here is a detailed schedule, session by session:

  1. Introduction / Refresher on Deep Learning

    1. General Intro - Why do we need to optimize deep learning? Introduction of the MicroNet Challenge.
    2. Course - Deep Learning and Transfer Learning.
    3. Practical session - introduction to PyTorch, transfer learning.
    4. Short project - exploring hyperparameters on a fixed architecture
  2. Quantization

    1. Short evaluation on Deep Learning Essentials
    2. Students' presentations of the short project - exploring hyperparameters on a fixed architecture
    3. Course - Quantizing deep neural networks
    4. Practical session - quantization on a small convolutional network
    5. Long project 1 - MicroNet Challenge
  3. Pruning

    1. Short evaluation on Quantization
    2. Course - Pruning deep neural networks
    3. Practical session - pruning on a small convolutional network.
    4. Long project 2 - MicroNet Challenge
  4. Factorization

    1. Short evaluation on Pruning
    2. Students' presentations of current work on MicroNet
    3. Course - Factorizing deep neural networks
    4. Practical session - factorizing a small convolutional network
    5. Long Project 3 - MicroNet Challenge
  5. Factorization - Part 2 - Operators and Architectures

    1. Course - Factorization Pt2, alternative operators and efficient architectures
    2. Long Project 4 - MicroNet Challenge
  6. Distillation

    1. Short evaluation on Factorization Parts 1 and 2 and previous courses
    2. Course - Distillation of knowledge and features between neural networks
    3. Long Project 5 - MicroNet Challenge
  7. Embedded Software and Hardware for Deep Learning

    1. Short evaluation on Distillation
    2. Course - Embedded Software and Hardware for Deep Learning
    3. Long Project 6 - MicroNet Challenge
  8. Final Session

    1. Short evaluation on embedded software and hardware for Deep Learning
    2. Long Project 7 - MicroNet Challenge
    3. Students' presentations - final results on MicroNet

Evaluation in this course

There are short written evaluations during the first 10 minutes of each session starting from session 2. Don't be late!

For the final session, we ask you to prepare a 20-minute presentation, followed by 10 minutes of questions.

In the micronet-ressources folder, you'll find presentations from the 2019 winners and the rules for the 2020 challenge.

General References

List of references IMT Atlantique and AI

Amazon Book - Dive into Deep learning

Tutorial presentation on Efficient Deep Learning from NeurIPS'19

Training Deep Networks

Here are some academic papers discussing learning rate strategies:

The main strategies are readily available in PyTorch.
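To make two of the most common strategies concrete, here is a minimal standard-library sketch of step decay and cosine annealing (the function names and defaults below are ours, not the course's code); PyTorch provides these as `torch.optim.lr_scheduler.StepLR` and `CosineAnnealingLR`:

```python
import math

def step_decay(lr0, epoch, drop=0.1, every=30):
    """Multiply the base learning rate by `drop` every `every` epochs."""
    return lr0 * drop ** (epoch // every)

def cosine_annealing(lr0, epoch, total_epochs, lr_min=0.0):
    """Decay smoothly from lr0 to lr_min over total_epochs (half a cosine period)."""
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))
```

A linear warmup over the first few epochs is a common refinement on top of either schedule.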

PyTorch

Start page to access the full Python API of PyTorch, to check all existing functions.

A useful tutorial on Saving and Loading models.

PyTorch Cheat Sheet.

Data Augmentation

Popular methods:

Cut Out

Auto Augment

Other resources:

A list of papers and code for data augmentation

IMGAUG and a Colab notebook showing how to use IMGAUG with PyTorch

A popular Python package in Kaggle competitions: Albumentations
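Cut Out is simple enough to sketch in a few lines of NumPy (the function name and signature below are ours, not the paper's code): a square patch centred on a random pixel is zeroed, and may be clipped at the image border, as in the original method.

```python
import numpy as np

def cutout(image, size, rng=None):
    """Zero a random square patch of side `size` in an H x W x C image."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = int(rng.integers(h)), int(rng.integers(w))  # random patch centre
    y0, y1 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x0, x1 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = image.copy()                                   # leave the input untouched
    out[y0:y1, x0:x1] = 0
    return out
```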

Quantization

Binary Connect

XnorNet

BNN+

Whitepaper on quantization
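The references above cover several schemes; as a baseline, here is a sketch of per-tensor symmetric uniform quantization, the simple scheme discussed in the whitepaper (helper names are our own):

```python
import numpy as np

def quantize_symmetric(w, num_bits=8):
    """Uniform symmetric quantization of a weight tensor to signed integers.
    Returns the integer codes and the scale needed to dequantize them."""
    qmax = 2 ** (num_bits - 1) - 1        # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax        # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

w = np.array([-0.51, 0.02, 0.49, 0.25])
q, scale = quantize_symmetric(w)
w_hat = q * scale                         # dequantized approximation of w
```

Binary Connect and XnorNet push this to the extreme of a single bit, keeping only the sign of each weight (times a scaling factor in XnorNet).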

Pruning

Pruning Filters for Efficient ConvNets

ThiNet

AutoML for Model Compression (AMC)

Pruning Channel with Attention Statistics (PCAS)

BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization
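As a concrete example of the simplest of these ideas, here is a sketch of the L1-norm filter ranking from "Pruning Filters for Efficient ConvNets" (the function name and NumPy weight layout are our own assumptions):

```python
import numpy as np

def prune_filters_l1(conv_weight, keep_ratio):
    """Rank the filters of a conv layer by L1 norm and keep the largest ones.
    `conv_weight` has shape (out_channels, in_channels, k, k)."""
    n_out = conv_weight.shape[0]
    n_keep = max(1, int(round(n_out * keep_ratio)))
    norms = np.abs(conv_weight).reshape(n_out, -1).sum(axis=1)  # L1 norm per filter
    keep = np.sort(np.argsort(norms)[-n_keep:])                 # indices of filters kept
    return conv_weight[keep], keep
```

Removing a filter also removes the corresponding input channel of the next layer, which is where the actual compute savings come from.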

Factorization and operators

Deep Compression

Deep K-means

SqueezeNet

MobileNet

MobileNetV2

Shift Attention Layers
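A quick way to see why MobileNet-style factorization helps is to count parameters: a standard k x k convolution is replaced by a depthwise k x k convolution (one filter per input channel) followed by a 1 x 1 pointwise convolution. A small sketch, biases ignored:

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv followed by a 1 x 1 pointwise conv (MobileNet block)."""
    return c_in * k * k + c_in * c_out

# Typical mid-network layer: 3x3 conv, 256 -> 256 channels
standard = conv_params(256, 256, 3)                  # 589824 weights
separable = depthwise_separable_params(256, 256, 3)  # 67840 weights, ~8.7x fewer
```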

Distillation

Distilling the knowledge in a neural network

Fitnets: Hints for thin deep nets

LIT: Learned Intermediate Representation Training for Model Compression

A Comprehensive Overhaul of Feature Distillation

And the bit goes down: Revisiting the quantization of neural networks
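To fix ideas, here is a NumPy sketch of the classic soft-target loss from "Distilling the knowledge in a neural network": the student is trained to match the teacher's softened output distribution at temperature T, with the usual T^2 factor that keeps gradient magnitudes comparable across temperatures (function names are ours):

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax at temperature T, computed stably."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between teacher and student soft targets at temperature T."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(T ** 2) * (p_teacher * log_p_student).sum(axis=-1).mean()
```

In practice this term is combined with the ordinary cross-entropy on the hard labels, weighted by a mixing coefficient.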

Embedded Software and Hardware

See references section of Tutorial presentation on Efficient Deep Learning from NeurIPS'19.

Companies / private sector

The 13 highest-funded startups building hardware for deep learning

A more complete list of companies working on deep learning hardware
