
Data Science, Season 4: High-Dimensional Learning, Functional Data, Anomaly Detection, Introduction to Deep Learning



This repository is no longer maintained and has moved to plmlab.math.cnrs.fr/wikistat/High-Dimensional-Deep-Learning.



High Dimensional and Deep Learning

Presentation

The main theme of the course is learning methods, especially deep neural networks, for processing high dimensional data, such as signals or images. We will cover the following topics:

  • Neural networks and introduction to deep learning: definition of neural networks, activation functions, multilayer perceptron, backpropagation algorithms, optimization algorithms, regularization.
    Application: implementation of an MLP with one hidden layer in NumPy

  • Convolutional neural networks: convolutional layers, pooling, dropout, convolutional network architectures (ResNet, Inception), transfer learning and fine-tuning, applications to image and signal classification, object localization and detection.
    Application 1: image classification on MNIST and CatsVsDogs data with TensorFlow
    Application 2: object localization and detection with CNNs

  • Encoder-decoder, Variational auto-encoder, Generative adversarial networks

  • Functional decomposition on spline, Fourier, or wavelet bases: cubic splines, penalized least squares criterion, Fourier basis, wavelet bases, applications to nonparametric regression, linear estimators and nonlinear estimators by thresholding, links with the LASSO method.

  • Anomaly detection: One-Class SVM, Random Forest, Isolation Forest, Local Outlier Factor, with applications to anomaly detection in functional data.

  • Recurrent Neural Networks
    Application: sentiment analysis with recurrent neural networks
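As a preview of the first practical (a one-hidden-layer MLP in NumPy), here is a minimal sketch assuming sigmoid activations, a mean-squared-error loss, and full-batch gradient descent. The data (XOR), layer sizes, and learning rate are illustrative choices, not the course's:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, the classic non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 8 units.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5
losses = []

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((y_hat - y) ** 2)))
    # Backward pass: chain rule on the MSE loss through both layers.
    d_out = (y_hat - y) * y_hat * (1 - y_hat)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print("loss:", losses[0], "->", losses[-1])
```

The course notebooks use the same backpropagation mechanics, then move to Keras/TensorFlow where these gradients are computed automatically.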


Installing required packages

The file environment.yml contains a list of all the packages you need to run the notebooks in this repository. To install them, run the following command in your terminal:

conda env create -f environment.yml

Then, activate the environment:

conda activate hddl

or, with older versions of conda:

source activate hddl

Organisation

  • Lectures: 9 hours.

  • Practical work: 28 hours of applications on real data sets with the Python libraries scikit-learn and Keras/TensorFlow.
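Several of the anomaly-detection methods covered in the course (Isolation Forest in particular) are available directly in scikit-learn. A hedged sketch on synthetic tabular data — the sizes, contamination rate, and data below are illustrative, not from the course material:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 200 "normal" observations near the origin, plus 5 obvious outliers.
normal = rng.normal(0.0, 1.0, (200, 10))
outliers = rng.normal(8.0, 1.0, (5, 10))
X = np.vstack([normal, outliers])

# contamination is the expected fraction of anomalies (5 / 205 here).
clf = IsolationForest(n_estimators=100, contamination=5 / 205, random_state=0)
labels = clf.fit(X).predict(X)  # +1 = inlier, -1 = anomaly
print("flagged:", int((labels == -1).sum()))
```

One-Class SVM and Local Outlier Factor follow the same fit/predict pattern in scikit-learn, which makes it easy to compare the methods on the same data.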

Evaluation

  • Written exam (50%)

  • Project (oral presentation 25% + notebook 25%)
    The main objective of this project is to apply the knowledge you acquired during this course by:

    • Selecting a deep learning algorithm you haven't seen in this course.
    • Explaining how this algorithm works, both in a notebook and in an oral presentation. The notebook must explain in detail the method's principles and the experimental procedure.
    • Applying this algorithm to a different dataset and discussing the obtained results (notebook and oral presentation).

You can choose a deep learning algorithm from the following list.
The list is not exhaustive, and you may suggest other algorithms (that is actually a good idea).
Also, the code linked in these examples is not necessarily the official implementation, nor the one provided by the authors.

Please register in the following document


Example of algorithms

  • Detection & segmentation

    • Focal Loss for Dense Object Detection paper, code
    • Mask R-CNN paper, code
    • EfficientDet: Scalable and Efficient Object Detection paper, code
  • One shot learning

  • Style Transfer

  • Generative model

    • Pixel Cnn/++ paper, code
    • Importance Weighted Autoencoders paper, code
    • GAN variations paper, code
    • NetGAN without GAN: From Random Walks to Low-Rank Approximations paper, code
    • Denoising Diffusion Probabilistic Model paper, code
  • Unsupervised learning:

    • Supervised Contrastive Learning paper, code
    • Bootstrap your own latent: A new approach to self-supervised Learning paper, code
    • A Simple Framework for Contrastive Learning of Visual Representations paper, code
    • Barlow Twins: Self-Supervised Learning via Redundancy Reduction paper, code
    • Exploring Simple Siamese Representation Learning paper, code
    • Unsupervised Representation Learning by Predicting Image Rotations paper, code
    • Self-supervised Label Augmentation via Input Transformations paper, code
  • Fairness

    • Achieving Equalized Odds by Resampling Sensitive Attributes paper, code
  • Domain adaptation/generalisation:

    • Domain Generalization by Solving Jigsaw Puzzles paper, code
    • Domain-Adversarial Training of Neural Networks paper, code
  • Regularization:

    • mixup: Beyond Empirical Risk Minimization paper, code
    • Unsupervised Data Augmentation for Consistency Training paper, code
    • FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper, code
    • Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity paper, code
  • Time series

    • GP-VAE: Deep Probabilistic Time Series Imputation paper, code
    • N-BEATS: Neural basis expansion analysis for interpretable time series forecasting paper, code
  • Others:
    • Self-training with Noisy Student improves ImageNet classification paper, code
    • Patches Are All You Need? paper, code
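To give a sense of scale, one of the simplest entries above, mixup, boils down to a few lines: each training pair is replaced by a convex combination of two inputs and of their one-hot labels, with the mixing weight drawn from a Beta distribution. A NumPy sketch with hypothetical names and toy data, not the authors' code:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Return a mixed batch (x', y') with lambda ~ Beta(alpha, alpha)."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))          # random partner for each sample
    x_mix = lam * x + (1 - lam) * x[perm]   # convex combination of inputs
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # and of labels
    return x_mix, y_mix

x = np.eye(4)  # 4 toy inputs
y = np.eye(4)  # matching one-hot labels
x_mix, y_mix = mixup_batch(x, y, rng=np.random.default_rng(0))
```

Most other entries in the list are substantially larger; part of the project is precisely to judge how much of the method you can re-implement versus reuse.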
