
LDU: Latent Discriminant deterministic Uncertainty

PyTorch implementation of Latent Discriminant deterministic Uncertainty (ECCV 2022).
Paper: https://arxiv.org/abs/2207.10130

Abstract

In this work, we advance a scalable and effective Deterministic Uncertainty Method (DUM) that relaxes the Lipschitz constraint which typically hinders the practicality of such architectures. We learn a discriminant latent space by leveraging a distinction maximization (DM) layer over an arbitrarily sized set of trainable prototypes.

Overview of LDU: the DNN learns a discriminative latent space thanks to learnable prototypes. The DNN backbone computes a feature vector z for an input x, and the DM layer then matches it against the prototypes. The computed similarities, which reflect the position of z in the learned feature space, are subsequently processed by the classification layer and the uncertainty estimation layer. The dashed arrows point to the loss functions optimized when training LDU.


For more details, please refer to our paper.
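
To make the pipeline concrete, below is a minimal PyTorch sketch of a DM layer and the two heads on top of it. The class names (DMLayer, LDUHead), the use of cosine similarity, and the single-logit uncertainty head are illustrative assumptions for this sketch, not the repository's exact implementation; the actual training also optimizes the additional loss terms indicated by the dashed arrows in the figure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DMLayer(nn.Module):
    # Distinction maximization layer: holds a set of trainable prototypes
    # and returns, for each input feature vector, its similarity to every
    # prototype.
    def __init__(self, feat_dim, num_prototypes):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feat_dim))

    def forward(self, z):
        # Cosine similarity between features and prototypes; the choice of
        # similarity measure is an assumption of this sketch.
        z = F.normalize(z, dim=-1)
        p = F.normalize(self.prototypes, dim=-1)
        return z @ p.t()  # shape: (batch, num_prototypes)

class LDUHead(nn.Module):
    # Head placed on top of a backbone: the DM layer, then a classification
    # layer and an uncertainty estimation layer over the similarities.
    def __init__(self, feat_dim, num_prototypes, num_classes):
        super().__init__()
        self.dm = DMLayer(feat_dim, num_prototypes)
        self.classifier = nn.Linear(num_prototypes, num_classes)
        self.uncertainty = nn.Linear(num_prototypes, 1)

    def forward(self, z):
        sims = self.dm(z)               # position of z relative to the prototypes
        logits = self.classifier(sims)  # task prediction
        unc = torch.sigmoid(self.uncertainty(sims))  # scalar uncertainty estimate
        return logits, unc
```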

Note

We currently provide the code only for the toy example, classification, and monocular depth estimation.
The semantic segmentation part will be released in the near future.

Experiment

Toy example

We provide a toy example illustrating LDU on the two-moons dataset.

Open In Colab
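
As a rough illustration of how such a head could be trained on two moons, here is a hedged sketch reusing the hypothetical DMLayer/LDUHead classes from above; the actual notebook differs in its architecture and loss terms.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.datasets import make_moons

# Two-moons data (scikit-learn).
X, y = make_moons(n_samples=1000, noise=0.1, random_state=0)
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.long)

# Small MLP backbone producing the latent feature z.
backbone = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU())
head = LDUHead(feat_dim=64, num_prototypes=8, num_classes=2)  # sketch from above

opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()),
                       lr=1e-3)
for _ in range(500):
    logits, unc = head(backbone(X))
    # Classification loss only; LDU additionally optimizes uncertainty- and
    # prototype-related losses (the dashed arrows in the figure).
    loss = F.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```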

Monocular depth estimation example

The folder monocular_depth_estimation/ contains the code and instructions for applying LDU to the monocular depth estimation task. See monocular_depth_estimation/README.md for details.

TODO

  • Add classification code

Citation

If you find this work useful for your research, please consider citing our paper:

@article{franchi2022latent,
  title={Latent Discriminant deterministic Uncertainty},
  author={Franchi, Gianni and Yu, Xuanlong and Bursuc, Andrei and Aldea, Emanuel and Dubuisson, Severine and Filliat, David},
  journal={arXiv preprint arXiv:2207.10130},
  year={2022}
}