Model Compression Techniques in Biometrics Applications: A Survey

Official repository for the paper "Model Compression Techniques in Biometrics Applications: A Survey".

Abstract

The development of deep learning algorithms has greatly expanded our capacity to automate tasks. However, the performance gains of these models are strongly tied to their growing complexity, which limits their usefulness in human-oriented applications, usually deployed on resource-constrained devices. This has motivated the development of compression techniques that drastically reduce the computational and memory costs of deep learning models without significant performance degradation. This paper systematizes the current literature on this topic by presenting a comprehensive survey of model compression techniques in biometrics applications, namely quantization, knowledge distillation, and pruning. We conduct a critical analysis of the comparative value of these techniques, focusing on their advantages and disadvantages, and present suggestions for future work directions that can potentially improve the current methods. Additionally, we discuss and analyze the link between model bias and model compression, highlighting the need to direct compression research toward model fairness in future works.

Quantization
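To make the idea concrete, here is a minimal post-training uniform quantization sketch in NumPy. This is a generic 8-bit affine scheme for illustration only, not the method of any specific paper surveyed; the function names are our own.

```python
import numpy as np

def quantize_uint8(weights: np.ndarray):
    """Uniformly map float weights onto the uint8 range [0, 255].

    Returns the quantized tensor plus the (scale, zero point) needed
    to approximately recover the original values.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    span = w_max - w_min
    scale = span / 255.0 if span > 0 else 1.0  # guard against constant tensors
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize(q: np.ndarray, scale: float, w_min: float) -> np.ndarray:
    """Recover approximate float weights from the 8-bit representation."""
    return q.astype(np.float32) * scale + w_min

# Toy example: 4 float32 weights stored in a quarter of the memory.
w = np.array([-0.8, 0.0, 0.35, 1.2], dtype=np.float32)
q, scale, zero = quantize_uint8(w)
w_hat = dequantize(q, scale, zero)  # each entry within scale/2 of the original
```

The rounding error per weight is bounded by half the quantization step, which is why 8-bit schemes often preserve accuracy with little or no fine-tuning.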

Knowledge Distillation
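For illustration, a minimal sketch of the classic response-based distillation loss (cross-entropy between the teacher's and the student's temperature-softened softmax outputs, in the style of Hinton et al.). The temperature value is an arbitrary choice for the example, not a recommendation from the survey.

```python
import numpy as np

def softmax(logits: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Temperature-scaled softmax; higher T gives a softer distribution."""
    z = logits / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      T: float = 4.0) -> float:
    """Cross-entropy between softened teacher and student distributions.

    The T*T factor keeps gradient magnitudes comparable across temperatures,
    as in the original soft-label distillation formulation.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum() * T * T)
```

In practice this soft-label term is combined with the usual hard-label cross-entropy on ground truth, weighted by a mixing coefficient.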

Pruning

  • Biometrics

    • Channel-level acceleration of deep face representations
    • Graph-based dynamic ensemble pruning for facial expression recognition
    • Discrimination-aware network pruning for deep model compression
    • SqueezerFaceNet: Reducing a small face recognition CNN even more via filter pruning
    • IPAD: Iterative pruning with activation deviation for sclera biometrics
  • Computer Vision

    • Pruning filters for efficient ConvNets
    • To prune, or not to prune: exploring the efficacy of pruning for model compression
    • The lottery ticket hypothesis: Finding sparse, trainable neural networks
    • NISP: Pruning networks using neuron importance score propagation
    • SNIP: Single-shot network pruning based on connection sensitivity
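As a minimal illustration of the magnitude criterion from "Pruning filters for efficient ConvNets", the following sketch ranks a convolutional layer's filters by L1 norm and drops the weakest ones. The weight layout `(out_channels, in_channels, k, k)` and the helper name are assumptions made for this example.

```python
import numpy as np

def prune_filters(conv_weights: np.ndarray, ratio: float):
    """Remove the fraction `ratio` of filters with the smallest L1 norm.

    conv_weights has shape (out_channels, in_channels, k, k); each filter's
    importance is scored as the sum of the absolute values of its weights.
    Returns the pruned tensor and the (sorted) indices of the kept filters,
    so downstream layers can drop the matching input channels.
    """
    norms = np.abs(conv_weights).sum(axis=(1, 2, 3))   # one L1 score per filter
    n_out = conv_weights.shape[0]
    n_keep = n_out - int(ratio * n_out)
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])   # strongest filters, in order
    return conv_weights[keep], keep
```

Because whole filters are removed, the result is a smaller dense layer rather than a sparse one, so the speedup is realized on standard hardware without sparse kernels; in practice a short fine-tuning pass usually follows to recover accuracy.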

Compression-induced Bias

Citation

If you use our work in your research, please cite it as follows:

@article{caldeira2024model,
  title={Model Compression Techniques in Biometrics Applications: A Survey},
  author={Caldeira, Eduarda and Neto, Pedro C and Huber, Marco and Damer, Naser and Sequeira, Ana F},
  journal={arXiv preprint arXiv:2401.10139},
  year={2024}
}
