[J. Imaging 2023] The official repository for paper CL3: Generalization of Contrastive Loss for Lifelong Learning J. Imaging 2023, 9(12), 259; https://doi.org/10.3390/jimaging9120259
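The CL3 paper builds on contrastive losses for lifelong learning. As a point of reference only (not the paper's generalized loss), a minimal pure-Python sketch of the standard InfoNCE contrastive objective, where an anchor embedding is pulled toward one positive and pushed away from negatives:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: negative log of the positive's softmax score among
    all candidates, with temperature-scaled cosine similarities."""
    sims = [cosine_sim(anchor, positive)] + [cosine_sim(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[0] / sum(exps))
```

The loss is small when the anchor is most similar to its positive and grows when a negative is closer; the CL3 work generalizes this family of objectives to the lifelong-learning setting.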
A comprehensive ablation study on the CIFAR-100 dataset using three deep learning architectures: Convolutional Neural Networks (CNN), Gated Multilayer Perceptrons (gMLP), and Vision Transformers (ViT). The project leverages PyTorch and PyTorch Lightning for model training and Optuna for hyperparameter tuning.
A lightweight and extensible toolbox for image classification
Official PyTorch Implementation for the "Distilling Datasets Into Less Than One Image" paper.
A coding-free framework built on PyTorch for reproducible deep learning studies. 🏆 25 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. 🎁 Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.
[AAAI 2023] Official PyTorch Code for "Curriculum Temperature for Knowledge Distillation"
Trained a multiclass classifier network on the CIFAR-100 dataset
Knowledge Distillation from VGG16 (teacher model) to MobileNet (student model)
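In teacher–student distillation setups like the VGG16-to-MobileNet one above, the student is typically trained on a weighted sum of a softened KL term against the teacher and a hard-label cross-entropy term. A minimal pure-Python sketch of this Hinton-style loss (an illustration of the general recipe, not this repository's code; the temperature and alpha defaults are arbitrary):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=4.0, alpha=0.7):
    """Weighted sum of KL(teacher || student) at temperature T
    (scaled by T^2) and cross-entropy against the hard label."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    soft_loss = (temperature ** 2) * kl
    hard_loss = -math.log(softmax(student_logits)[label])
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

The T² factor keeps the gradient magnitudes of the soft term comparable across temperatures; a student that matches the teacher's logits exactly zeroes the KL term and pays only the hard-label cost.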
Classifying the CIFAR-100 dataset using a multiclass SVM (MCSVM) and a deep convolutional network
sensAI: ConvNets Decomposition via Class Parallelism for Fast Inference on Live Data
PyTorch implementation of ViT (Dosovitskiy et al., 2020), trained on CIFAR-10 and CIFAR-100
This project explores diverse approaches to image classification on the CIFAR-100 dataset. Starting from traditional CNNs combined with KNN classifiers, it progresses to ResNet50 with FCNN and culminates in the cutting-edge Vision Transformer (ViT) model.
IJCAI 2024, InfoMatch: Entropy neural estimation for semi-supervised image classification
Models with variable output classes designed for CIFAR-100
Feather is a module that enables effective sparsification of neural networks during training. This repository accompanies the paper "Feather: An Elegant Solution to Effective DNN Sparsification" (BMVC2023).
This repository includes official implementation and model weights of Data-Efficient Multi-Scale Fusion Vision Transformer.
Two case studies: the effects of changing the learning rate on model performance for image classification, and cardiac failure prediction using clinical data
Implementation of BSC-DenseNet-121 in PyTorch from the research paper "Adding Binary Search Connections to Improve DenseNet Performance".
Official PyTorch Code for "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?" (https://arxiv.org/abs/2305.12954)
A small CNN for classifying the CIFAR-10 dataset.