A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility
A coding-free framework built on PyTorch for reproducible deep learning studies. 🏆 25 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. 🎁 Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.
PyTorch implementation of image classification models for CIFAR-10/CIFAR-100/MNIST/FashionMNIST/Kuzushiji-MNIST/ImageNet
Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!)
A playground for deep learning with CIFAR datasets
Pretrained TorchVision models on CIFAR10 dataset (with weights)
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
A PyTorch implementation of SimCLR based on ICML 2020 paper "A Simple Framework for Contrastive Learning of Visual Representations"
[ICCV 2019] "AutoGAN: Neural Architecture Search for Generative Adversarial Networks" by Xinyu Gong, Shiyu Chang, Yifan Jiang and Zhangyang Wang
3.41% and 17.11% error on CIFAR-10 and CIFAR-100
Pretrained models on CIFAR10/100 in PyTorch
Naszilla is a Python library for neural architecture search (NAS)
Non-negative Positive-Unlabeled (nnPU) and unbiased Positive-Unlabeled (uPU) learning reproduction code on MNIST and CIFAR10
Bottleneck Transformers for Visual Recognition
Speech commands recognition with PyTorch | Kaggle 10th place solution in TensorFlow Speech Recognition Challenge
Various CNN models for CIFAR10 with Chainer