Use AWS Rekognition to train custom models that you own.
Use LLaMA to label data for use in training a fine-tuned LLM.
Autodistill Google Cloud Vision module for use in training a custom, fine-tuned model.
Model distillation of CNNs for classification of seafood images in PyTorch
The Codebase for Causal Distillation for Task-Specific Models
Repository for the publication "AutoGraph: Predicting Lane Graphs from Traffic"
A framework for knowledge distillation using TensorRT inference on the teacher network
The Codebase for Causal Distillation for Language Models (NAACL '22)
Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188)
Matching Guided Distillation (ECCV 2020)
Mechanistically interpretable neurosymbolic AI (Nature Comput Sci 2024): losslessly compressing neural networks into computer code and discovering new algorithms that generalize out-of-distribution and outperform human-designed algorithms
🚀 PyTorch implementation of "Progressive Distillation for Fast Sampling of Diffusion Models" (v-diffusion)
Images to inference with no labeling (use foundation models to train supervised models; see the pseudo-labeling sketch below).
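Most of the repositories above are variations on the same core recipe, soft-label knowledge distillation from Hinton et al.'s "Distilling the Knowledge in a Neural Network": train a small student to match the temperature-softened output distribution of a large teacher. Below is a minimal sketch in PyTorch (the dominant framework in this list); the temperature and mixing weight are illustrative defaults, not values taken from any repository above.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Classic soft-label distillation loss (Hinton et al., 2015).

    Blends cross-entropy on the hard labels with a KL term that pulls
    the student's softened distribution toward the teacher's.
    """
    # Both distributions are smoothed by the same temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps the soft-target gradients on the same
    # scale as the hard-label gradients as the temperature changes.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

In a training loop the teacher runs in eval mode under `torch.no_grad()`, so only the student receives gradients; the repositories above differ mainly in what the student matches (logits, hidden states, attention relations, causal structure) rather than in this basic loss.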
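The last entry in the list describes the complementary direction: instead of matching a teacher's logits during training, use a large foundation model to pseudo-label raw images and then train an ordinary supervised student on the result (the idea behind the Autodistill entries). Here is a hedged sketch of that loop in plain PyTorch; `teacher`, the loader, and the confidence threshold are all assumptions for illustration, not the Autodistill API.

```python
import torch

@torch.no_grad()
def pseudo_label(teacher, unlabeled_loader, threshold=0.9, device="cpu"):
    """Label unlabeled images with a confident teacher's predictions.

    `teacher` is assumed to map image batches to class logits;
    `threshold` is an illustrative confidence cutoff.
    """
    teacher.eval()
    images, labels = [], []
    for batch in unlabeled_loader:  # loader yields image tensors only
        batch = batch.to(device)
        probs = teacher(batch).softmax(dim=-1)
        conf, pred = probs.max(dim=-1)
        keep = conf >= threshold  # drop low-confidence predictions
        images.append(batch[keep].cpu())
        labels.append(pred[keep].cpu())
    return torch.cat(images), torch.cat(labels)
```

The returned (image, label) pairs feed a standard supervised training loop for the small student, which is then the model you own and deploy.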