Sample code and guidelines for fine-tuning any open-source GPT model using #deepspeed and #huggingface
Updated Mar 31, 2023
This repository presents a gemstone classification project using transfer learning with MobileNetV2 on a dataset of 3,200+ images spanning 87 classes. TensorFlow and Keras were used for data preprocessing, augmentation, and model training, with fine-tuning building on the pre-trained features.
Explore the rich flavors of Indian desserts with TunedLlavaDelights. Using LLaVA fine-tuning, our project unveils detailed nutritional profiles, taste notes, and optimal consumption times for beloved sweets. Dive into a fusion of AI innovation and culinary tradition.
Trying to create a digital version of me via Stable Diffusion LoRA
Our GitHub repository advances the development of GPT for care assessment (Pflegebegutachtung), improving accuracy and efficiency in nursing care. It provides specialized datasets, benchmarking tools, and validation code for innovators in AI and care. Get involved to advance care assessment through technology.
A T5-base model fine-tuned for abstract-to-title generation (abstract2title)
AlgoTrading101 Warren Buffett Chatbot with ChatGPT and OpenAI Whisper
Code for reproducing the paper Improved Multilingual Language Model Pretraining for Social Media Text via Translation Pair Prediction to appear at The 7th Workshop on Noisy User-generated Text (W-NUT) organized at EMNLP 2021.
A human-computation-based dream interpreter using GPT-3
Elemental Planes Image Collection Enhancement: earn Sage Points for cleaning up images.
Classification of flowers by fine-tuning ResNet and Inception models, with image augmentation and random image erasing
Scalable Protein Language Model Finetuning with Distributed Learning and Advanced Training Techniques such as LoRA.
Image classification, image feature extraction, CNNs, fine-tuning, ResNet-18, Torchvision, multi-class logistic regression
This Midjourney prompt generator makes digital creators' lives easier by generating specific prompts for Midjourney, enabling them to produce more accurate and realistic images for their needs.
An open-source framework designed to adapt pre-trained large language models (LLMs), such as Llama, Mistral, and Mixtral, to a wide array of domains and languages.
A simple machine learning approach to convert grayscale images to color images. It uses a VGG-16 model, which we fine-tuned to make it accurate for grayscale-to-RGB conversion.
Fine-tuning is a cost-efficient way of preparing a model for specialized tasks: it reduces both the required training time and the amount of training data. Because open-source pre-trained models are available, we do not need to perform full training every time we create a model.
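The idea above can be sketched in a few lines: freeze a pre-trained backbone and train only a small new head on the target task. This is a minimal illustration, not code from any of the listed repositories; the "backbone" here is a fixed random projection standing in for real pre-trained weights, and the data and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained feature extractor (hypothetical weights).
# In fine-tuning, these weights stay frozen: they are never updated.
W_backbone = rng.normal(size=(8, 4))

def features(x):
    """Frozen forward pass through the 'pre-trained' backbone."""
    return np.tanh(x @ W_backbone)

# Tiny synthetic binary task, constructed so the frozen features
# already contain the signal (as pre-trained features often do).
X = rng.normal(size=(64, 8))
y = (features(X)[:, 0] > 0).astype(float)

# New task-specific head: a single logistic-regression layer.
# Only these few parameters are trained, which is what makes
# fine-tuning cheap compared with full training.
w_head = np.zeros(4)
b_head = 0.0
lr = 0.5

for _ in range(200):
    F = features(X)                                  # backbone is frozen
    p = 1.0 / (1.0 + np.exp(-(F @ w_head + b_head))) # sigmoid
    grad = p - y                                     # dBCE/dlogits
    w_head -= lr * (F.T @ grad) / len(y)             # update head only
    b_head -= lr * grad.mean()

p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head + b_head)))
acc = float(((p > 0.5) == (y > 0.5)).mean())
print(f"train accuracy: {acc:.2f}")
```

The same pattern appears in the frameworks the repos above use (e.g. freezing a Hugging Face model's base layers and training a classification head, or attaching low-rank LoRA adapters); only the scale and the libraries differ.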
Efficient fine-tuned large language model (LLM) for the task of sentiment analysis using the IMDB dataset.
Finetuning an LLM for structured data extraction from press releases