Low Tensor Rank adaptation of large language models
Updated Jun 9, 2024 - Python
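For context on this entry, the sketch below shows the general low-rank adaptation idea such repositories build on: freeze the pretrained weight and learn a small factorized update. This is a minimal, generic illustration (the class name `LowRankAdapter` and all hyperparameters are invented here), not this repository's specific low *tensor* rank method, which factorizes weights jointly across layers rather than per layer.

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update W + B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A is small and random, B starts at zero so training begins at W.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path plus the scaled low-rank update
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LowRankAdapter(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(2, 768))  # only A and B receive gradients
```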
This repository highlights the reasoning capabilities of ✨ Mistral / LLaMA-3 / Phi-3 / Gemma / Flan-T5 / GPT-4o ✨ for Targeted Sentiment Analysis of Russian mass-media texts and their English translations 📊
microsoft/Phi-3-vision-128k-instruct for Apple MLX
Train transformer-based models.
Code for paper "FreezeAsGuard: Mitigating Illegal Adaptation of Diffusion Models via Selective Tensor Freezing"
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
Run any open-source LLM, such as Llama 2 or Mistral, as an OpenAI-compatible API endpoint in the cloud.
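Because the endpoint is OpenAI-compatible, the standard `openai` Python client can talk to it by overriding `base_url`. A minimal sketch, where the server URL and model name are placeholder assumptions for your own deployment:

```python
from openai import OpenAI  # pip install openai

# Point the standard client at a self-hosted, OpenAI-compatible server.
# The base URL and model name below are placeholders.
client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mistral-7b-instruct",
    messages=[{"role": "user", "content": "Summarize LoRA in one sentence."}],
)
print(response.choices[0].message.content)
```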
A simple Product Recommendation System.
Unify Efficient Fine-Tuning of 100+ LLMs
Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
Identified adverse drug events (ADEs) and associated terms in an annotated corpus using Named Entity Recognition (NER) models built with Flair and PyTorch. Fine-tuned pre-trained transformers such as XLM-RoBERTa, SpanBERT, and Bio_ClinicalBERT, achieving F1 scores of 0.73 and 0.77 for the BIOES and BIO tagging schemes, respectively.
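A hedged sketch of how a Flair NER fine-tuning setup like this typically looks; the corpus paths, column layout, and hyperparameters below are assumptions, not the project's actual configuration:

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Hypothetical data folder and CoNLL-style column layout (token, ner tag).
corpus = ColumnCorpus("data/ade", {0: "text", 1: "ner"},
                      train_file="train.txt", dev_file="dev.txt",
                      test_file="test.txt")
label_dict = corpus.make_label_dictionary(label_type="ner")

# Fine-tune a pretrained transformer encoder (XLM-RoBERTa as one example).
embeddings = TransformerWordEmbeddings("xlm-roberta-base", fine_tune=True)
tagger = SequenceTagger(hidden_size=256, embeddings=embeddings,
                        tag_dictionary=label_dict, tag_type="ner")
ModelTrainer(tagger, corpus).fine_tune("models/ade-ner",
                                       learning_rate=5e-5, max_epochs=10)
```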
High-quality resources & applications for LLMs, multi-modal models, and vector databases
A curated list of channels and sources for learning about LLMs
LlamaIndex is a data framework for your LLM applications
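For context, LlamaIndex's basic usage pattern is: load documents, build a vector index over them, and query it. The data directory and question below are placeholders:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load local documents ("data/" is a placeholder path), index them, and query.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What does low-rank adaptation change in a model?"))
```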
Magick is a cutting-edge toolkit for a new kind of AI builder. Make Magick with us!
The open-source serverless GPU container runtime.
Easy multi-task learning with HuggingFace Datasets and Trainer
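One common recipe behind this kind of multi-task setup: cast each task into a shared text-to-text format with a task prefix, interleave the datasets, and train with a single Trainer. A sketch under those assumptions; the tasks, prefixes, and hyperparameters are illustrative, not this project's API:

```python
from datasets import load_dataset, interleave_datasets
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def make_encoder(prefix, label_names):
    # Turn a classification example into a text-to-text pair with a task prefix.
    def encode(ex):
        enc = tokenizer(prefix + ex["sentence"], truncation=True, max_length=256)
        enc["labels"] = tokenizer(text_target=label_names[ex["label"]],
                                  truncation=True, max_length=8)["input_ids"]
        return enc
    return encode

sst2 = load_dataset("glue", "sst2", split="train")
cola = load_dataset("glue", "cola", split="train")
sst2 = sst2.map(make_encoder("sst2 sentence: ", ["negative", "positive"]),
                remove_columns=sst2.column_names)
cola = cola.map(make_encoder("cola sentence: ", ["unacceptable", "acceptable"]),
                remove_columns=cola.column_names)

# Mix the two tasks into one training stream for a single Trainer run.
train_data = interleave_datasets([sst2, cola], seed=0)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mtl-t5", max_steps=500),
    train_dataset=train_data,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```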
RAG (Retrieval-Augmented Generation) framework by TrueFoundry for building modular, open-source, production-ready applications