Low-code framework for building custom LLMs, neural networks, and other AI models
SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Code examples and resources for DBRX, a large language model developed by Databricks
DLRover: An Automatic Distributed Deep Learning System
irresponsible innovation. Try now at https://chat.dev/
The official repo of Aquila2 series proposed by BAAI, including pretrained & chat large language models.
Tune LLM in few lines of code
Sequence Parallel Attention for Long Context LLM Model Training and Inference
SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.
Collection of best practices, reference architectures, model training examples and utilities to train large models on AWS.
Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
Fast modular code to create and train cutting edge LLMs
A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
This project aims to build on representative prior research to evaluate SFT data across multiple dimensions and automatically filter SFT datasets.
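Several of the repositories above (e.g. the QLoRA instruct-tuning entry) are built around low-rank adapters. As a rough illustration of the core idea, here is a minimal pure-Python sketch of the LoRA merge step, W' = W + B·A, where a frozen weight matrix W is combined with a trained low-rank product. All names and values here are illustrative; real implementations use torch/peft, not plain lists.

```python
# Sketch of the LoRA weight merge: a frozen weight matrix W is left
# untouched during training; only the low-rank factors B (d x r) and
# A (r x d) are trained, and B @ A is added to W for inference.
# Pure-Python matrices for illustration only.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_merge(W, A, B, alpha=1.0):
    """Return W + alpha * (B @ A), the merged LoRA weight."""
    delta = matmul(B, A)
    return [[w + alpha * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Frozen 2x2 weight; rank-1 adapters: B is 2x1, A is 1x2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
merged = lora_merge(W, A, B)
# merged == [[1.5, 0.5], [1.0, 2.0]]
```

The point of the low-rank decomposition is that B and A together hold far fewer parameters than W, which is what makes fine-tuning on consumer hardware practical.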