- Jeju, South Korea (UTC +09:00)
- https://www.linkedin.com/in/sanghwakim/
AI
Locally run an instruction-tuned, chat-style LLM
Making large AI models cheaper, faster and more accessible
Stable Diffusion web UI
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
An open-source framework for training large multimodal models.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
fast-stable-diffusion + DreamBooth
Implementation of DreamBooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
[ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
An unofficial OpenAI ChatGPT integration for Visual Studio Code
GPT RStudio addins that enable GPT-assisted coding, writing, and analysis
Talk to ChatGPT with your voice and hear its answers spoken aloud
The fastai book, published as Jupyter Notebooks
A Deep Learning Approach for Password Guessing (https://arxiv.org/abs/1709.00440)
Official repo for consistency models.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
⚡LLM Zoo is a project that provides data, models, and an evaluation benchmark for large language models.⚡
Simple UI for LLM fine-tuning
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
Instruct-tune LLaMA on consumer hardware
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
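
Several of the entries above converge on one technique: parameter-efficient fine-tuning with LoRA (lora, alpaca-lora, 🤗 PEFT), usually on top of a quantized base model so it fits on consumer hardware. Below is a minimal sketch of that pattern with the PEFT library; it is not any of these repos' official script, and the model ID and target_modules names are assumptions for a LLaMA-style model.

```python
# Minimal sketch: attach LoRA adapters to an Int8-quantized causal LM
# with Hugging Face PEFT (the pattern alpaca-lora popularized).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import (
    LoraConfig,
    TaskType,
    get_peft_model,
    prepare_model_for_int8_training,
)

model_name = "huggyllama/llama-7b"  # placeholder; any causal LM on the Hub works

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,          # Int8 weights via bitsandbytes
    torch_dtype=torch.float16,
    device_map="auto",
)
model = prepare_model_for_int8_training(model)  # stabilize norms, enable input grads

# Freeze the base weights and inject small trainable low-rank matrices
# into the attention projections; only these adapters are trained.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMA-style attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# From here, train with the usual transformers Trainer on an instruction
# dataset; checkpoints contain only the small adapter weights.
```

This is the reason a 7B model can be instruction-tuned on a single consumer GPU: the frozen base model sits in Int8 while gradients flow only through the low-rank adapter matrices.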