- 42dot (@42dot, @hkmc-airlab)
- Pangyo, Gyeonggi, Republic of Korea
- https://docs.ykstyle.info
Stars
A pipeline for LLM knowledge distillation
[ACL 2024 Findings] Deep Exploration of Cross-Lingual Zero-Shot Generalization in Instruction Tuning
Generate textbook-quality synthetic LLM pretraining data
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
A sentiment analysis dataset of Korean financial news sentences labeled as positive or negative (finance sentiment corpus).
RL algorithm: Advantage induced policy alignment
Korea Investment & Securities Open API GitHub: https://apiportal.koreainvestment.com
A Gradio web UI for Large Language Models with support for multiple inference backends.
Example models using DeepSpeed
Self-Alignment with Principle-Following Reward Models
Fast and memory-efficient exact attention
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using Llama models.
42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to user prompts and supports both Korean and English simultaneously.
ollmer / mmlu (forked from hendrycks/test)
Measuring Massive Multitask Language Understanding | ICLR 2021
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
High-Resolution Image Synthesis with Latent Diffusion Models
Kill Zscaler without a password, or jail Zscaler in a virtual machine.
KakaoBrain KoGPT (Korean Generative Pre-trained Transformer)
A framework for few-shot evaluation of language models.