This repository is a collection of useful links related to datasets for Large Language Models (LLMs) and Reinforcement Learning from Human Feedback (RLHF).
It includes open datasets as well as tools, pre-trained models, and research papers that help researchers and developers work with LLMs and RLHF from a data perspective.
Follow and star the repository for the latest links on datasets for LLMs and RLHF.
A 1.2-trillion-token dataset in English:
| Dataset | Token Count |
|---|---|
| CommonCrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| Books | 26 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| **Total** | **1.2 Trillion** |
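As a quick sanity check, the per-source counts sum to roughly the advertised 1.2 trillion tokens (a minimal Python sketch; the dictionary simply mirrors the table rows):

```python
# Per-source token counts in billions, taken from the table above.
token_counts_b = {
    "CommonCrawl": 878,
    "C4": 175,
    "GitHub": 59,
    "Books": 26,
    "ArXiv": 28,
    "Wikipedia": 24,
    "StackExchange": 20,
}

total_b = sum(token_counts_b.values())
print(f"{total_b} billion tokens (~{total_b / 1000:.2f} trillion)")  # 1210 billion tokens (~1.21 trillion)
```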
The repository also includes code for data preparation, deduplication, tokenization, and visualization.
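To illustrate what a deduplication step can look like, here is a minimal sketch of exact-match dedup over normalized text; the function name and normalization choices are our assumptions, not the repository's actual implementation, and production pipelines typically layer fuzzy (e.g. MinHash-based) dedup on top:

```python
import hashlib

def dedup_exact(docs):
    """Drop exact duplicate documents by hashing normalized text."""
    seen = set()
    unique = []
    for doc in docs:
        # Normalize lightly (trim + lowercase) before hashing so that
        # trivially different copies of the same text collapse together.
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Hello world.", "hello world.", "A different document."]
print(dedup_exact(docs))  # ['Hello world.', 'A different document.']
```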
Created by Ontocord.ai, MILA Québec AI Institute, ETH DS3Lab, Université de Montréal, Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION.
Overview: A collection of open source foundation models ranging in size from 7B to 65B parameters released by Meta AI.
License: Non-commercial bespoke (model), GPL-3.0 (code)
📝 Release blog post
📄 arXiv publication
📄 Model card
Overview: A 13B parameter open source chatbot model, fine-tuned from LLaMA on ~70k ChatGPT conversations, that reaches about 92% of ChatGPT's performance and outperforms LLaMA and Alpaca.
License: Non-commercial bespoke license (model), Apache 2.0 (code).
📦 Repo
📝 Release blog post
📄 ShareGPT dataset
🤗 Models
🤗 Gradio demo
Overview: A fully open source 12B parameter instruction-following LLM, fine-tuned on a human-generated instruction dataset licensed for research and commercial use.
License: CC BY-SA 3.0 (model), CC BY-SA 3.0 (dataset), Apache 2.0 (code).
📦 Repo
📝 Release blog post
🤗 Models
Overview: A multi-modal LLM that combines a vision encoder and Vicuna for general-purpose visual and language understanding, with capabilities similar to GPT-4.
License: Non-commercial bespoke (model), CC BY NC 4.0 (dataset), Apache 2.0 (code).
📦 Repo
🌐 Project homepage
📄 arXiv publication
🤗 Dataset & models
🤗 Gradio demo
Overview: A suite of small (3B and 7B parameter) LLMs trained on a new 1.5-trillion-token dataset built on The Pile.
License: CC BY-SA-4.0 (models).
📦 Repo
📝 Release blog post
🤗 Models
🤗 Gradio demo
Overview: A partially open source instruction-following model, fine-tuned from LLaMA, that is smaller and cheaper to train than GPT-3.5 while performing comparably.
License: Non-commercial bespoke (model), CC BY-NC 4.0 (dataset), Apache 2.0 (code).
📝 Release blog post
🤗 Dataset