The official GitHub page for the survey paper "A Survey of Large Language Models".
PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech); reproduced demo: https://lifeiteng.github.io/valle/index.html
An open-source framework for training large multimodal models.
Painter & SegGPT Series: Vision Foundation Models from BAAI
Emu Series: Generative Multimodal Models from BAAI
[ICLR 2023] Code for the paper "Binding Language Models in Symbolic Languages"
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning.
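The in-context learning setup that frameworks like OpenICL automate can be sketched in plain Python: a few labeled demonstrations are concatenated into the prompt, followed by the query to be answered. This is an illustrative sketch with hypothetical example data, not OpenICL's actual API.

```python
def build_icl_prompt(demonstrations, query, instruction=""):
    """Assemble a few-shot prompt from (input, label) demonstration pairs.

    This mirrors the generic in-context learning recipe: an optional task
    instruction, then demonstrations, then the unlabeled query.
    (Hypothetical helper for illustration; not part of any library's API.)
    """
    parts = [instruction] if instruction else []
    for text, label in demonstrations:
        parts.append(f"Input: {text}\nLabel: {label}")
    # The query is appended with an empty label slot for the LLM to fill in.
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

# Hypothetical sentiment-classification demonstrations.
demos = [
    ("The movie was a delight.", "positive"),
    ("I want my money back.", "negative"),
]
prompt = build_icl_prompt(demos, "A thoroughly enjoyable read.",
                          instruction="Classify the sentiment.")
print(prompt)
```

The resulting string would then be sent to an LLM; retrieval-based demonstration selection (as studied in several of the papers listed here) varies which pairs go into `demonstrations` per query.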
Papers and Datasets on Instruction Tuning and Following. ✨✨✨
[NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner"
[ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners"
Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implicit Bayesian Inference"
🎁[ChatGPT4NLU] A Comparative Study on ChatGPT and Fine-tuned BERT
[ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”.
A curated list of awesome instruction tuning datasets, models, papers and repositories.
[ICCV 2023] Code for the paper "ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction"
[NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?"
Taking advantage of LlamaIndex's in-context learning paradigm, LlamaDoc lets users input PDF documents and ask questions about their content. The tool leverages LlamaIndex's reasoning capabilities to provide intelligent responses based on the LLM's contextual understanding.
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)