A cutting-edge system using the RAG approach to enhance document retrieval and question answering with pre-trained models.
A curated list of NLP resources focused on Transformer networks, attention mechanism, GPT, BERT, ChatGPT, LLMs, and transfer learning.
The Paper List on Data Contamination for Large Language Models Evaluation.
LingLong (玲珑): a small-scale Chinese pretrained language model
Must-read Papers on Knowledge Editing for Large Language Models.
Synthesizing programs to link visually-rich document entities. This is the replication code for the VRDSynth paper, accepted at ISSTA'24.
Question and answer generation (QAG) is a natural language processing (NLP) task that generates a question and its answer at the same time from context information. The input context can be represented in the form of structured information in a database or as raw text. The outputs of QAG systems can be directly applied to several NLP applications...
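As a minimal sketch of the input/output contract described above: a QAG backend maps raw context to question–answer pairs, and those pairs can be packaged as SQuAD-style records for downstream QA applications. The backend here is a toy placeholder standing in for a real pretrained seq2seq model; the record schema follows the common SQuAD convention but is an assumption, not taken from any specific repository above.

```python
from typing import Callable, Dict, List, Tuple

# A QAG backend maps raw context text to (question, answer) pairs.
# In practice this would wrap a pretrained seq2seq model; here it is
# a plain callable type so the record-building logic stays self-contained.
QAGBackend = Callable[[str], List[Tuple[str, str]]]

def build_qa_records(context: str, backend: QAGBackend) -> List[Dict]:
    """Convert QAG output into SQuAD-style records usable as QA training data."""
    records = []
    for question, answer in backend(context):
        start = context.find(answer)  # -1 if the answer is not an extractive span
        records.append({
            "context": context,
            "question": question,
            "answers": {"text": [answer], "answer_start": [start]},
        })
    return records

# Toy rule-based backend standing in for a pretrained QAG model:
def toy_backend(context: str) -> List[Tuple[str, str]]:
    first_sentence = context.split(".")[0]
    return [("What is stated first?", first_sentence)]
```

For example, `build_qa_records("BERT is a pretrained language model. It uses transformers.", toy_backend)` yields one record whose answer span starts at offset 0 of the context; swapping `toy_backend` for a model-backed generator leaves the downstream record format unchanged.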
Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding (Findings of EMNLP'23)
The official GitHub page for the survey paper "A Survey of Large Language Models".
Fine-tuning an encoder-decoder transformer (ViT-Base-Patch16-224-In21k and DistilGPT2) for image captioning on the COCO dataset
A Pre-trained Language Model for Semantic Similarity Measurement of Persian Informal Short Texts
[ICLR 2023] Multimodal Analogical Reasoning over Knowledge Graphs
RoBERTa Chinese pretrained models: RoBERTa for Chinese
SLS: Neural Information Retrieval (IR)-based Semantic Search model
An Open-Source Framework for Prompt-Learning.
An Open-source Knowledgeable Large Language Model Framework.
Official repository for NAACL'24 paper: TrojFSP: Trojan Insertion in Few-shot Prompt Tuning
An Empirical Evaluation of Pre-trained Large Language Models for Repairing Declarative Formal Specifications
Awesome papers on Language-Model-as-a-Service (LMaaS)
[ICLR 2024] Domain-Agnostic Molecular Generation with Chemical Feedback