A curated list of pretrained sentence and word embedding models
Foundation Architecture for (M)LLMs
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
Chinese Legal LLaMA (LLaMA for the Chinese legal domain)
A plug-and-play library for parameter-efficient-tuning (Delta Tuning)
Code associated with the Don't Stop Pretraining ACL 2020 paper
Live Training for Open-source Big Models
MWPToolkit is an open-source framework for math word problem (MWP) solvers.
Papers and Datasets on Instruction Tuning and Following. ✨✨✨
ACL'2023: DiffusionBERT: Improving Generative Masked Language Models with Diffusion Models
Implementation of "TransPolymer: a Transformer-based language model for polymer property predictions" in PyTorch
YAYI 2 is a new generation of open-source large language models developed by Wenge Research (中科闻歌), pretrained on over 2 trillion tokens of high-quality, multilingual corpora. (Repo for YaYi 2 Chinese LLMs)
BERT4ETH: A Pre-trained Transformer for Ethereum Fraud Detection (WWW23)
[KDD22] Official PyTorch implementation for "Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning".
[NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding
Translate natural language to SPARQL queries and vice versa
[NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
[WWW 2022] Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations
On Transferability of Prompt Tuning for Natural Language Processing
CEHR-BERT: Incorporating temporal information from structured EHR data to improve prediction tasks