Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
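Loading one of these whole-word-masking checkpoints is a one-liner with Hugging Face transformers. A minimal sketch, assuming the project's hfl/chinese-bert-wwm-ext checkpoint name:

```python
# Sketch: load a Chinese whole-word-masking BERT checkpoint via transformers.
# The model ID "hfl/chinese-bert-wwm-ext" is assumed; check the repo's README
# for the exact published names.
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-bert-wwm-ext")
model = BertModel.from_pretrained("hfl/chinese-bert-wwm-ext")

inputs = tokenizer("使用整词掩码预训练的中文BERT", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```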
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
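For context, a minimal notebook sketch of BertViz's head_view on a vanilla BERT model; the model name and sample sentence are illustrative:

```python
# Sketch: visualize per-head attention with BertViz (runs in a notebook).
from bertviz import head_view
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
attention = model(**inputs).attentions  # tuple: one (batch, heads, seq, seq) tensor per layer
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(attention, tokens)  # renders the interactive head view
```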
A Lite BERT for Self-Supervised Learning of Language Representations; large-scale Chinese pre-trained ALBERT models
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
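A minimal sketch of the loralib pattern: swap a dense layer for lora.Linear so the frozen weight W is augmented with a trainable low-rank update scaled by alpha/r; the layer sizes and rank here are illustrative:

```python
# Sketch of loralib usage; dimensions and rank are illustrative.
import torch.nn as nn
import loralib as lora

model = nn.Sequential(
    lora.Linear(768, 768, r=8, lora_alpha=16),  # forward: W x + (alpha/r) * B(A x)
    nn.ReLU(),
    nn.Linear(768, 2),  # plain layer, frozen by the call below
)
lora.mark_only_lora_as_trainable(model)  # freeze everything except the LoRA A/B matrices
```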
Chinese Language Understanding Evaluation (CLUE) benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
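A sketch of pulling one CLUE task through the datasets library; the hub dataset name "clue" and config "afqmc" are assumptions to verify against the benchmark's docs:

```python
# Sketch: load a CLUE task from the Hugging Face hub (names assumed).
from datasets import load_dataset

afqmc = load_dataset("clue", "afqmc")
print(afqmc["train"][0])  # sentence pair plus an integer label
```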
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
news-please - an integrated web crawler and information extractor for news that just works
RoBERTa for Chinese: Chinese pre-trained RoBERTa models
CLUENER2020: Chinese fine-grained named entity recognition
🏡 Fast and easy transfer learning for NLP: harvesting language models for industry, with a focus on question answering.
The implementation of DeBERTa
Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagging" (BEA-21)
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CASL project: http://casl-project.ai/
A PyTorch implementation of BiLSTM, BERT, and RoBERTa models (optionally with a CRF layer) for Named Entity Recognition, with a token-classification sketch below.
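As a point of reference, a sketch of the plain BERT token-classification half of such a model via transformers; the repo's BiLSTM and CRF layers are not reproduced, and num_labels=9 assumes a standard BIO tag set:

```python
# Sketch: token-level NER with a BERT encoder (BiLSTM/CRF layers omitted).
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=9)

inputs = tokenizer("George lives in Berlin", return_tensors="pt")
logits = model(**inputs).logits  # (1, seq_len, num_labels)
tags = logits.argmax(-1)         # greedy per-token tags; a CRF would decode the sequence jointly
```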
高质量中文预训练模型集合:最先进大模型、最快小模型、相似度专门模型
Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models.
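A short sketch of that workflow using Happy Transformer's word-prediction class; the class and argument names follow the library's documented interface but should be checked against its docs:

```python
# Sketch: masked-word prediction with Happy Transformer (API names assumed).
from happytransformer import HappyWordPrediction

happy_wp = HappyWordPrediction("BERT", "bert-base-uncased")
for result in happy_wp.predict_mask("I want to [MASK] a car", top_k=3):
    print(result.token, result.score)
```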
Models for neural summarization (extractive and abstractive) built on transformer models, plus a tool to convert abstractive summarization datasets to the extractive task.
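For the abstractive side, a minimal sketch using a generic transformers summarization pipeline; the BART checkpoint is illustrative and is not this repo's own tooling:

```python
# Sketch: abstractive summarization with a generic transformers pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = (
    "The city council approved a new transit plan on Tuesday. The plan adds "
    "three bus lines, extends service hours, and funds bike lanes downtown."
)
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```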