📄 Please refer to README.pdf for a detailed explanation
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- R-Drop: Regularized Dropout for Neural Networks
- EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks
- An Improved Baseline for Sentence-level Relation Extraction
- Enriching Pre-trained Language Model with Entity Information for Relation Classification
- Unified Pre-training for Program Understanding and Generation
- CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
- CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
- CodeBERT: A Pre-Trained Model for Programming and Natural Languages
- GraphCodeBERT: Pre-training Code Representations with Data Flow