# pretrained-language-models

Here are 14 public repositories matching this topic...

This study focuses on political sentiment analysis during Bangladeshi elections, using the "Motamot" dataset to evaluate how Pre-trained Language Models (PLMs) and Large Language Models (LLMs) capture complex sentiment characteristics. The research explores the effectiveness of various models and learning strategies in understanding public opinion.

  • Updated Aug 10, 2024
  • Jupyter Notebook
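
The entry above describes evaluating pre-trained language models on sentiment data. As a rough illustration only (not the repository's code), the sketch below shows the usual Hugging Face Transformers pattern for scoring text with a PLM classifier; the checkpoint name, example text, and two-label mapping are all assumptions.

```python
# A minimal sketch, assuming Hugging Face Transformers; not the Motamot code.
# Checkpoint, input text, and label mapping below are illustrative guesses.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"  # assumed multilingual PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.eval()

# The classification head is freshly initialized here; in practice it would
# first be fine-tuned on labeled sentiment examples before its predictions
# mean anything.
texts = ["An example election-related post goes here."]  # hypothetical input
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
preds = logits.argmax(dim=-1).tolist()  # assumed mapping: 0 = negative, 1 = positive
print(preds)
```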

Identified Adverse Drug Events (ADEs) and associated terms in an annotated corpus using Named Entity Recognition (NER) models built with Flair and PyTorch. Fine-tuned pre-trained transformer models such as XLM-RoBERTa, SpanBERT, and Bio_ClinicalBERT. Achieved F1 scores of 0.73 and 0.77 with the BIOES and BIO tagging schemes, respectively.

  • Updated Jun 19, 2024
  • Jupyter Notebook
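
The entry above combines Flair with fine-tuned transformer backbones for BIO/BIOES tagging. Below is a minimal sketch of the general Flair NER training workflow under assumed inputs: the `data/` folder, file names, column layout, and hyperparameters are hypothetical, and `xlm-roberta-base` stands in for any of the backbones named above; this is not the repository's code.

```python
# A minimal Flair NER training sketch; paths, column layout, and
# hyperparameters are assumptions, not the repository's settings.
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# hypothetical CoNLL-style corpus: token in column 0, BIO tag in column 1
columns = {0: "text", 1: "ner"}
corpus = ColumnCorpus("data/", columns,
                      train_file="train.txt",
                      dev_file="dev.txt",
                      test_file="test.txt")

tag_dictionary = corpus.make_label_dictionary(label_type="ner")

# any transformer backbone works here; the entry above used XLM-RoBERTa,
# SpanBERT, and Bio_ClinicalBERT
embeddings = TransformerWordEmbeddings("xlm-roberta-base", fine_tune=True)

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type="ner")

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune("models/ade-ner",  # output directory (hypothetical)
                  learning_rate=5e-5,
                  mini_batch_size=16,
                  max_epochs=10)
```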
