We explored recent studies in question answering systems, then tried out three different QA models (BERT and DistilBERT) for the sake of learning.
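As a companion to the QA models mentioned above, here is a minimal sketch of how BERT-style extractive QA picks an answer: the model emits a start logit and an end logit per token, and the answer is the highest-scoring (start, end) pair with start ≤ end. The tokens and logits below are invented illustrative numbers, not real model output.

```python
def best_span(start_logits, end_logits, max_len=15):
    """Return (start, end) token indices maximizing start+end score, start <= end."""
    best, best_score = (0, 0), float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

tokens = ["the", "model", "was", "released", "in", "2019"]
start_logits = [0.1, 0.2, 0.0, 0.3, 0.5, 4.0]  # fabricated logits
end_logits   = [0.0, 0.1, 0.2, 0.1, 0.4, 3.5]
s, e = best_span(start_logits, end_logits)
answer = " ".join(tokens[s:e + 1])  # -> "2019"
```

Real QA pipelines (e.g. Hugging Face's) add tricks like null-answer thresholds and subword-to-word alignment on top of this core argmax-over-spans step.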
Updated Jul 26, 2021 · Jupyter Notebook
A Deep Learning Based Voice Analytics toolkit
Sentiment Analysis of movie reviews
Multiclass classification on tweets about the coronavirus
A positive/negative sentiment model on cleaned text data, built with the pre-trained DistilBERT model from Hugging Face.
Successfully developed a fine-tuned DistilBERT transformer model that predicts the overall sentiment of a piece of financial news with nearly 81.5% accuracy.
Fine-tunes BERT on a question answering dataset, then further fine-tunes it on finance data to answer questions posed by senior leadership.
The official repository for the PSYCHIC model
DistilBERT and LSTM models can identify hate speech in various text sequences. In our project, we combined datasets to evaluate their performance on a validation set, achieving 93% accuracy with DistilBERT and 94% with the LSTM.
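The validation-set comparison described above comes down to scoring each model's predictions with plain accuracy. A tiny sketch, with fabricated stand-in labels rather than the project's actual data:

```python
def accuracy(preds, labels):
    """Fraction of predictions matching the gold labels."""
    assert len(preds) == len(labels) and labels
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# 1 = hate speech, 0 = not; these toy vectors are illustrative only
labels          = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
distilbert_pred = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]  # one error -> 0.9
lstm_pred       = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # one error -> 0.9
```

For imbalanced hate-speech data, precision/recall or F1 per class usually tells a fuller story than accuracy alone.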
This repository contains my work on the prevention and anonymization of dox content on Twitter, including Python code and a demo of the proposed solution.
This project involves analyzing and classifying the BoolQ dataset from the SuperGLUE benchmark. We implemented various classifiers and techniques, including rules-based logic, BERT, RNN, and GPT-3/4 data augmentation, achieving performance improvements.
Successfully fine-tuned a pretrained DistilBERT transformer model that classifies social media text into one of four cyberbullying labels (ethnicity/race, gender/sexual, religion, or not cyberbullying) with a remarkable accuracy of 99%.
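A DistilBERT classification head for a task like this ends in a 4-way linear layer; the predicted label is the argmax of its logits, with softmax giving a confidence score. A minimal sketch of that final step, using invented logits rather than real model output:

```python
import math

# The four labels from the cyberbullying task above
LABELS = ["ethnicity/race", "gender/sexual", "religion", "not cyberbullying"]

def predict(logits):
    """Map raw 4-way classifier logits to (label, softmax confidence)."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[idx], probs[idx]

label, conf = predict([0.2, 3.1, -0.5, 0.4])  # fabricated logits -> "gender/sexual"
```

The same argmax-over-logits step applies whether the logits come from a fine-tuned DistilBERT or any other classifier with a softmax output layer.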
This paper describes Humor Analysis using Ensembles of Simple Transformers, the winning submission at the Humor Analysis based on Human Annotation (HAHA) task at IberLEF 2021.
Deep learning for Natural Language Processing
Advanced NLP with Contextual Question Answering: This notebook extracts, cleans, and processes text data from multiple files. It utilizes transformer models for contextual question answering and sentence generation. Perfect for exploring cutting-edge NLP techniques and comparing transformer model performances.
This project is designed to streamline the recruitment process by providing a job and resume matching system and a chatbot for applicants. The key functionalities include: Job and Resume Matching and LLM powered chatbot
An LLM transformer classifier built with the DistilBERT model.