Classification of Offensive tweets, part of OffensEval 2019 Competition.


strategist922/Offensive-language-detection

 
 

Offensive_language_detection

This project was completed as part of the NLP module at Imperial College. The goal was to propose an original approach to this problem. We proposed a deep-learning approach that uses transfer learning to address the data-scarcity problem: both unsupervised transfer learning (pre-trained embeddings) and sequential transfer learning (tasks learned sequentially). A paper has been released in which we discuss the details of our implementation.
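As a rough illustration of the unsupervised transfer-learning idea described above, the sketch below freezes a "pre-trained" embedding table (random vectors stand in for real pre-trained embeddings such as GloVe) and trains only a small logistic classifier on mean-pooled tweet features. The vocabulary, data, and hyper-parameters are invented for illustration and do not come from the project itself.

```python
import numpy as np

# Frozen "pre-trained" embeddings: random numbers stand in for vectors
# that would normally be loaded from a pre-trained model (an assumption
# for this sketch; the project's actual setup may differ).
rng = np.random.default_rng(0)
vocab = {"you": 0, "are": 1, "great": 2, "awful": 3, "idiot": 4, "friend": 5}
emb = rng.normal(size=(len(vocab), 8))

def featurize(tokens):
    """Mean-pool the frozen embeddings of a tweet's in-vocabulary tokens."""
    ids = [vocab[t] for t in tokens if t in vocab]
    return emb[ids].mean(axis=0)

# Tiny labelled set: 1 = offensive (OFF), 0 = not offensive (NOT).
tweets = [(["you", "are", "great"], 0), (["you", "are", "awful"], 1),
          (["idiot"], 1), (["friend"], 0)]
X = np.stack([featurize(toks) for toks, _ in tweets])
y = np.array([lab for _, lab in tweets])

# Only the classifier head is trained; the embeddings stay frozen.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid probabilities
    grad = p - y                              # dLoss/dlogit per sample
    w -= 0.5 * X.T @ grad / len(y)
    b -= 0.5 * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

The design point is that labelled offensive-language data is scarce, so the word representations are learned elsewhere (unsupervised) and only a small number of task-specific parameters are fit on the labelled tweets.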

Creators: Batuhan Güler - Alexis Laignelet - Nicolo Frisiani

Description of the project

The project is based on the OffensEval 2019 competition, hosted on the Codalab platform. The full description of the competition is accessible here.

Offensive language is pervasive in social media. Individuals frequently take advantage of the perceived anonymity of computer-mediated communication, using this to engage in behavior that many of them would not consider in real life. Online communities, social media platforms, and technology companies have been investing heavily in ways to cope with offensive language to prevent abusive behavior in social media.

In OffensEval we break down offensive content into three sub-tasks taking the type and target of offenses into account.

Sub-tasks

  • Sub-task A - Offensive language identification
  • Sub-task B - Automatic categorization of offense types
  • Sub-task C - Offense target identification
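The three sub-tasks form a label hierarchy. The label names below follow the OLID annotation scheme (NOT/OFF, TIN/UNT, IND/GRP/OTH); the routing helper is only a sketch of how the levels nest, not code from the project.

```python
# OLID/OffensEval label hierarchy behind the three sub-tasks above.
SUBTASKS = {
    "A": ["NOT", "OFF"],         # offensive language identification
    "B": ["TIN", "UNT"],         # targeted insult vs. untargeted
    "C": ["IND", "GRP", "OTH"],  # target: individual, group, or other
}

def next_subtask(subtask, label):
    """Return the next level to annotate, or None if annotation stops."""
    if subtask == "A" and label == "OFF":
        return "B"               # only offensive tweets get a type
    if subtask == "B" and label == "TIN":
        return "C"               # only targeted insults get a target
    return None
```

For example, a tweet labelled OFF at Sub-task A is passed on to Sub-task B, while a NOT tweet needs no further annotation.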

Paper

A paper containing the details of our submission to OffensEval 2019 (SemEval 2019, Task 6) has been released. The competition was based on the Offensive Language Identification Dataset. We first discuss the classifier implemented, the type of input data used, and the pre-processing performed. We then critically evaluate our performance: we achieved macro-average F1-scores of 0.76, 0.68, and 0.54 on Sub-tasks A, B, and C respectively, which we believe reflects the sophistication of the models implemented. Finally, we discuss the difficulties encountered and possible future improvements.
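For reference, the macro-averaged F1 quoted above computes an F1 score per class and averages them without class-frequency weighting, so minority classes count equally. A minimal sketch (the toy labels are invented, not taken from the dataset):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Toy Sub-task A style labels:
y_true = ["NOT", "NOT", "OFF", "OFF", "OFF"]
y_pred = ["NOT", "OFF", "OFF", "OFF", "NOT"]
score = macro_f1(y_true, y_pred)
```

This matters for the imbalanced OffensEval data: a classifier that always predicts the majority class scores poorly on macro-F1 even though its accuracy can look high.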
