
Humanly

Humanly is a linear SVM multi-label classifier that detects whether text is toxic, severely toxic, obscene, threatening, insulting, and/or identity-hating. Created during the inaugural AI4ALL Research Fellowship from February to April 2018. Mentored by Priya Vijayarajendran, CTO of Applied AI at IBM.

My experiments and results can be found at https://medium.com/ai4allorg/making-the-internet-a-safer-place-with-ai-f97cf46b3f16.
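For readers who want a feel for the approach, here is a minimal sketch of a multi-label linear SVM text classifier in scikit-learn. It is illustrative only, not the code in runner.py; the toy texts and the two-label setup are invented for the example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
import numpy as np

# Toy comments with multi-hot labels: [toxic, insulting]
texts = ["you are awful", "have a nice day", "awful and stupid", "thanks a lot"]
labels = np.array([[1, 0], [0, 0], [1, 1], [0, 0]])

# TF-IDF features feeding one LinearSVC per label (one-vs-rest),
# which is how scikit-learn handles multi-label classification
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
model.fit(texts, labels)
print(model.predict(["what a stupid comment"]))  # indicator array, e.g. [[1 1]]

Each label gets its own binary linear SVM, so a single piece of text can trigger any combination of the six labels.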

Requirements

Requires Python 2.7+ or 3.4+ and the following packages: NLTK, scikit-learn, NumPy, and SciPy. To install them, run:

pip install numpy
pip install scipy
pip install scikit-learn
pip install nltk
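Depending on how runner.py preprocesses text (for example, tokenization or stopword removal; this is an assumption, so check the script's imports), you may also need to download NLTK's data files once:

python -c "import nltk; nltk.download('punkt'); nltk.download('stopwords')"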

Usage

Download the training data from https://drive.google.com/file/d/19BmCuhMdYAPafpWP6Ei0ZvlxDCXEUkat/view?usp=sharing and save it as train.csv in the repository root. Then run:

python runner.py
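If you want to inspect the data yourself, a rough sketch of loading it with the standard library is below; the column names (comment_text and the six label columns) are assumptions inferred from the labels Humanly predicts and should be verified against the actual CSV header:

import csv

# Assumed header names; verify against train.csv before running
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

with open("train.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

texts = [row["comment_text"] for row in rows]             # assumed column name
labels = [[int(row[l]) for l in LABELS] for row in rows]  # one 0/1 flag per label
print(len(texts), "training examples")

These texts and labels could then be fed to a pipeline like the sketch shown earlier.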
