Kaggle competition: Detect toxicity across a diverse range of conversations

ysab/jigsaw_toxicity_classification


jigsaw_toxicity_classification

In this competition, you're challenged to build a model that recognizes toxicity while minimizing unintended bias with respect to mentions of identities. You'll be using a dataset labeled for identity mentions and optimizing a metric designed to measure unintended bias. By developing strategies to reduce unintended bias in machine learning models, you'll help the Conversation AI team, and the entire industry, build models that work well for a wide range of conversations.
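As a rough illustration of the task, the sketch below trains a minimal toxicity classifier with TF-IDF features and logistic regression. The comments and labels are toy stand-ins invented here for illustration, not taken from the competition data (the real training file provides comment text together with a fractional toxicity target and identity-mention labels); a competitive solution would also evaluate the bias-aware metric across identity subgroups.

```python
# Minimal baseline sketch: TF-IDF features + logistic regression.
# The comments and labels below are hypothetical toy examples, not
# drawn from the competition dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

comments = [
    "you are a wonderful person",
    "thanks for the helpful answer",
    "what a great discussion",
    "you are an idiot",
    "shut up, nobody cares",
    "this is a stupid idea",
]
labels = [0, 0, 0, 1, 1, 1]  # 1 = toxic, 0 = non-toxic

# Turn each comment into a sparse bag-of-ngrams vector, then fit a
# simple linear classifier on those vectors.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(comments)
clf = LogisticRegression().fit(X, labels)

# Score a new comment: predicted probability that it is toxic.
proba = clf.predict_proba(vectorizer.transform(["you are an idiot"]))[0, 1]
```

A linear model like this is only a starting point; the competition leaderboard was dominated by fine-tuned transformer models, but the overall pipeline (vectorize text, fit a classifier, emit a toxicity probability per comment) is the same shape.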
