
BlissfulBrowsing


Inspiration

The online world is chaotic, a turbulent marketplace of the world's real-time thoughts. The Web permits an unprecedented level of raw, unfiltered communication, much of it anonymous. Humanity lets loose, and with the good comes the bad. The Internet simply accepts this toxicity as the cost of access. We at BlissfulBrowsing believe that online toxicity is a damaging and underreported problem that takes a real toll on mental health, and that machine learning techniques like sentiment analysis can help.

What it does

BlissfulBrowsing is a Google Chrome browser extension that removes toxic and damaging statements, comments, or posts from web pages. Toxicity here ranges from negative statements and threats to profanity and other harmful language with the potential to hurt your mental health. The extension parses the entire webpage, splits its text into phrases, and detects toxic language through sentiment analysis with a pretrained TensorFlow.js toxicity model. Toxic material is then removed, leaving behind a mental-health-friendly page. By stripping out online toxicity, BlissfulBrowsing aims to keep your browsing experience positive and to protect your mental health from the negativity that is unfortunately all too common online.
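To make the page-scanning step concrete, here is a minimal sketch of how a content script might collect candidate phrases from the current page. The `collectPhrases` helper and its naive sentence-splitting regex are illustrative assumptions, not the extension's exact code.

```javascript
// Hypothetical helper: walk the page's text nodes and split them into
// rough sentence-like phrases that can later be scored for toxicity.
function collectPhrases(root = document.body) {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  const entries = [];
  let node;
  while ((node = walker.nextNode())) {
    node.textContent
      .split(/(?<=[.!?])\s+/)          // naive sentence split (assumed heuristic)
      .map((phrase) => phrase.trim())
      .filter((phrase) => phrase.length > 0)
      .forEach((phrase) => entries.push({ node, phrase }));
  }
  return entries; // each entry pairs a phrase with the DOM node it came from
}
```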

How we built it

The Chrome extension was written in JavaScript. The team decided to leverage a pretrained TensorFlow.js model that scores the relative toxicity of a phrase as a confidence value. The extension parses the current HTML page, splits its text into separate phrases, and sends each phrase through the TensorFlow.js model, which returns a toxicity score. If the toxicity confidence surpasses a predetermined threshold, the phrase is classified as toxic and the extension filters it out.
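Below is a rough sketch of how these pieces might fit together using the pretrained `@tensorflow-models/toxicity` model, building on the `collectPhrases` helper sketched above. The 0.9 threshold, the phrase-removal strategy, and the assumption that the scripts are bundled for use as a content script are illustrative choices, not the project's exact implementation.

```javascript
import '@tensorflow/tfjs';
import * as toxicity from '@tensorflow-models/toxicity';

const THRESHOLD = 0.9; // assumed confidence above which a phrase counts as toxic

async function filterPage() {
  // Load the pretrained toxicity classifier with the chosen threshold.
  const model = await toxicity.load(THRESHOLD);

  const entries = collectPhrases();          // phrase/node pairs (see sketch above)
  const phrases = entries.map((e) => e.phrase);

  // classify() returns one object per toxicity label, each with a result per phrase.
  const predictions = await model.classify(phrases);

  const toxicIndices = new Set();
  for (const { results } of predictions) {
    results.forEach((result, i) => {
      if (result.match) toxicIndices.add(i); // match is true when the threshold is exceeded
    });
  }

  // Strip each toxic phrase out of the text node it came from.
  for (const i of toxicIndices) {
    const { node, phrase } = entries[i];
    node.textContent = node.textContent.replace(phrase, '');
  }
}

filterPage();
```

`toxicity.load` also accepts an optional list of labels (e.g. only "insult" or "threat") if you want to restrict which categories are checked, and `classify` works on a single string as well as an array.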

Accomplishments we're proud of

Although every team member had prior coding experience in languages such as Python, C, and JavaScript, this was the first time any of us had built a Google Chrome browser extension. We chose to leverage a pretrained model rather than train a language sentiment model from scratch. Integrating the TensorFlow model into the extension was a challenge in its own right, but the team eventually brought everything together into a fully functioning BlissfulBrowsing extension. We are extremely proud of our project, which is intended to positively impact the mental health of online users and help heal a toxic Internet.

Future steps

Although the existing model is accurate at filtering out toxic messages, passing every phrase on a web page through it takes a significant amount of time, so pages load with a slight delay before toxic messages are removed. Our first step after the hackathon would be to address this issue and speed up the extension. Beyond that, we would add support for filtering toxicity in languages other than English by training our own models and integrating them into the extension.
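One possible way to reduce the perceived delay, sketched here as an assumption rather than a committed design, is to classify phrases in small chunks and yield to the browser between chunks so the page stays responsive while filtering continues in the background.

```javascript
// Hypothetical optimization: score phrases in chunks instead of all at once.
async function classifyInChunks(model, phrases, chunkSize = 32) {
  const toxicIndices = new Set();
  for (let start = 0; start < phrases.length; start += chunkSize) {
    const chunk = phrases.slice(start, start + chunkSize);
    const predictions = await model.classify(chunk);
    for (const { results } of predictions) {
      results.forEach((result, i) => {
        if (result.match) toxicIndices.add(start + i);
      });
    }
    // Yield to the event loop so the page can keep rendering between chunks.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return toxicIndices;
}
```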
