Project Brief

shawnacscott edited this page Dec 11, 2013 · 2 revisions

Summary:

Online harassment, bullying, and threats grow more common every day, and very few laws, tools, or policies exist to curb the growing problem these hate messages present. The detrimental impact that engaging with hateful and threatening messages has on recipients is well documented, especially in aggregate. The goal of this project is to give those who experience online harassment and threats a way to catalog and report threats without having to personally engage with threatening speech. This app will accomplish that by allowing a pool of allies to moderate and catalog these messages on behalf of the user.

User pool:

The first iteration of this project will target public speakers, social justice activists, and other highly visible people who receive a high volume of online harassment through email. This group will likely consist largely of young adults in the United States with an established online presence who primarily access the internet through laptops and mobile phones.

Aesthetic overview:

The site should evoke a calming, soothing feeling in its layout and design. It should be extremely clear and simple for users to achieve their goals. While the eventual goal is to have this app be highly configurable, it should maintain a minimalistic aesthetic without hiding things from the user. Transparency and clear communication are key. The site will initially be promoted through social media and direct outreach to people experiencing online harassment. The project will be called "Message Mod," and the corresponding domain name has been acquired.

Technical overview:

This project presents several technical challenges. Because the app exists to mitigate interaction with online harassment, it will likely be the target of malicious attacks, so it is imperative that the site use encryption, authentication, and the best available techniques for maintaining privacy and mitigating security risks. The site will need to both send and receive email and to deliver alert messages through SMS and, eventually, other channels. Eventually it should filter email before it reaches a user, redirecting it to the app for moderation. It will also need to allow users to authorize allies as their pool of moderators and to assign roles to each of those users, which makes Devise, CanCan, and Rolify essential tools.
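Although the production app will lean on Devise, CanCan, and Rolify, the underlying role relationship — a user authorizing allies as moderators of their own message pool — can be sketched in plain Ruby. The class and method names below are illustrative placeholders, not the eventual Rails models:

```ruby
# Sketch of the user/moderator role model, independent of Rails.
# User, authorize!, and moderator_for? are assumed names for illustration.
class User
  attr_reader :email, :roles

  def initialize(email)
    @email = email
    @roles = []
  end

  # A user grants a role (e.g. :moderator) to an ally for their own pool.
  def authorize!(ally, role)
    ally.roles << { role: role, pool_owner: self }
  end

  # True only if this user moderates the given owner's pool.
  def moderator_for?(owner)
    roles.any? { |r| r[:role] == :moderator && r[:pool_owner] == owner }
  end
end

target = User.new("activist@example.com")
ally   = User.new("ally@example.com")
target.authorize!(ally, :moderator)
ally.moderator_for?(target) # => true
```

Scoping each role to a pool owner, rather than granting a global "moderator" role, mirrors what Rolify's resource-scoped roles provide and keeps one user's moderators from seeing another user's messages.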

Testing:

Each controller will have an integration test, and the goal for overall test coverage is 50% or higher.

Project management:

Our team consists of Shawna C. Scott and Timothy Winters. We will each take the lead on different features, using pair programming and peer review to flesh out those features.

Roadmap:

Our minimum viable product will allow users to sign up, forward email to the app for moderation, authorize moderators, and access a list of all moderated messages along with the identifying information associated with each. It will allow moderators to sign up, rate the threat level of emails (which will populate a log accessible to the user), and alert users through SMS of threats that are immediately actionable (such as a bomb threat that includes the user's home address).
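The moderation flow described above — a moderator rates a message, the rating populates the user's log, and only immediately actionable threats trigger an out-of-band SMS alert — might look like this in outline. The threat levels, threshold, and names are placeholder assumptions, not a finalized design:

```ruby
# Sketch of moderator rating plus conditional alerting.
# THREAT_LEVELS and the :actionable threshold are illustrative choices.
THREAT_LEVELS = { none: 0, low: 1, high: 2, actionable: 3 }.freeze

Message = Struct.new(:body, :sender, :threat_level)

# A moderator assigns a threat level to a message.
def rate(message, level)
  message.threat_level = THREAT_LEVELS.fetch(level)
  message
end

# Only immediately actionable threats (e.g. one including the user's
# home address) warrant an SMS alert; everything else just joins the log.
def alert_needed?(message)
  message.threat_level >= THREAT_LEVELS[:actionable]
end

log = []
msg = rate(Message.new("bomb threat with home address", "x@example.com", nil),
           :actionable)
log << msg
alert_needed?(msg) # => true
```

Keeping the alert decision as a pure function of the rating makes it easy to test and to extend later with additional alert channels beyond SMS.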

Later iterations will automatically filter out incoming harassing emails and send them directly to the moderation pool, allow users to access analytics of their message volume over time, and integrate social media streams for moderation as well.
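A first cut at that automatic filtering could be as simple as routing mail that matches flagged patterns into the moderation queue instead of the user's inbox. The pattern list and routing targets below are illustrative assumptions; a real filter would likely grow beyond keyword matching:

```ruby
# Naive keyword filter that routes suspect mail to the moderation pool.
# FLAGGED_TERMS is a placeholder; production filtering would need a far
# richer approach than a fixed keyword list.
FLAGGED_TERMS = /\b(kill|die|bomb)\b/i

# Decide where an incoming email body should go.
def route(email_body)
  email_body.match?(FLAGGED_TERMS) ? :moderation_queue : :inbox
end

route("You should die")  # => :moderation_queue
route("Lunch tomorrow?") # => :inbox
```

The key property is that flagged mail never reaches the user directly — it goes to the moderator pool first, which is the whole point of the app.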

Schedule:

  • 12/04 - Nail down technical requirements for MVP
  • 12/18 - Have a working alpha of MVP; views and content should be generally sketched in
  • 01/01 - Have a working beta of MVP with decent views and content, as well as 2-3 additional features in alpha (Priority: automatically filter incoming messages, allow user to choose to report threats, create weekly logs, integrate Twitter mention stream)
  • 01/08 - Complete project to the point that it is demoable, with all features up to beta level and finalized views and content
  • 01/15 - Final demo day presentation