The classifier used in the http://foodborne.smartchicagoapps.org/ application.
The prediction is done via a web service running on an EC2 instance running Ubuntu, R, Apache, and the FastRWeb package to pull it all together.
How it's Served
The FastRWeb package takes a .R file (fp_classifier.R in my case) with a "run" function and makes it available via the web. I use this to submit text and return a classification... http://184.108.40.206/cgi-bin/R/fp_classifier?text=I%20ate%20some%20bad%20food%20at%20lunch%20and%20think%20I%20have%20food%20poisoning
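A minimal sketch of what such a fp_classifier.R script could look like. The model path and file name here are assumptions, not the actual paths from this project; FastRWeb sources the script and calls its run function with the decoded query parameters, and out() and done() are provided by FastRWeb itself.

```r
# Sketch of a FastRWeb script (hypothetical paths). FastRWeb maps
# /cgi-bin/R/fp_classifier?text=... to run(text = ...).

# Load the pre-trained textcat profile db once, when the script is sourced.
model <- readRDS("/var/FastRWeb/data/fp_model.rds")

run <- function(text = "") {
  label <- textcat::textcat(text, p = model)  # n-gram classification
  out(label)   # write the label into the HTTP response body
  done()       # return the finished WebResult
}
```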
I've tried several packages. RTextTools is a great resource, but textcat, a very simple n-gram based package, is sufficient for this project. The server reads a pre-trained model file on startup and calculates a predicted classification from the text the user submits.
First, I download the data from the MongoDB instance where it's stored. Then I manually classify the data to build a training set, train the model, and save the trained model to a file for use on the server.
Get the Data
First, I download the tweets from MongoDB using the RMongo package, and save them to a CSV file for the manual classification step.
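The download step might look something like this. The database and collection names are placeholders, and a local mongod is assumed.

```r
library(RMongo)

# Connect to the local MongoDB instance ("foodborne"/"tweets" are assumed names).
mongo  <- mongoDbConnect("foodborne", host = "localhost", port = 27017)
tweets <- dbGetQuery(mongo, "tweets", "{}")   # empty query: fetch everything
dbDisconnect(mongo)

# Save the tweets for the manual classification step.
write.csv(tweets, "tweets_to_classify.csv", row.names = FALSE)
```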
I couldn't concentrate well staring at a wall of text, so I found a nugget of code online that lets me run a script from the command line and enter a single character without having to hit Enter. Basically, I can keep my fingers on 1 and 0 and rapid-fire through the manual part of this.
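A sketch of that labeling loop, built on the common stty trick (Unix only; the function name and prompt text are my own, not from the original script):

```r
# Label each tweet with a single keystroke: 1 = food poisoning, 0 = junk.
label_tweets <- function(tweets) {
  labels <- integer(length(tweets))
  system("stty cbreak -echo")           # raw mode: one char, no Enter needed
  con <- file("stdin")
  on.exit({ close(con); system("stty sane") })  # always restore the terminal
  for (i in seq_along(tweets)) {
    cat("\n", tweets[i], "\n1 = food poisoning, 0 = junk: ", sep = "")
    labels[i] <- as.integer(readChar(con, 1))   # read exactly one character
  }
  labels
}
```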
The first step is to pre-process the data before training the model. In this case, I strip out special characters and punctuation, and I replace all links with the token "urlextracted". The idea is that nearly all tweets with links in them are "junk", that is, news articles about food poisoning rather than specific instances of an illness. By treating all of those links as the same token, the model should learn to associate "urlextracted" with junk fairly well. Then I train the model and save it to a file.
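The pre-processing and training steps can be sketched as follows. The regexes, column names, and file names are assumptions for illustration; the training call uses textcat's textcat_profile_db, which builds n-gram profiles per label.

```r
# Pre-processing: replace every link with "urlextracted", then drop
# special characters and punctuation.
clean_tweet <- function(x) {
  x <- gsub("http[s]?://\\S+", "urlextracted", x)  # treat all links alike
  x <- gsub("[^a-zA-Z0-9 ]", " ", x)               # strip punctuation/specials
  tolower(x)
}

# Train and save n-gram profiles (guarded so the sketch runs without textcat).
if (requireNamespace("textcat", quietly = TRUE)) {
  labeled <- data.frame(  # tiny stand-in for the manually labeled CSV
    text  = c("i think i have food poisoning from lunch",
              "food poisoning outbreak story http://news.example/123"),
    label = c("fp", "junk"),
    stringsAsFactors = FALSE)
  # One concatenated document per class label, then one profile per document.
  docs  <- tapply(clean_tweet(labeled$text), labeled$label, paste, collapse = " ")
  model <- textcat::textcat_profile_db(docs)
  saveRDS(model, "fp_model.rds")   # the server loads this file at startup
}
```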
Copyright (c) 2013 Cory Nissen. Released under the MIT License.