Samples for users of the Yelp Academic Dataset


Yelp's Academic Dataset Examples

We're providing several examples for use with the Yelp Academic Dataset. They all depend on mrjob and Python 2.6 or later.

To install all dependencies: $ pip install -e .

To test: $ tox


json_to_csv_converter: Convert the dataset from json format to csv format.

$ python json_to_csv_converter.py yelp_academic_dataset.json # Creates yelp_academic_dataset.csv
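The conversion itself is straightforward: the dataset stores one JSON object per line, so the converter just needs to gather column names and emit rows. Here is a simplified sketch (the real script also flattens nested fields and is more careful about memory); the sample records are made up:

```python
import csv
import io
import json

def json_lines_to_csv(json_lines, out):
    """Convert an iterable of JSON strings (one object per line) to CSV."""
    records = [json.loads(line) for line in json_lines]
    # Union of all keys seen, so rows with missing fields still line up.
    columns = sorted({key for record in records for key in record})
    writer = csv.DictWriter(out, fieldnames=columns, restval="")
    writer.writeheader()
    writer.writerows(records)

# Two fake review-like records (not real dataset rows):
lines = [
    '{"business_id": "abc", "stars": 4}',
    '{"business_id": "xyz", "stars": 5, "text": "Great!"}',
]
buffer = io.StringIO()
json_lines_to_csv(lines, buffer)
print(buffer.getvalue())
```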

category_predictor: Given some text, predict likely categories. For example:

$ python category_predictor/category_predictor.py yelp_academic_dataset.json > category_predictor.json
$ python category_predictor/predict.py category_predictor.json "bacon donut"
Category: "Food" - 82.66% chance
Category: "Restaurants" - 16.99% chance
Category: "Donuts" - 0.12% chance
Category: "Basque" - 0.02% chance
Category: "Spanish" - 0.02% chance
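Under the hood this is word-count based classification. The sketch below illustrates the idea with a naive Bayes scorer over a toy corpus; it is not the actual mrjob implementation, and the reviews and numbers are invented:

```python
from collections import Counter, defaultdict

# Toy training data: (category, review text).
reviews = [
    ("Food", "bacon donut with maple glaze"),
    ("Food", "coffee and a donut"),
    ("Restaurants", "great service and tasty bacon burger"),
    ("Donuts", "best donut shop in town"),
]

category_counts = Counter()
word_counts = defaultdict(Counter)
for category, text in reviews:
    category_counts[category] += 1
    for word in text.lower().split():
        word_counts[category][word] += 1

def predict(text, smoothing=0.1):
    """Score each category with naive Bayes (additive smoothing)."""
    scores = {}
    total = sum(category_counts.values())
    for category in category_counts:
        counts = word_counts[category]
        vocab = sum(counts.values())
        score = category_counts[category] / total  # prior
        for word in text.lower().split():
            score *= (counts[word] + smoothing) / (vocab + smoothing)
        scores[category] = score
    norm = sum(scores.values())
    return {c: s / norm for c, s in scores.items()}

probs = predict("bacon donut")
for category, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print('Category: "%s" - %.2f%% chance' % (category, p * 100))
```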

review_autopilot: Use a markov chain to finish a review. For example:

$ python review_autopilot/generate.py Food 'They have the best'
They have the best coffee is good food was delicious cookies and
a few friends i think they make this
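A Markov chain generator like this records, for each word, which words followed it in the training text, then walks that table to extend a prefix. A minimal bigram sketch over a made-up corpus (the real autopilot trains per-category on the full dataset with mrjob):

```python
import random
from collections import defaultdict

# Toy corpus; the real chain is built from millions of review words.
corpus = ("they have the best coffee in town and the best donuts "
          "the coffee is good and the donuts are delicious")

chain = defaultdict(list)
words = corpus.split()
for current, following in zip(words, words[1:]):
    chain[current].append(following)

def finish(prefix, length=8, seed=0):
    """Extend `prefix` by repeatedly sampling an observed successor."""
    rng = random.Random(seed)
    out = prefix.split()
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:  # dead end: word never seen mid-corpus
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(finish("they have the best"))
```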

positive_category_words: See the Yelp engineering blog for details about this example. In short, it generates positivity scores for words either globally or per-category.
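One simple way to frame a positivity score (a simplification of the blog post's approach, with invented sample data): compare the average star rating of reviews containing a word against the global average, so words that show up in unusually good reviews score high.

```python
from collections import defaultdict

# Toy reviews as (stars, text); the real job streams the full dataset.
reviews = [
    (5, "amazing bacon donut"),
    (4, "great coffee friendly staff"),
    (2, "slow service cold coffee"),
    (1, "terrible service"),
]

totals = defaultdict(lambda: [0, 0])  # word -> [sum of stars, review count]
for stars, text in reviews:
    for word in set(text.split()):
        totals[word][0] += stars
        totals[word][1] += 1

global_avg = sum(stars for stars, _ in reviews) / len(reviews)

def positivity(word):
    """Average stars of reviews containing `word`, minus the global average."""
    star_sum, count = totals[word]
    return star_sum / count - global_avg

print("coffee:", positivity("coffee"))    # in a 4-star and a 2-star review
print("service:", positivity("service"))  # in a 2-star and a 1-star review
```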

Basic set-up

You can use any of mrjob's runners with these examples, but we'll focus on the local and EMR runners (if you have access to your own Hadoop cluster, check out the mrjob docs for instructions on setting that up).

Local mode couldn't be easier:

# this step will take a VERY long time
python review_autopilot/autopilot.py yelp_academic_dataset.json > autopilot.json

# this should be instant
python review_autopilot/generate.py Food 'They have the best'
> hot dogs ever

Waiting a long time is kind of lame, no? Let's try the same thing using EMR.

First off, you'll need an AWS access key ID and a secret access key. You can get these from the AWS console (you'll need to sign up for an AWS developer account and enable S3 / EMR usage, if you haven't already).

Create a simple mrjob.conf file, like this:

    runners:
      emr:
        aws_access_key_id: YOUR_ACCESS_KEY
        aws_secret_access_key: YOUR_SECRET_KEY

Now that that's done, you can run the autopilot script on EMR.

# WARNING: this will cost you roughly $2 and take 10-20 minutes
python review_autopilot/autopilot.py --num-ec2-instances 10 --ec2-instance-type c1.medium -v --runner emr yelp_academic_dataset.json

You can save money (and time) by reusing job flows and uploading the dataset to a personal, private S3 bucket; check out the mrjob docs for instructions on doing this.