Analyzing Twitter Data Using CDH
Install Cloudera Manager 4.8 and CDH4
Before you get started with the actual application, you'll first need CDH4 installed. Specifically, you'll need Hadoop, Flume, Oozie, and Hive. The easiest way to get the core components is to use Cloudera Manager to set up your initial environment. You can download Cloudera Manager from the Cloudera website, or install CDH manually.
If you go the Cloudera Manager route, you'll still need to install Flume manually.
MySQL is the recommended database for the Oozie database and the Hive metastore. See the MySQL installation documentation for details.
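As a sketch of what that setup involves, the two databases can be created from the mysql shell roughly as follows (the database names, user names, and passwords here are illustrative; follow the linked documentation for the authoritative steps):

```
mysql> CREATE DATABASE metastore;
mysql> CREATE DATABASE oozie;
mysql> GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost' IDENTIFIED BY 'hive_password';
mysql> GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'localhost' IDENTIFIED BY 'oozie_password';
```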
Configuring Flume (Cloudera Manager path)
Build or Download the custom Flume Source
A pre-built version of the custom Flume Source is available here.
The flume-sources directory contains a Maven project with a custom Flume source designed to connect to the Twitter Streaming API and ingest tweets in raw JSON format into HDFS.
To build the flume-sources JAR, from the root of the git repository:
$ cd flume-sources
$ mvn package
$ cd ..
This will generate a file called flume-sources-1.0-SNAPSHOT.jar in the target directory.
Add the JAR to the Flume classpath
Copy flume-sources-1.0-SNAPSHOT.jar to /usr/lib/flume-ng/plugins.d/twitter-streaming/lib/ and, to be safe, also to /var/lib/flume-ng/plugins.d/twitter-streaming/lib/ (the authoritative locations are listed under Plugin Directories in Cloudera Manager -> Flume -> Configuration -> Agent (Default)). If those directories don't exist, create them.
Configure the Flume agent in the Cloudera Manager Web UI
Go to the Flume Service page (by selecting Flume service from the Services menu or from the All Services page).
Click the Configuration tab, and select View and Edit.
Select the Agent (Default) in the left hand column.
Set the Agent Name property to TwitterAgent, whose configuration is defined in flume.conf.
Copy the contents of the flume.conf file, in its entirety, into the Configuration File field. If you wish to edit the keywords or add your Twitter API credentials, now is the right time to do it.
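For orientation, an abridged sketch of what flume.conf defines is shown below; the OAuth credentials, HDFS host, and keywords are placeholders, and the full file in the repository is authoritative:

```
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = <your consumer key>
TwitterAgent.sources.Twitter.consumerSecret = <your consumer secret>
TwitterAgent.sources.Twitter.accessToken = <your access token>
TwitterAgent.sources.Twitter.accessTokenSecret = <your access token secret>
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics

TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://<namenode-host>:8020/user/flume/tweets/%Y/%m/%d/%H/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text

TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
```

The %Y/%m/%d/%H escapes in the sink path are what produce the hourly directory layout that the Hive partitions map onto later in this tutorial.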
Setting up Hive
Build or Download the JSON SerDe
A pre-built version of the JSON SerDe is available here.
The hive-serdes directory contains a Maven project with a JSON SerDe that enables Hive to query raw JSON data.
To build the hive-serdes JAR, from the root of the git repository:
$ cd hive-serdes
$ mvn package
$ cd ..
This will generate a file called hive-serdes-1.0-SNAPSHOT.jar in the target directory.
Create the Hive directory hierarchy
$ sudo -u hdfs hadoop fs -mkdir /user/hive/warehouse
$ sudo -u hdfs hadoop fs -chown -R hive:hive /user/hive
$ sudo -u hdfs hadoop fs -chmod 750 /user/hive
$ sudo -u hdfs hadoop fs -chmod 770 /user/hive/warehouse
You'll also want to add whatever user you plan on executing Hive scripts with to the hive Unix group:
$ sudo usermod -a -G hive <username>
Configure the Hive metastore
The Hive metastore should be configured to use MySQL. Follow these instructions to configure the metastore. Make sure the MySQL JDBC driver is installed where the Hive metastore service can load it.
Create the tweets table
Run hive, and execute the following commands:
ADD JAR <path-to-hive-serdes-jar>;

CREATE EXTERNAL TABLE tweets (
  id BIGINT,
  created_at STRING,
  source STRING,
  favorited BOOLEAN,
  retweeted_status STRUCT<
    text:STRING,
    user:STRUCT<screen_name:STRING,name:STRING>,
    retweet_count:INT>,
  entities STRUCT<
    urls:ARRAY<STRUCT<expanded_url:STRING>>,
    user_mentions:ARRAY<STRUCT<screen_name:STRING,name:STRING>>,
    hashtags:ARRAY<STRUCT<text:STRING>>>,
  text STRING,
  user STRUCT<
    screen_name:STRING,
    name:STRING,
    friends_count:INT,
    followers_count:INT,
    statuses_count:INT,
    verified:BOOLEAN,
    utc_offset:INT,
    time_zone:STRING>,
  in_reply_to_screen_name STRING
)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/flume/tweets';
The table can be modified to include other columns from the Twitter data, but they must have the same names and structure as the corresponding JSON fields in the Twitter documentation.
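Once tweets are flowing in, a query over the nested columns can verify the SerDe is working. For example, a query along these lines counts the most common hashtags (assuming the table definition above; the LIMIT is arbitrary):

```sql
SELECT
  LOWER(hashtags.text) AS hashtag,
  COUNT(*) AS total_count
FROM tweets
LATERAL VIEW EXPLODE(entities.hashtags) t AS hashtags
GROUP BY LOWER(hashtags.text)
ORDER BY total_count DESC
LIMIT 15;
```

The LATERAL VIEW EXPLODE construct flattens the entities.hashtags array so each hashtag struct becomes its own row, which is what makes the GROUP BY possible.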
Prepare the Oozie workflow
Configure Oozie to use MySQL
If using Cloudera Manager, Oozie can be reconfigured to use MySQL via the service configuration page on the Databases tab. Make sure to restart the Oozie service after reconfiguring. You will need to install the MySQL JDBC driver in /var/lib/oozie.
If Oozie was installed manually, Cloudera provides instructions for configuring Oozie to use MySQL.
Create a lib directory and copy any necessary external JARs into it
External JARs are provided to Oozie through a lib directory in the workflow directory. The workflow will need a copy of the MySQL JDBC driver and the hive-serdes JAR.
$ mkdir oozie-workflows/lib
$ cp hive-serdes/target/hive-serdes-1.0-SNAPSHOT.jar oozie-workflows/lib
$ cp /var/lib/oozie/mysql-connector-java.jar oozie-workflows/lib
Copy hive-site.xml to the oozie-workflows directory
To execute the Hive action, Oozie needs a copy of hive-site.xml:
$ sudo cp /etc/hive/conf/hive-site.xml oozie-workflows
$ sudo chown <username>:<username> oozie-workflows/hive-site.xml
Copy the oozie-workflows directory to HDFS
$ hadoop fs -put oozie-workflows /user/<username>/oozie-workflows
Install the Oozie ShareLib in HDFS
$ sudo -u hdfs hadoop fs -mkdir /user/oozie
$ sudo -u hdfs hadoop fs -chown oozie:oozie /user/oozie
In order to use the Hive action, the Oozie ShareLib must be installed. Installation instructions can be found here.
Starting the data pipeline
Start the Flume agent
Create the HDFS directory hierarchy for the Flume sink. Make sure that it will be accessible by the user running the Oozie workflow.
$ hadoop fs -mkdir /user/flume/tweets
$ hadoop fs -chown -R flume:flume /user/flume
$ hadoop fs -chmod -R 770 /user/flume
$ sudo /etc/init.d/flume-ng-agent start
If using Cloudera Manager, start the Flume agent from the Cloudera Manager Web UI instead.
Adjust the start time of the Oozie coordinator workflow in job.properties
You will need to modify the job.properties file and change the initialDataset and start/end time parameters. The start and end times are in UTC, because the version of Oozie packaged in CDH4 does not yet support custom time zones for workflows. The initial dataset should be set to something before the actual start time of your job in your local time zone. Additionally, the tzOffset parameter should be set to the difference between the server's time zone and UTC. By default, it is set to -8, which is correct for US Pacific Time.
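The UTC conversion can be sketched as follows. This is a minimal illustration, assuming GNU date; the -8 offset is the default described above, and the timestamp is an arbitrary example:

```shell
# Sketch: convert a local start time to the UTC form the coordinator
# expects, given a tzOffset as described above. Assumes GNU date.
tzOffset=-8                       # US Pacific, the default
localStart="2012-09-20 04:00"
# UTC = local time minus the offset (subtracting -8 adds 8 hours)
utcStart=$(date -u -d "$localStart UTC $((-tzOffset)) hours" +%Y-%m-%dT%H:%MZ)
echo "$utcStart"    # 2012-09-20T12:00Z
```

Working the arithmetic by hand before editing job.properties is a good way to avoid a coordinator that silently waits for a start time that is still hours in the future.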
Start the Oozie coordinator workflow
$ oozie job -oozie http://<oozie-host>:11000/oozie -config oozie-workflows/job.properties -run