Raw JSON pings are stored on S3 in files containing framed Heka records. Reading the raw data through e.g. Spark can be slow, since a given analysis typically uses only a few fields, and parsing the JSON blobs adds further cost. Furthermore, Heka files might contain only a handful of records under certain circumstances.
Defining a derived Parquet dataset, which uses a columnar layout optimized for analytics workloads, can drastically improve the performance of analysis jobs while reducing space requirements. A derived dataset can, and should, also perform heavy-duty operations common to all analyses that are going to read from it (e.g., parsing dates into normalized timestamps).
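As a sketch of the read side (the dataset path and column names below are purely hypothetical), an analysis job can load a derived Parquet dataset and select only the columns it needs, which is where the columnar layout pays off:

import org.apache.spark.sql.SparkSession

object ExampleAnalysis {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ExampleAnalysis").getOrCreate()

    // Only the referenced columns are read from the columnar files,
    // avoiding the cost of parsing full JSON pings.
    // The dataset path and column names are illustrative, not a real dataset.
    val pings = spark.read
      .parquet("s3://telemetry-parquet/example_dataset/v1")
      .select("submission_date", "channel", "crash_count")

    pings.groupBy("channel").sum("crash_count").show()

    spark.stop()
  }
}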
Adding a new derived dataset
See the views folder for examples of jobs that create derived datasets.
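As a rough sketch of what such a job might look like (the paths, dataset name, and field names are purely illustrative and not an actual view in this repository; the real jobs read framed Heka records through the telemetry dataset API, while this sketch simply reads JSON for brevity):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, unix_timestamp}

object ExampleView {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ExampleView").getOrCreate()

    // Read the raw pings (hypothetical input location).
    val raw = spark.read.json("s3://telemetry-raw/example_pings/")

    // Do the heavy-duty normalization once, so every downstream analysis
    // doesn't have to: e.g. parse the submission date string into a timestamp.
    val derived = raw
      .select("document_id", "submission_date", "channel")
      .withColumn("submission_timestamp",
        unix_timestamp(col("submission_date"), "yyyyMMdd").cast("timestamp"))

    // Write a columnar, partitioned dataset optimized for analytics workloads.
    derived.write
      .mode("overwrite")
      .partitionBy("submission_date")
      .parquet("s3://telemetry-parquet/example_dataset/v1")

    spark.stop()
  }
}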
Development and Deployment
The general workflow for telemetry-batch-view is:
- Make some local changes on your branch
- Test locally in Airflow, running only the jobs that your code change touches.
- Open a PR and tag someone to review it. Merge when approved; merging deploys the jar to production.
There are two possible workflows for hacking on telemetry-batch-view: you can either create a Docker container for building the package and running tests, or import the project into IntelliJ IDEA.
To run sbt tests inside Docker, run:
# This will take 30+ minutes to run.
./run-sbt.sh test
For more efficient iteration, invoke ./run-sbt.sh without arguments to open a shell, then test only the class you're working on, avoiding the sbt startup time on each iteration:
sbt> testOnly *AddonsViewTest
You may need to increase the amount of memory allocated to Docker for this to work, as some of the tests are very memory hungry at present. At least 4 gigabytes is recommended.
If you wish to import the project into IntelliJ IDEA, apply the following changes to Languages & Frameworks -> Scala Compile Server:
- JVM maximum heap size, MB:
- JVM parameters: -server -Xmx2G -Xss4M
Note that the first time the project is opened it takes some time to download all the dependencies.
Scala style checker
Scalastyle is used in CI to enforce style rules. To run it locally, use:
sbt scalastyle test:scalastyle
See the documentation for specific views for details about running/generating them.
For example, to create a longitudinal view locally:
sbt "runMain com.mozilla.telemetry.views.LongitudinalView --from 20160101 --to 20160701 --bucket telemetry-test-bucket"
For distributed execution we pack all of the classes together into a single JAR and submit it to the cluster:
sbt assembly
spark-submit --master yarn --deploy-mode client --class com.mozilla.telemetry.views.LongitudinalView target/scala-2.11/telemetry-batch-view-*.jar --from 20160101 --to 20160701 --bucket telemetry-test-bucket
If you run into memory issues during compilation or while running the test suite, issue the following command before running sbt:
export _JAVA_OPTIONS="-Xms4G -Xmx4G -Xss4M -XX:MaxMetaspaceSize=256M"
Running on Windows
Executing Scala/Spark jobs can be particularly problematic on this platform. Here's a list of common issues and their solutions:
Issue: I see a weird reflection error or an odd exception when trying to run my code.
This is probably due to winutils being missing or not found. Winutils is needed by Hadoop and can be downloaded from here.
Issue: java.net.URISyntaxException: Relative path in absolute URI: ...
This means that winutils cannot be found or that Spark cannot find a valid warehouse directory. Add the following lines at the beginning of your entry function to make it work:
System.setProperty("hadoop.home.dir", "C:\\path\\to\\winutils") System.setProperty("spark.sql.warehouse.dir", "file:///C:/somereal-dir/spark-warehouse")
Issue: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: ---------
See SPARK-10528. Run "winutils chmod 777 /tmp/hive" from a privileged prompt to make it work.
Any commit to master should also trigger a CircleCI build that does the sbt publishing for you, pushing the artifact to our local Maven repo in S3.