The bigdata micro benchmark suite
- Current version: 4.0
- Release date: 2015-4-30
- Contact: Lv Qi, Grace Huang, Jiangang Duan
- Getting Started
- Advanced Configuration
- Possible Issues
This benchmark suite contains 10 typical micro workloads. It also provides options for users to enable input/output compression for most workloads with the default compression codec (zlib). For some initial work based on this benchmark suite, please refer to the included ICDE workshop paper (i.e., WISS10_conf_full_011.pdf).
- Since HiBench-2.2, the input data of all benchmarks is automatically generated by their corresponding prepare scripts.
- Since HiBench-3.0, YARN is supported.
- Since HiBench-4.0, more workloads are implemented on both Hadoop MR and Spark. For Spark, three different APIs are supported: Scala, Java and Python.
Sort (sort)
This workload sorts its text input data, which is generated using RandomTextWriter.
WordCount (wordcount)
This workload counts the occurrence of each word in the input data, which is generated using RandomTextWriter. It is representative of another typical class of real-world MapReduce jobs: extracting a small amount of interesting data from a large data set.
TeraSort (terasort)
TeraSort is a standard benchmark created by Jim Gray. Its input data is generated by the Hadoop TeraGen example program.
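HiBench's prepare script generates this data automatically, but for reference a hedged sketch of invoking TeraGen directly might look like the following (the examples-jar path and row count are assumptions for a Hadoop 2.x layout):

    # Generate 10,000,000 100-byte rows of TeraSort input; paths and sizes are placeholders
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
        teragen 10000000 /tmp/terasort-input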
Sleep (sleep)
This workload sleeps for a number of seconds in each task to test the framework scheduler.
Scan (scan), Join (join), Aggregate (aggregation)
This workload is developed based on the SIGMOD 09 paper "A Comparison of Approaches to Large-Scale Data Analysis" and HIVE-396. It contains Hive queries (Aggregation and Join) performing the typical OLAP queries described in the paper. Its input is also automatically generated Web data with hyperlinks following the Zipfian distribution.
Web Search Benchmarks:
PageRank (pagerank)
This workload benchmarks the PageRank algorithm implemented in the Spark-MLLib/Hadoop examples (a search engine ranking benchmark included in Pegasus 2.0). The data source is generated from Web data whose hyperlinks follow the Zipfian distribution.
Nutch indexing (nutchindexing)
Large-scale search indexing is one of the most significant uses of MapReduce. This workload tests the indexing sub-system of Nutch, a popular open source (Apache project) search engine. The workload uses automatically generated Web data whose hyperlinks and words both follow the Zipfian distribution with corresponding parameters. The dictionary used to generate the Web page texts is the default Linux dictionary file /usr/share/dict/linux.words.
Bayesian Classification (bayes)
This workload benchmarks Naive Bayesian Classification implemented in the Spark-MLLib/Mahout examples.
Large-scale machine learning is another important use of MapReduce. This workload tests the Naive Bayesian trainer (a popular classification algorithm for knowledge discovery and data mining) in Mahout 0.7, an open source (Apache project) machine learning library. The workload uses automatically generated documents whose words follow the Zipfian distribution. The dictionary used for text generation is also the default Linux file /usr/share/dict/linux.words.
K-means clustering (kmeans)
This workload tests the K-means clustering (a well-known clustering algorithm for knowledge discovery and data mining) in Mahout 0.7/Spark-MLlib. The input data set is generated by GenKMeansDataset based on Uniform Distribution and Gaussian Distribution.
Enhanced DFSIO (dfsioe)
Enhanced DFSIO tests the HDFS throughput of the Hadoop cluster by generating a large number of tasks performing writes and reads simultaneously. It measures the average I/O rate of each map task, the average throughput of each map task, and the aggregated throughput of the HDFS cluster. Note: this benchmark does not have a corresponding Spark implementation.
Supported Hadoop/Spark releases:
- Apache release of Hadoop 1.x and Hadoop 2.x
- CDH4/CDH5 release of MR1 and MR2.
- Spark 1.3. Note: no CDH release supports SparkSQL; please download SparkSQL from the Apache Spark official release page if you are using it.
Set up the JDK, Hadoop-YARN and Spark runtime environments properly.
Download/checkout the HiBench benchmark suite.
Run <HiBench_Root>/bin/build-all.sh to build HiBench.
Note: Beginning with HiBench 4.0, Python 2.x (>=2.6) is required.
For minimum requirements, create and edit 99-user_defined_properties.conf:
cd conf
cp 99-user_defined_properties.conf.template 99-user_defined_properties.conf
Make sure the properties below have been set:
- hibench.hadoop.home: The Hadoop installation location
- hibench.spark.home: The Spark installation location
- hibench.hdfs.master: The HDFS master
- hibench.spark.master: The Spark master
Note: For YARN mode, set hibench.spark.master to yarn-client (yarn-cluster is not supported yet).
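For illustration, a minimal sketch of these properties, assuming placeholder paths and host names (adjust everything to your own cluster):

    # Placeholder values only; edit to match your installation
    cat >> conf/99-user_defined_properties.conf <<'EOF'
    hibench.hadoop.home     /opt/hadoop
    hibench.spark.home      /opt/spark
    hibench.hdfs.master     hdfs://namenode:8020
    hibench.spark.master    yarn-client
    EOF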
Run <HiBench_Root>/bin/run-all.sh to run all workloads with all language APIs.
View the report:
Check <HiBench_Root>/report for the final report:
report/hibench.report: Overall report about all workloads.
report/<workload>/<language APIs>/bench.log: Raw logs on client side.
report/<workload>/<language APIs>/monitor.html: System utilization monitor results.
report/<workload>/<language APIs>/conf/<workload>.conf: Generated environment variable configurations for this workload.
report/<workload>/<language APIs>/conf/sparkbench/<workload>/sparkbench.conf: Generated configuration for this workload, which is used for mapping to environment variables.
report/<workload>/<language APIs>/conf/sparkbench/<workload>/spark.conf: Generated configuration for spark.
Run <HiBench_Root>/bin/report_gen_plot.py report/hibench.report to generate report figures.
Parallelism, memory, executor number tuning:
- hibench.default.map.parallelism: Mapper numbers in MR, partition numbers in Spark
- hibench.default.shuffle.parallelism: Reducer numbers in MR, shuffle partition numbers in Spark
- hibench.yarn.executors.num: Number of executors in YARN mode
- hibench.yarn.executors.cores: Number of executor cores in YARN mode
- spark.executors.memory: Executor memory, standalone or YARN mode
- spark.driver.memory: Driver memory, standalone or YARN mode
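As an illustration, a hedged sketch of setting these for a small cluster (the numbers are arbitrary examples, not recommendations):

    # Example tuning values only; size them to your own cluster
    cat >> conf/99-user_defined_properties.conf <<'EOF'
    hibench.default.map.parallelism      96
    hibench.default.shuffle.parallelism  48
    hibench.yarn.executors.num           8
    hibench.yarn.executors.cores         4
    spark.executors.memory               4g
    spark.driver.memory                  2g
    EOF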
spark.* properties will be passed to the Spark runtime configuration.
Compression options:
- hibench.compress.profile: Compression option, `enable` or `disable`
- hibench.compress.codec.profile: Compression codec, `snappy`, `lzo` or `default`
Data scale profile selection:
- hibench.scale.profile: Data scale profile, one of `tiny`, `small`, `large`, `huge`, `gigantic`, `bigdata`
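For example, a sketch that enables snappy compression and selects the `small` scale profile (just one possible combination of the documented values):

    # Example choices; any documented codec or scale profile can be substituted
    cat >> conf/99-user_defined_properties.conf <<'EOF'
    hibench.compress.profile        enable
    hibench.compress.codec.profile  snappy
    hibench.scale.profile           small
    EOF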
You can add more data scale profiles in conf/10-data-scale-profile.conf. Please do not change conf/00-default-properties.conf unless you are confident about what you are doing.
Configure for each workload or each language API:
All configurations will be loaded in a nested folder structure:
- conf/*.conf: Global configuration
- workloads/<workload>/conf/*.conf: Configuration for each workload
- workloads/<workload>/<language APIs>/.../*.conf: Configuration for each language API
For configuration files in the same folder, the loading sequence is sorted by file name. Values in later files override earlier ones.
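As an illustration with the global configuration files named in this document (a sketch assuming the default conf layout):

    # Files in conf/ load in lexical order, so later files override earlier ones:
    #   00-default-properties.conf       loaded first
    #   10-data-scale-profile.conf       loaded next
    #   99-user_defined_properties.conf  loaded last; its values win
    ls conf/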
The final values for all properties will be stored in a single config file located at report/<workload>/<language APIs>/conf/<workload>.conf, which contains all values and pinpoints the source of each configuration.
Configure for future Spark release
bin/build-all.sh will build HiBench for all running environments:
- MR1, Spark1.2
- MR1, Spark1.3
- MR2, Spark1.2
- MR2, Spark1.3
HiBench will probe the Hadoop & Spark release versions and choose the proper HiBench build automatically. However, for a future Spark release (for example, Spark1.4) that is API-compatible with Spark1.3, HiBench will fail due to the lack of a matching profile. You can work around this by explicitly defining the Hadoop/Spark release version in your configuration to force HiBench to use the Spark1.3 profile:
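A hedged sketch of such an override; the property name used here (hibench.spark.version) is an assumption and may differ in your HiBench version, so verify it against the shipped configuration templates:

    # Assumed property name; check conf/99-user_defined_properties.conf.template first
    echo 'hibench.spark.version    spark1.3' >> conf/99-user_defined_properties.conf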
Configuration for running workloads and language APIs:
The conf/benchmarks.lst file under the package folder defines the workloads to run when you execute the bin/run-all.sh script. Each line in the list file specifies one workload. You can put a # at the beginning of a line to skip the corresponding benchmark if necessary, as in the sketch below.
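A hypothetical benchmarks.lst that runs only two workloads (the workload names come from the descriptions earlier in this document):

    # Hypothetical list: run wordcount and terasort, skip kmeans
    cat > conf/benchmarks.lst <<'EOF'
    wordcount
    terasort
    #kmeans
    EOF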
You can also run each workload separately. In general, the following scripts can be found under each workload folder:
- prepare/prepare.sh: Generate input data in HDFS for running the benchmark
- mapreduce/bin/run.sh: Run the MapReduce language API
- spark/java/bin/run.sh: Run the Spark/Java language API
- spark/scala/bin/run.sh: Run the Spark/Scala language API
- spark/python/bin/run.sh: Run the Spark/Python language API
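For instance, a minimal sketch of running a single workload by hand, assuming the wordcount workload and the folder layout above:

    # Run one workload without bin/run-all.sh
    cd workloads/wordcount
    prepare/prepare.sh         # generate input data in HDFS
    mapreduce/bin/run.sh       # MapReduce implementation
    spark/scala/bin/run.sh     # Spark/Scala implementation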
Running Spark/Python API with YARN:
Running with CDH/MR1:
For a tarball-deployed CDH/MR1, please recreate the symlink hadoop-*-cdh*/share/hadoop/mapreduce to point to the correct folder:
cd share/hadoop
rm mapreduce
ln -s mapreduce1 mapreduce
Running Spark/Python, MLLib related workloads:
You'll need to install numpy (version > 1.4) on the master and all slave nodes.
yum install numpy
aptitude install python-numpy
You'll need to install python-matplotlib (version > 0.9).
yum install python-matplotlib
aptitude install python-matplotlib
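As a quick sanity check on each node (a small sketch that only verifies the modules import and prints their versions):

    # Verify the Python dependencies are importable
    python -c "import numpy, matplotlib; print(numpy.__version__, matplotlib.__version__)"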