
BLAZE


The Blaze accelerator for Apache Spark leverages native vectorized execution to accelerate query processing. It combines the power of the Apache Arrow-DataFusion library and the scale of the Spark distributed computing framework.

Blaze takes a fully optimized physical plan from Spark, maps it into a DataFusion execution plan, and performs native plan computation in Spark executors.
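Conceptually, this mapping is a recursive translation over the plan tree. The sketch below uses toy types (`SparkPlan`, `NativePlan`, and `translate` are illustrative names, not Blaze's actual classes) to show the idea:

```rust
// Toy sketch of plan translation (hypothetical types, not Blaze's real API).
#[derive(Debug, PartialEq)]
enum SparkPlan {
    Scan { table: String },
    Filter { predicate: String, child: Box<SparkPlan> },
}

#[derive(Debug, PartialEq)]
enum NativePlan {
    Scan { table: String },
    Filter { predicate: String, child: Box<NativePlan> },
}

// Recursively map each Spark operator to its native counterpart.
fn translate(plan: &SparkPlan) -> NativePlan {
    match plan {
        SparkPlan::Scan { table } => NativePlan::Scan { table: table.clone() },
        SparkPlan::Filter { predicate, child } => NativePlan::Filter {
            predicate: predicate.clone(),
            child: Box::new(translate(child)),
        },
    }
}
```

In the real accelerator, the translated plan is then serialized (see Plan SerDe below) and handed to the native engine through JNI.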

Blaze is composed of the following high-level components:

  • Blaze Spark Extension: hooks the accelerator into the Spark execution lifecycle.
  • Native Operators: define how each SparkPlan maps to its native execution counterpart.
  • JNI Gateways: pass data and control across the JNI boundary.
  • Plan SerDe: serialization and deserialization of DataFusion plans with protobuf.
  • Columnarized Shuffle: shuffle data files organized in Arrow-IPC format.

Thanks to DataFusion's well-defined extensibility, Blaze can be easily extended to support:

  • Various object stores.
  • Operators.
  • Simple and Aggregate functions.
  • File formats.

We encourage you to extend DataFusion's capability directly and add the corresponding support in Blaze with simple modifications to plan-serde and extension translation.
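As a rough illustration of what "extension translation" could look like, here is a toy registry that maps operator names to translation functions. All names here are hypothetical; Blaze's real mechanism lives in plan-serde and the Spark extension:

```rust
use std::collections::HashMap;

// Toy registry sketch (hypothetical names, not Blaze's real API).
// Each entry maps a Spark operator name to a function that produces
// a description of its native counterpart.
type Translator = fn(&str) -> String;

fn register(reg: &mut HashMap<String, Translator>, name: &str, t: Translator) {
    reg.insert(name.to_string(), t);
}

fn translate_op(reg: &HashMap<String, Translator>, name: &str, desc: &str) -> Option<String> {
    reg.get(name).map(|t| t(desc))
}

// A hypothetical translator for a newly supported operator.
fn expand_translator(desc: &str) -> String {
    format!("NativeExpand({})", desc)
}
```

Adding a new operator then amounts to registering one more translator, with unsupported operators left to vanilla Spark.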

Build from source

To build Blaze, please follow the steps below:

  1. Install Rust

The underlying native execution library, DataFusion, is written in Rust, so Rust must be installed for compilation. We recommend installing it with rustup.

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

  2. Check out the source code.

git clone git@github.com:blaze-init/blaze.git
cd blaze

  3. Build the project.

You could either build Blaze in debug mode for testing purposes or in release mode to unlock the full potential of Blaze.

./gradlew -Pmode=[debug|release] build

After the build is finished, a fat Jar package that contains all the dependencies will be generated in the target directory.

Run Spark Job with Blaze Accelerator

This section describes how to submit and configure a Spark Job with Blaze support.

You can enable the Blaze accelerator through:

$SPARK_HOME/bin/spark-[sql|submit] \
  --jars "/path/to/blaze-engine-1.0-SNAPSHOT.jar" \
  --conf spark.sql.extensions=org.apache.spark.sql.blaze.BlazeSparkSessionExtension \
  --conf spark.shuffle.manager=org.apache.spark.sql.blaze.execution.ArrowShuffleManager301 \
  --conf spark.executor.extraClassPath="./blaze-engine-1.0-SNAPSHOT.jar" \
  ... # your original arguments go here

In addition, a series of configurations allows you to control Blaze with finer granularity.

| Parameter | Default value | Description |
| --- | --- | --- |
| spark.executor.memoryOverhead | executor.memory * 0.1 | The amount of non-heap memory to be allocated per executor. Blaze uses this part of memory. |
| spark.blaze.memoryFraction | 0.75 | The fraction of off-heap memory that Blaze can use during execution. |
| spark.blaze.batchSize | 16384 | Batch size for vectorized execution. |
| spark.blaze.enable.shuffle | true | If enabled, use the native, Arrow-IPC based shuffle. |
| spark.blaze.enable.[scan,project,filter,sort,union,sortmergejoin] | true | If enabled, offload the corresponding operator to the native engine. |
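As a worked example of how the memory defaults above compose (the 4096 MiB executor size is an assumption for illustration, not a Blaze default):

```rust
// Worked example of the memory defaults from the table above.
// (4096 MiB executor memory is an assumption, not a Blaze default.)
fn blaze_memory_mib(executor_memory_mib: f64) -> f64 {
    let memory_overhead = executor_memory_mib * 0.1; // spark.executor.memoryOverhead default
    memory_overhead * 0.75                           // spark.blaze.memoryFraction default
}
```

With the defaults, a 4096 MiB executor would give Blaze roughly 307 MiB of off-heap memory; raise `spark.executor.memoryOverhead` or `spark.blaze.memoryFraction` if native execution needs more.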

Performance

We periodically benchmark Blaze locally with a 1 TB TPC-DS Dataset to show our latest results and prevent unnoticed performance regressions. Check Benchmark Results with the latest date for the performance comparison with vanilla Spark.

Currently, you can expect up to a 2x performance boost and roughly a 5x reduction in resource consumption, enabled with just a few lines of configuration. Stay tuned and join us for more upcoming results.

(Figure: memory cost comparison, 2022-05-22)

We also encourage you to benchmark Blaze locally and share the results with us. 🤗

Roadmap

1. Operators

Currently, several operators cannot yet be executed natively.

2. Compressed Shuffle

We use segmented Arrow-IPC files to store shuffle data. If we applied IPC compression, shuffle would benefit further, since columnar data has a better compression ratio. Tracked in #4.
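To see why columnar layouts compress so well, consider a toy run-length encoder. (The Arrow IPC format itself specifies LZ4/ZSTD buffer compression; RLE here is only an illustration of the underlying effect.)

```rust
// Illustration: low-cardinality columnar data (long runs of repeated values)
// compresses far better than data with interleaved values.
// This is a toy run-length encoder, not what Arrow IPC actually uses.
fn rle_encode(data: &[u8]) -> Vec<(u8, u32)> {
    let mut out: Vec<(u8, u32)> = Vec::new();
    for &b in data {
        match out.last_mut() {
            // Extend the current run if the value repeats.
            Some((v, n)) if *v == b => *n += 1,
            // Otherwise start a new run.
            _ => out.push((b, 1)),
        }
    }
    out
}
```

A column of 10,000 identical values collapses to a single run, while the same bytes interleaved row-style gain nothing, which is why compressing per-column IPC buffers pays off.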

3. UDF support

We would like to have a high-performance JVM-UDF invocation framework that can reuse the great variety of existing UDFs written for Spark/Hive. They are not natively supported in Blaze at the moment.
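A minimal sketch of what a name-based UDF invocation bridge might look like on the native side (purely hypothetical; no such framework exists in Blaze today, and a real one would cross the JNI boundary per batch rather than per call):

```rust
use std::collections::HashMap;

// Toy sketch of a UDF invocation bridge (hypothetical, not Blaze's API).
// The native engine would look up a registered JVM function by name
// and invoke it on incoming values.
type Udf = fn(&[String]) -> String;

// Stand-in for a JVM-side UDF implementation.
fn upper(args: &[String]) -> String {
    args[0].to_uppercase()
}

fn invoke(registry: &HashMap<&str, Udf>, name: &str, args: &[String]) -> Option<String> {
    registry.get(name).map(|f| f(args))
}
```

In practice, the expensive part is marshalling batches of Arrow data across JNI, which is why a dedicated framework is needed rather than per-row calls.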

Community

We're using Discussions to connect with other members of our community. We hope that you:

  • Ask questions you're wondering about.
  • Share ideas.
  • Engage with other community members.
  • Welcome others and are open-minded. Remember that this is a community we build together 💪 .

License

Blaze is licensed under the Apache 2.0 License. A copy of the license can be found here.
