PoC of distributed compute platform using Rust, Apache Arrow, and Kubernetes!

Ballista is a proof-of-concept distributed compute platform based on Kubernetes and the Rust implementation of Apache Arrow.

This is not my first attempt at building something like this. I originally wanted DataFusion to be a distributed compute platform but this was overly ambitious at the time, and it ended up becoming an in-memory single-threaded query execution engine for the Rust implementation of Apache Arrow. However, DataFusion now provides a good foundation to have another attempt at building a modern distributed compute platform in Rust.

My goal is to use this repo to move fast and try out ideas that help drive requirements for Apache Arrow and DataFusion.


This demo shows a Ballista cluster being created in Minikube and the nyctaxi example being executed. The query runs distributed across the cluster: each executor pod performs an aggregate query on one partition of the data, and the driver then merges the partial results and runs a secondary aggregate query to produce the final result.
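The two-stage aggregation described above can be sketched as plain Rust. This is illustrative only (the names `executor_aggregate` and `driver_merge` are hypothetical, not Ballista's actual API): each "executor" produces a partial (sum, count) over its partition, and the "driver" merges the partials into the final result.

```rust
/// Partial aggregate produced by one executor over one partition.
#[derive(Debug, Clone, Copy)]
struct Partial {
    sum: f64,
    count: u64,
}

/// Stage 1: each executor aggregates its own partition of the data.
fn executor_aggregate(partition: &[f64]) -> Partial {
    Partial {
        sum: partition.iter().sum(),
        count: partition.len() as u64,
    }
}

/// Stage 2: the driver merges the partial results and computes the
/// final aggregate (here, a global average).
fn driver_merge(partials: &[Partial]) -> f64 {
    let sum: f64 = partials.iter().map(|p| p.sum).sum();
    let count: u64 = partials.iter().map(|p| p.count).sum();
    sum / count as f64
}

fn main() {
    // Three partitions standing in for three executor pods.
    let partitions: Vec<Vec<f64>> = vec![
        vec![1.0, 2.0, 3.0],
        vec![4.0, 5.0],
        vec![6.0],
    ];
    let partials: Vec<Partial> =
        partitions.iter().map(|p| executor_aggregate(p)).collect();
    let avg = driver_merge(&partials);
    println!("global average = {}", avg); // (1+2+3+4+5+6)/6 = 3.5
}
```

Note that this works because sum and count are decomposable: merging partials gives the same answer as aggregating all rows in one place, which is what makes the executor/driver split possible.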


Here are the commands being run, with some explanation:

# create a cluster with 12 executors
cargo run --bin ballista -- create-cluster --name nyctaxi --num-executors 12 --template examples/nyctaxi/templates/executor.yaml

# check status
kubectl get pods

# run the nyctaxi example application, which executes queries using the executors
cargo run --bin ballista -- run --name nyctaxi --template examples/nyctaxi/templates/application.yaml

# check status again to find the name of the application pod
kubectl get pods

# watch progress of the application
kubectl logs -f ballista-nyctaxi-app-n5kxl

PoC Status

  • README describing project
  • Define service and minimal query plan in protobuf file
  • Generate code from protobuf file
  • Implement skeleton gRPC server
  • Implement skeleton gRPC client
  • Client can send query plan
  • Server can receive query plan
  • Server can translate protobuf query plan to DataFusion query plan
  • Server can execute query plan using DataFusion
  • Create Dockerfile for server
  • Ballista CLI - create cluster
  • Ballista CLI - delete cluster
  • Ballista CLI - run application
  • Simple example application works end to end
  • Add support for aggregate queries
  • Server can return Arrow data back to the application (in CSV format for now)
  • Example application can aggregate the aggregate results from each partition/node
  • Write blog post and announce Ballista
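As noted in the list above, results currently travel back to the application as CSV rather than as Arrow data. A minimal, std-only sketch of that interim encoding (the `to_csv` helper is hypothetical; the real server builds its output from Arrow record batches and does not quote or escape this naively):

```rust
/// Encode a header and rows of string fields as CSV text.
/// Fields are assumed not to contain commas or newlines, so no
/// quoting/escaping is performed -- good enough for numeric results.
fn to_csv(header: &[&str], rows: &[Vec<String>]) -> String {
    let mut out = String::new();
    out.push_str(&header.join(","));
    out.push('\n');
    for row in rows {
        out.push_str(&row.join(","));
        out.push('\n');
    }
    out
}

fn main() {
    // A tiny result set shaped like the nyctaxi aggregate output.
    let header = ["passenger_count", "max_fare"];
    let rows = vec![
        vec!["1".to_string(), "12.5".to_string()],
        vec!["2".to_string(), "30.0".to_string()],
    ];
    print!("{}", to_csv(&header, &rows));
}
```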

v1.0.0 Plan

  • Distributed query planner
  • Implement support for all DataFusion logical plan and expressions
  • Server can write results to CSV files
  • Server can write results to Parquet files
  • Implement Flight protocol
  • Support user code as part of distributed query execution
  • Interactive SQL support
  • Java bindings (supporting Java, Kotlin, Scala)


See CONTRIBUTING.md for information on contributing to this project.
