Big Data or Graph framework execution on PBS Scheduler.

Big Data or graph frameworks on PBS


This project contains tooling and documentation for launching Spark, Flink, or Dask on a PBS cluster that has access to a shared file system.

The mechanism is always the same:

  1. qsub a PBS job that reserves several chunks using the select option
  2. Use the pbsdsh command to start the scheduler and workers on the reserved chunks
  3. Either wait for a qdel or submit a given application to the newly started cluster.
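The three steps above can be sketched as a single PBS job script. This is only an illustrative outline: the resource values and the script names (start-scheduler.sh, start-worker.sh, run-application.sh) are hypothetical placeholders, not part of this repository.

```shell
#!/bin/bash
# Step 1: this script is submitted with qsub; the select directive
# below reserves 4 chunks of 8 cores each.
#PBS -l select=4:ncpus=8:mem=16G
#PBS -l walltime=01:00:00

# Step 2: pbsdsh starts the scheduler on chunk 0 and a worker on
# each remaining chunk, all in the background.
pbsdsh -n 0 start-scheduler.sh &
for i in 1 2 3; do
    pbsdsh -n "$i" start-worker.sh &
done

# Step 3: run the application against the freshly started cluster;
# when it finishes, the job ends and the reservation is released.
run-application.sh
```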

See how it works and how to use the provided tooling in each framework directory (spark/ and dask/).

Quick example

These tools are designed to be easy to use: once they are downloaded onto your cluster, you can start a Spark application as easily as this:

#!/bin/bash
#PBS -N spark-cluster-path
#PBS -l select=9:ncpus=4:mem=20G
#PBS -l walltime=01:00:00

# Qsub template for CNES HAL
# Scheduler: PBS

export JAVA_HOME=/work/logiciels/rhall/jdk/1.8.0_112
export SPARK_HOME=/work/logiciels/rhall/spark/2.2.1

$PBS_O_WORKDIR/pbs-launch-spark -n 4 -m "18000M" $SPARK_HOME/examples/src/main/python/ $SPARK_HOME/conf/
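Submitting the script above is then a single qsub call; the filename used here is a hypothetical example.

```shell
# Submit the job script to PBS; qsub prints the assigned job id
qsub spark-cluster.pbs

# Check its state in the queue (Q = queued, R = running)
qstat -u "$USER"
```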


This project has been tested on the HPC cluster of CNES (Centre National d'Études Spatiales, the French Space Agency). Feel free to open an issue to ask for a correction or for help; we will be glad to assist.