MWT-26 #576

Draft · wants to merge 60 commits into base: master

Changes from 4 commits · 60 commits total
ef3c10d
Add visual separations for the script's sections.
mpereira Sep 11, 2019
70579ca
Make container name be dependent on test parameter file name.
mpereira Sep 11, 2019
9564118
Check package repos every time, create Marathon group.
mpereira Sep 11, 2019
287a9b9
Use container name for docker image tag and run directory.
mpereira Sep 12, 2019
04717bd
Automatically format some files with Black.
mpereira Oct 7, 2019
9fbca5b
Parameterize group file name, run commands in container.
mpereira Oct 7, 2019
0e856a2
Run commands in the container.
mpereira Oct 7, 2019
d9a7e96
Quote this.
mpereira Oct 7, 2019
585beac
Make deploy-dispatchers.py script support group roles.
mpereira Oct 7, 2019
c4d870f
Show CLUSTER_URL in pre-run report.
mpereira Oct 7, 2019
cd95b27
Add group_role support to streaming workload deploy script.
mpereira Oct 7, 2019
5c1c912
Make sure service name is prefixed with a slash.
mpereira Oct 7, 2019
f40351a
Add group_role support to the batch_test.py script.
mpereira Oct 7, 2019
0b25ba5
Add required roles and permissions required for group role enforcement.
mpereira Oct 7, 2019
0a2f2d1
Add spark-options.json file.
mpereira Oct 7, 2019
2bd9407
Typo.
mpereira Oct 8, 2019
73b5824
Add CLI parameter description.
mpereira Oct 9, 2019
bf0fcf0
Remove failing jobs stuff.
mpereira Oct 9, 2019
0025ef4
Fix shellcheck warning.
mpereira Oct 9, 2019
50b56ef
Variable renames, shellcheck fixes, DSEngine, total cpu/mem/gpu.
mpereira Oct 9, 2019
b849f5d
GROUP_NAME should be coming with no slash prefix.
mpereira Oct 9, 2019
2f1075b
Create quota if it doesn't exit.
mpereira Oct 9, 2019
222cbe2
Install recent DC/OS CLI.
mpereira Oct 10, 2019
464a17b
Fix quoting.
mpereira Oct 10, 2019
af9129e
Should be group name here.
mpereira Oct 10, 2019
c2daf5e
Make revoke_permissions() also take role list.
mpereira Oct 10, 2019
65b3fd9
Fix indentation.
mpereira Oct 10, 2019
bec6c9f
Make DC/OS CLI binary a parameter.
mpereira Oct 10, 2019
5b34f05
Don't break out of loop, just skip to the next element.
mpereira Oct 10, 2019
3cd62ec
This was breaking.
mpereira Oct 10, 2019
b9fa2ec
Add script to list service tasks.
mpereira Oct 10, 2019
1931abb
service_options was being used before being set.
mpereira Oct 10, 2019
956200c
Fix environment variable name.
mpereira Oct 10, 2019
976a775
installing jupiter
alexeygorobets Oct 10, 2019
5d25c89
Improve DSEngine workload deployment script and options.
mpereira Oct 10, 2019
78904b7
rename dsengine options to dsengine-options.jso
alexeygorobets Oct 11, 2019
275da0f
rename dsengine options to dsengine-options.json
alexeygorobets Oct 11, 2019
c52f1ed
add MWT-25 dry run and MWT-25 configs
alexeygorobets Sep 14, 2020
ae341c2
update cluster url for MWT26
alexeygorobets Sep 14, 2020
b474d78
update kafka version to 2.10.0-5.5.1-beta
alexeygorobets Sep 14, 2020
6cd6997
enable exernal volumes on kafka brokers
alexeygorobets Sep 14, 2020
030a3cf
update confluent-zk version to 2.8.0-5.5.1-beta
alexeygorobets Sep 14, 2020
ba74e60
update Cassandra to 2.10.0-3.11.6-beta
alexeygorobets Sep 14, 2020
05743fa
add external volumes to cassandra options
alexeygorobets Sep 14, 2020
610e6a3
use latest spark 2.11.0-2.4.6
rishabh96b Sep 29, 2020
6888ecc
update spark stub to universe-converter
rishabh96b Sep 29, 2020
3a4b36e
Merge branch 'dcos-58437-deploy-workloads-under-role-enforced-group' …
alexeygorobets Sep 29, 2020
6ffb082
Adds data science engine configs
farhan5900 Sep 29, 2020
288b97a
update configs for dry run
alexeygorobets Sep 29, 2020
853dbe5
use root user for services
alexeygorobets Sep 29, 2020
f087a33
Merge remote-tracking branch 'origin/mwt-26' into mwt-26
alexeygorobets Sep 29, 2020
c8e8fdc
Grant task:user:root permissions on master and agent for DSE.
kaiwalyajoshi Sep 29, 2020
7d85cbb
install DSE with other services
alexeygorobets Sep 30, 2020
8b71331
update dsengine options
alexeygorobets Sep 30, 2020
a8c6e56
Merge branch 'mwt-26' of https://github.com/mesosphere/spark-build in…
alexeygorobets Sep 30, 2020
083ebb9
fix DSE permissions
alexeygorobets Oct 1, 2020
2750a25
add MWT 26 config
alexeygorobets Oct 1, 2020
6032efa
not using GPU in DSE
alexeygorobets Oct 1, 2020
aaace06
fix marathon check for batched job
alexeygorobets Oct 1, 2020
d34af99
use right branch in batched workload for spark
alexeygorobets Oct 1, 2020
168 changes: 168 additions & 0 deletions scale-tests/configs/2020-05-14-mwt25dr.env
@@ -0,0 +1,168 @@
# Depends on:
# - TEST_NAME
# - TEST_S3_BUCKET
# - TEST_S3_FOLDER
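The three variables listed above must be exported before this file is sourced, since several values below are derived from them. A minimal sketch with hypothetical placeholder values (the real names come from the MWT run being configured):

```shell
#!/bin/sh
# Hypothetical values for illustration only.
export TEST_NAME="mwt25dr"
export TEST_S3_BUCKET="example-bucket"
export TEST_S3_FOLDER="runs/mwt25dr"

# The config file then derives names from them, e.g.:
GROUP_NAME="${TEST_NAME}"
SERVICE_NAMES_PREFIX="${TEST_NAME}/"

echo "group=${GROUP_NAME} prefix=${SERVICE_NAMES_PREFIX}"
```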

# Workload configuration #######################################################
#
# Total CPU quota: 88
# Total MEM quota: 200000
# Total GPU quota: 40

CLUSTER_URL="https://mw25dry.scaletesting.mesosphe.re/"
SECURITY="strict"

DCOS_CLI_URL="https://downloads.dcos.io/cli/releases/binaries/dcos/linux/x86-64/latest/dcos"

# Test configuration ###########################################################

SHOULD_INSTALL_INFRASTRUCTURE=true
SHOULD_INSTALL_NON_GPU_DISPATCHERS=true
SHOULD_INSTALL_GPU_DISPATCHERS=false
SHOULD_RUN_FINITE_STREAMING_JOBS=true
SHOULD_RUN_INFINITE_STREAMING_JOBS=true
SHOULD_RUN_BATCH_JOBS=true
SHOULD_RUN_GPU_BATCH_JOBS=false
SHOULD_UNINSTALL_INFRASTRUCTURE_AT_THE_END=false

# Infrastructure configuration #################################################

GROUP_NAME="${TEST_NAME}"

SERVICE_NAMES_PREFIX="${TEST_NAME}/"
INFRASTRUCTURE_OUTPUT_FILE="infrastructure.json"

KAFKA_CLUSTER_COUNT=1
CASSANDRA_CLUSTER_COUNT=1

ZOOKEEPER_CPUS=10
ZOOKEEPER_MEM=20000
ZOOKEEPER_CONFIG='scale-tests/configs/kafka-zookeeper-options.json'
# Note: empty package repo values will default to latest Universe packages.
ZOOKEEPER_PACKAGE_REPO=
# 2.7.0-5.1.2e from the Universe.

KAFKA_CPUS=10
KAFKA_MEM=20000
KAFKA_CONFIG='scale-tests/configs/kafka-options.json'
# Note: empty package repo values will default to latest Universe packages.
KAFKA_PACKAGE_REPO=
# 2.9.0-5.4.0 from the Universe.

CASSANDRA_CPUS=10
CASSANDRA_MEM=20000
CASSANDRA_CONFIG='scale-tests/configs/cassandra-options.json'
# Note: empty package repo values will default to latest Universe packages.
CASSANDRA_PACKAGE_REPO=
# 2.9.0-3.11.6 from the Universe.

# DSEngine configuration #######################################################

DSENGINE_CPUS=10
DSENGINE_MEM=20000
DSENGINE_GPUS=40
DSENGINE_PACKAGE_REPO=

# Spark configuration ##########################################################

SPARK_CONFIG='scale-tests/configs/spark-options.json'

# Note: empty package repo values will default to latest Universe packages.
# Spark version 2.10.0-2.4.5
SPARK_PACKAGE_REPO=https://infinity-artifacts.s3.amazonaws.com/permanent/spark/2.10.0-2.4.5/stub-universe-spark.json

# Note: leaving the Spark executor Docker image empty so that executors inherit
# the image used for dispatchers.
SPARK_EXECUTOR_DOCKER_IMAGE=

# Non-GPU Spark dispatchers configuration ######################################

# Not currently used.
BATCH_MAX_NON_GPU_JOBS=30

SPARK_NON_GPU_DISPATCHERS=3
SPARK_NON_GPU_DISPATCHERS_OUTPUT_FILE="non-gpu-dispatchers.out"
# Note: this name is built internally by the deploy-dispatchers.py script.
SPARK_NON_GPU_DISPATCHERS_JSON_OUTPUT_FILE="${SPARK_NON_GPU_DISPATCHERS_OUTPUT_FILE}-dispatchers.json"
# Note: driver resources used per dispatcher (1 dispatcher will be able to run
# 8 drivers since each driver requires 1 CPU).
SPARK_NON_GPU_QUOTA_DRIVERS_CPUS=8
SPARK_NON_GPU_QUOTA_DRIVERS_MEM=20000
# Note: executor resources used per job (1 driver will run 1 job).
SPARK_NON_GPU_QUOTA_EXECUTORS_CPUS=8
SPARK_NON_GPU_QUOTA_EXECUTORS_MEM=20000
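The "Total CPU quota: 88" figure in the header follows from the values above: 10 CPUs each for ZooKeeper, Kafka, Cassandra, and DSEngine, plus 3 dispatchers at 8 driver CPUs and 8 executor CPUs apiece. A sketch of the arithmetic:

```shell
#!/bin/sh
# ZooKeeper + Kafka + Cassandra + DSEngine, 10 CPUs each.
INFRA_CPUS=$((10 + 10 + 10 + 10))
# Per dispatcher: driver quota CPUs + executor quota CPUs.
DISPATCHERS=3
PER_DISPATCHER_CPUS=$((8 + 8))
TOTAL_CPUS=$((INFRA_CPUS + DISPATCHERS * PER_DISPATCHER_CPUS))
echo "$TOTAL_CPUS"  # matches the "Total CPU quota: 88" header
```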

# GPU Spark dispatchers configuration ##########################################

# Not currently used.
BATCH_MAX_GPU_JOBS=2

SPARK_GPU_DISPATCHERS=0
SPARK_GPU_DISPATCHERS_OUTPUT_FILE="gpu-dispatchers.out"
SPARK_GPU_DISPATCHERS_JSON_OUTPUT_FILE="${SPARK_GPU_DISPATCHERS_OUTPUT_FILE}-dispatchers.json" # NOTE: this name is built internally by the deploy-dispatchers.py script.
SPARK_GPU_QUOTA_DRIVERS_CPUS=
SPARK_GPU_QUOTA_DRIVERS_MEM=
SPARK_GPU_QUOTA_DRIVERS_GPUS=
SPARK_GPU_QUOTA_EXECUTORS_CPUS=
SPARK_GPU_QUOTA_EXECUTORS_MEM=
SPARK_GPU_QUOTA_EXECUTORS_GPUS=

# Common streaming jobs configuration ##########################################

TEST_ASSEMBLY_JAR_URL='http://infinity-artifacts.s3.amazonaws.com/scale-tests/dcos-spark-scala-tests-assembly-2.4.0-20190325.jar'
DISPATCHERS_JSON_OUTPUT_FILE="all-dispatchers.json"

# Finite streaming jobs configuration ##########################################

STREAMING_FINITE_SUBMISSIONS_OUTPUT_FILE="finite-submissions.out"
STREAMING_FINITE_PRODUCERS_PER_KAFKA="${SPARK_NON_GPU_DISPATCHERS}" # 1 Kafka and 3 dispatchers -> 3 producers.
STREAMING_FINITE_CONSUMERS_PER_PRODUCER=1 # 3 producers -> 3 consumers.
# 3 producers + 3 consumers = 6 total finite streaming jobs
STREAMING_FINITE_PRODUCER_NUMBER_OF_WORDS=7692
STREAMING_FINITE_PRODUCER_WORDS_PER_SECOND=1
# 7692 words / 1 word per second -> ~2h runtime.
STREAMING_FINITE_PRODUCER_SPARK_CORES_MAX=2
STREAMING_FINITE_PRODUCER_SPARK_EXECUTOR_CORES=2
STREAMING_FINITE_CONSUMER_BATCH_SIZE_SECONDS=10
STREAMING_FINITE_CONSUMER_SPARK_CORES_MAX=1
STREAMING_FINITE_CONSUMER_SPARK_EXECUTOR_CORES=1
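The job-count and runtime comments above can be checked directly: 1 Kafka cluster times 3 producers per Kafka gives 3 producers, 1 consumer per producer gives 3 consumers (6 finite streaming jobs total), and 7692 words at 1 word per second is roughly 2 hours. A sketch:

```shell
#!/bin/sh
KAFKAS=1
PRODUCERS_PER_KAFKA=3            # STREAMING_FINITE_PRODUCERS_PER_KAFKA
CONSUMERS_PER_PRODUCER=1         # STREAMING_FINITE_CONSUMERS_PER_PRODUCER
PRODUCERS=$((KAFKAS * PRODUCERS_PER_KAFKA))
CONSUMERS=$((PRODUCERS * CONSUMERS_PER_PRODUCER))
TOTAL_JOBS=$((PRODUCERS + CONSUMERS))
# 7692 words / 1 word per second, in seconds and whole hours.
RUNTIME_SECONDS=$((7692 / 1))
RUNTIME_HOURS=$((RUNTIME_SECONDS / 3600))
echo "jobs=$TOTAL_JOBS runtime_s=$RUNTIME_SECONDS (~${RUNTIME_HOURS}h)"
```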

# Infinite streaming jobs configuration ########################################

STREAMING_INFINITE_SUBMISSIONS_OUTPUT_FILE="infinite-submissions.out"
STREAMING_INFINITE_PRODUCERS_PER_KAFKA="${SPARK_NON_GPU_DISPATCHERS}" # 1 Kafka and 3 dispatchers -> 3 producers.
STREAMING_INFINITE_CONSUMERS_PER_PRODUCER=1 # 3 producers -> 3 consumers.
# 3 producers + 3 consumers = 6 total infinite streaming jobs
STREAMING_INFINITE_PRODUCER_NUMBER_OF_WORDS=0
STREAMING_INFINITE_PRODUCER_WORDS_PER_SECOND=1
STREAMING_INFINITE_PRODUCER_SPARK_CORES_MAX=2
STREAMING_INFINITE_PRODUCER_SPARK_EXECUTOR_CORES=2
STREAMING_INFINITE_CONSUMER_BATCH_SIZE_SECONDS=10
STREAMING_INFINITE_CONSUMER_SPARK_CORES_MAX=1
STREAMING_INFINITE_CONSUMER_SPARK_EXECUTOR_CORES=1

# Batch jobs configuration #####################################################

SPARK_NON_GPU_DISPATCHERS_JSON_OUTPUT_FILE_URL="https://${TEST_S3_BUCKET}.s3.amazonaws.com/${TEST_S3_FOLDER}/${SPARK_NON_GPU_DISPATCHERS_JSON_OUTPUT_FILE}"

BATCH_APP_ID="/${SERVICE_NAMES_PREFIX}batch-workload"
BATCH_SCRIPT_CPUS=6
BATCH_SCRIPT_MEM=12288
BATCH_SUBMITS_PER_MIN=13
# TODO: update to master for the next MWT.
BATCH_SPARK_BUILD_BRANCH="dcos-58437-deploy-workloads-under-role-enforced-group"

# Batch GPU jobs configuration #################################################

SPARK_GPU_DISPATCHERS_JSON_OUTPUT_FILE_URL="https://${TEST_S3_BUCKET}.s3.amazonaws.com/${TEST_S3_FOLDER}/${SPARK_GPU_DISPATCHERS_JSON_OUTPUT_FILE}"

GPU_APP_ID="/${SERVICE_NAMES_PREFIX}gpu-batch-workload"
GPU_SCRIPT_CPUS=2
GPU_SCRIPT_MEM=4096
GPU_DOCKER_IMAGE='samvantran/spark-dcos-gpu:metrics'
GPU_SUBMITS_PER_MIN=5
GPU_MAX_DISPATCHERS=${SPARK_GPU_DISPATCHERS}
GPU_SPARK_CORES_MAX=4
GPU_SPARK_MESOS_EXECUTOR_GPUS=4
GPU_SPARK_MESOS_MAX_GPUS=4
GPU_SPARK_BUILD_BRANCH=master
168 changes: 168 additions & 0 deletions scale-tests/configs/2020-05-20-mwt25.env
@@ -0,0 +1,168 @@
# Depends on:
# - TEST_NAME
# - TEST_S3_BUCKET
# - TEST_S3_FOLDER

# Workload configuration #######################################################
#
# Total CPU quota: 2290
# Total MEM quota: 4580000
# Total GPU quota: 40

CLUSTER_URL="https://mw25dry.scaletesting.mesosphe.re/"
SECURITY="strict"

DCOS_CLI_URL="https://downloads.dcos.io/cli/releases/binaries/dcos/linux/x86-64/latest/dcos"

# Test configuration ###########################################################

SHOULD_INSTALL_INFRASTRUCTURE=true
SHOULD_INSTALL_NON_GPU_DISPATCHERS=true
SHOULD_INSTALL_GPU_DISPATCHERS=false
SHOULD_RUN_FINITE_STREAMING_JOBS=true
SHOULD_RUN_INFINITE_STREAMING_JOBS=true
SHOULD_RUN_BATCH_JOBS=true
SHOULD_RUN_GPU_BATCH_JOBS=false
SHOULD_UNINSTALL_INFRASTRUCTURE_AT_THE_END=false

# Infrastructure configuration #################################################

GROUP_NAME="${TEST_NAME}"

SERVICE_NAMES_PREFIX="${TEST_NAME}/"
INFRASTRUCTURE_OUTPUT_FILE="infrastructure.json"

KAFKA_CLUSTER_COUNT=1
CASSANDRA_CLUSTER_COUNT=1

ZOOKEEPER_CPUS=10
ZOOKEEPER_MEM=20000
ZOOKEEPER_CONFIG='scale-tests/configs/kafka-zookeeper-options.json'
# Note: empty package repo values will default to latest Universe packages.
ZOOKEEPER_PACKAGE_REPO=
# 2.7.0-5.1.2e from the Universe.

KAFKA_CPUS=10
KAFKA_MEM=20000
KAFKA_CONFIG='scale-tests/configs/kafka-options.json'
# Note: empty package repo values will default to latest Universe packages.
KAFKA_PACKAGE_REPO=
# 2.9.0-5.4.0 from the Universe.

CASSANDRA_CPUS=10
CASSANDRA_MEM=20000
CASSANDRA_CONFIG='scale-tests/configs/cassandra-options.json'
# Note: empty package repo values will default to latest Universe packages.
CASSANDRA_PACKAGE_REPO=
# 2.9.0-3.11.6 from the Universe.

# DSEngine configuration #######################################################

DSENGINE_CPUS=10
DSENGINE_MEM=20000
DSENGINE_GPUS=40
DSENGINE_PACKAGE_REPO=

# Spark configuration ##########################################################

SPARK_CONFIG='scale-tests/configs/spark-options.json'

# Note: empty package repo values will default to latest Universe packages.
# Spark version 2.10.0-2.4.5
SPARK_PACKAGE_REPO=https://infinity-artifacts.s3.amazonaws.com/permanent/spark/2.10.0-2.4.5/stub-universe-spark.json

# Note: leaving the Spark executor Docker image empty so that executors inherit
# the image used for dispatchers.
SPARK_EXECUTOR_DOCKER_IMAGE=

# Non-GPU Spark dispatchers configuration ######################################

# Not currently used.
BATCH_MAX_NON_GPU_JOBS=1000

SPARK_NON_GPU_DISPATCHERS=50
SPARK_NON_GPU_DISPATCHERS_OUTPUT_FILE="non-gpu-dispatchers.out"
# Note: this name is built internally by the deploy-dispatchers.py script.
SPARK_NON_GPU_DISPATCHERS_JSON_OUTPUT_FILE="${SPARK_NON_GPU_DISPATCHERS_OUTPUT_FILE}-dispatchers.json"
# Note: driver resources used per dispatcher (1 dispatcher will be able to run
# 20 drivers since each driver requires 1 CPU).
SPARK_NON_GPU_QUOTA_DRIVERS_CPUS=20
SPARK_NON_GPU_QUOTA_DRIVERS_MEM=50000
# Note: executor resources used per job (1 driver will run 1 job).
SPARK_NON_GPU_QUOTA_EXECUTORS_CPUS=25
SPARK_NON_GPU_QUOTA_EXECUTORS_MEM=40000

# GPU Spark dispatchers configuration ##########################################

# Not currently used.
BATCH_MAX_GPU_JOBS=10

SPARK_GPU_DISPATCHERS=0
SPARK_GPU_DISPATCHERS_OUTPUT_FILE="gpu-dispatchers.out"
SPARK_GPU_DISPATCHERS_JSON_OUTPUT_FILE="${SPARK_GPU_DISPATCHERS_OUTPUT_FILE}-dispatchers.json" # NOTE: this name is built internally by the deploy-dispatchers.py script.
SPARK_GPU_QUOTA_DRIVERS_CPUS=
SPARK_GPU_QUOTA_DRIVERS_MEM=
SPARK_GPU_QUOTA_DRIVERS_GPUS=
SPARK_GPU_QUOTA_EXECUTORS_CPUS=
SPARK_GPU_QUOTA_EXECUTORS_MEM=
SPARK_GPU_QUOTA_EXECUTORS_GPUS=
SPARK_GPU_REMOVE_EXECUTORS_ROLES_QUOTAS=true

# Common streaming jobs configuration ##########################################

TEST_ASSEMBLY_JAR_URL='http://infinity-artifacts.s3.amazonaws.com/scale-tests/dcos-spark-scala-tests-assembly-2.4.0-20190325.jar'
DISPATCHERS_JSON_OUTPUT_FILE="all-dispatchers.json"

# Finite streaming jobs configuration ##########################################

STREAMING_FINITE_SUBMISSIONS_OUTPUT_FILE="finite-submissions.out"
STREAMING_FINITE_PRODUCERS_PER_KAFKA="${SPARK_NON_GPU_DISPATCHERS}" # 1 Kafka and 50 dispatchers -> 50 producers.
STREAMING_FINITE_CONSUMERS_PER_PRODUCER=1 # 50 producers -> 50 consumers.
# 50 producers + 50 consumers = 100 total finite streaming jobs
STREAMING_FINITE_PRODUCER_NUMBER_OF_WORDS=7692
STREAMING_FINITE_PRODUCER_WORDS_PER_SECOND=1
# 7692 words / 1 word per second -> ~2h runtime.
STREAMING_FINITE_PRODUCER_SPARK_CORES_MAX=2
STREAMING_FINITE_PRODUCER_SPARK_EXECUTOR_CORES=2
STREAMING_FINITE_CONSUMER_BATCH_SIZE_SECONDS=10
STREAMING_FINITE_CONSUMER_SPARK_CORES_MAX=1
STREAMING_FINITE_CONSUMER_SPARK_EXECUTOR_CORES=1

# Infinite streaming jobs configuration ########################################

STREAMING_INFINITE_SUBMISSIONS_OUTPUT_FILE="infinite-submissions.out"
STREAMING_INFINITE_PRODUCERS_PER_KAFKA="${SPARK_NON_GPU_DISPATCHERS}" # 1 Kafka and 50 dispatchers -> 50 producers.
STREAMING_INFINITE_CONSUMERS_PER_PRODUCER=1 # 50 producers -> 50 consumers.
# 50 producers + 50 consumers = 100 total infinite streaming jobs
STREAMING_INFINITE_PRODUCER_NUMBER_OF_WORDS=0
STREAMING_INFINITE_PRODUCER_WORDS_PER_SECOND=1
STREAMING_INFINITE_PRODUCER_SPARK_CORES_MAX=2
STREAMING_INFINITE_PRODUCER_SPARK_EXECUTOR_CORES=2
STREAMING_INFINITE_CONSUMER_BATCH_SIZE_SECONDS=10
STREAMING_INFINITE_CONSUMER_SPARK_CORES_MAX=1
STREAMING_INFINITE_CONSUMER_SPARK_EXECUTOR_CORES=1

# Batch jobs configuration #####################################################

SPARK_NON_GPU_DISPATCHERS_JSON_OUTPUT_FILE_URL="https://${TEST_S3_BUCKET}.s3.amazonaws.com/${TEST_S3_FOLDER}/${SPARK_NON_GPU_DISPATCHERS_JSON_OUTPUT_FILE}"

BATCH_APP_ID="/${SERVICE_NAMES_PREFIX}batch-workload"
BATCH_SCRIPT_CPUS=6
BATCH_SCRIPT_MEM=12288
BATCH_SUBMITS_PER_MIN=13
# TODO: update to master for the next MWT.
BATCH_SPARK_BUILD_BRANCH="dcos-58437-deploy-workloads-under-role-enforced-group"

# Batch GPU jobs configuration #################################################

SPARK_GPU_DISPATCHERS_JSON_OUTPUT_FILE_URL="https://${TEST_S3_BUCKET}.s3.amazonaws.com/${TEST_S3_FOLDER}/${SPARK_GPU_DISPATCHERS_JSON_OUTPUT_FILE}"

GPU_APP_ID="/${SERVICE_NAMES_PREFIX}gpu-batch-workload"
GPU_SCRIPT_CPUS=2
GPU_SCRIPT_MEM=4096
GPU_DOCKER_IMAGE='samvantran/spark-dcos-gpu:metrics'
GPU_SUBMITS_PER_MIN=5
GPU_MAX_DISPATCHERS=${SPARK_GPU_DISPATCHERS}
GPU_SPARK_CORES_MAX=4
GPU_SPARK_MESOS_EXECUTOR_GPUS=4
GPU_SPARK_MESOS_MAX_GPUS=4
GPU_SPARK_BUILD_BRANCH=master