
[SPARK-19320][MESOS][WIP] Allow specifying a hard limit on number of gpus required in each spark executor when running on mesos #17979

Closed. Wants to merge 8 commits.

Conversation

yanji84

@yanji84 yanji84 commented May 15, 2017

What changes were proposed in this pull request?

Currently, Spark only allows specifying overall GPU resources as an upper limit. This adds a new conf parameter for specifying a hard limit on the number of GPUs for each executor, while still respecting the overall GPU resource constraint.
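A minimal sketch of the proposed semantics, not the actual patch: Spark's existing `spark.mesos.gpus.max` acts as a best-effort overall cap, and the new conf would make each executor require a fixed number of GPUs. The function name and parameters below are illustrative assumptions.

```python
# Illustrative sketch only, not the code in this PR.
# gpus_per_executor plays the role of the new hard-limit conf;
# gpus_max mirrors the existing overall cap (spark.mesos.gpus.max).

def gpus_to_allocate(offered_gpus, gpus_per_executor, gpus_max, gpus_used):
    """Return how many GPUs to take from a Mesos offer for one new executor.

    With a hard per-executor limit set, an executor launches only if the
    offer holds at least that many GPUs and the overall cap still allows
    it; otherwise the offer is declined (return 0).
    """
    remaining = gpus_max - gpus_used
    if gpus_per_executor > 0:
        if offered_gpus >= gpus_per_executor and remaining >= gpus_per_executor:
            return gpus_per_executor
        return 0
    # Without a per-executor limit, keep the old best-effort behavior:
    # take whatever the offer holds, up to the remaining overall cap.
    return min(offered_gpus, remaining)
```

Under this reading, an offer with fewer GPUs than the hard limit launches no executor at all, rather than launching one with a partial GPU allocation.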

How was this patch tested?

Unit testing


@AmplabJenkins

Can one of the admins verify this patch?

@grisaitis

@yanji84 how is this different from your other PR, #17235?

I'm really interested in using this. I'll try testing it out on a GPU-enabled mesos cluster in the coming week.

@HyukjinKwon
Member

@yanji84, is this PR active? If so, could you answer the question above?

@kaluzniacki

I am looking for a way to assign one GPU per RDD partition. I'll look at this patch in more detail, but what I am fighting now is that all GPUs on one executor get assigned to a single Mesos task rather than one GPU per Mesos task, and I think a Mesos task maps to a partition.
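The mismatch described above can be sketched as follows. These helpers are illustrative only, not Spark or Mesos APIs:

```python
# Illustrative only: contrasts the observed assignment (all of an
# executor's GPUs on one Mesos task) with the desired one-GPU-per-task
# assignment. Neither function is a real Spark/Mesos API.

def all_gpus_to_first_task(gpu_ids, task_ids):
    """Observed behavior: every GPU on the executor lands on one task."""
    assignment = {t: [] for t in task_ids}
    assignment[task_ids[0]] = list(gpu_ids)
    return assignment

def one_gpu_per_task(gpu_ids, task_ids):
    """Desired behavior: each Mesos task (roughly one RDD partition)
    receives exactly one GPU."""
    return {t: [g] for t, g in zip(task_ids, gpu_ids)}
```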

@yanji84
Author

yanji84 commented Mar 21, 2018

We are really in need of this. Can we reopen this?

@susanxhuynh
Contributor

I'm interested in this PR, too. Who has permission to reopen this? cc @HyukjinKwon @yanji84

@felixcheung
Member

@yanji84, you are the author; do you see the option to reopen this PR on github.com?
If not, feel free to open a new PR and @ me there.

7 participants