
[SPARK-20242][Web UI] Add spark.ui.stopDelay #17551

Closed

Conversation

@barnardb (Contributor) commented Apr 6, 2017

What changes were proposed in this pull request?

Adds a spark.ui.stopDelay configuration property that can be used to keep the UI running when an application has finished. This is very useful for debugging, especially when the driver application is running remotely.

How was this patch tested?

This patch was tested manually. E.g., here's a screenshot from `bin/spark-submit run-example --conf spark.ui.stopDelay=30s SparkPi 100`:

[screenshot: the Spark UI still serving pages after the example run finished]

Commit message: "This property can be used to keep the UI running when an application has finished. This can be very useful for debugging."
@srowen (Member) commented Apr 6, 2017

I would oppose this change. This is what the history server is for, along with profiling tools or the simple debugging mechanisms you allude to. It doesn't belong as yet another flag in Spark.

@AmplabJenkins commented

Can one of the admins verify this patch?

@jerryshao (Contributor) commented

Agree with @srowen; the proposed solution overlaps key functionality of the history server. Usually we should stop the app and release its resources as soon as the application finishes. This solution unnecessarily blocks the whole stop chain and could introduce issues when trying to visit objects that have already been stopped.

@ajbozarth (Member) commented

I agree with @srowen and @jerryshao; this is what the history server is for.

@barnardb (Contributor, Author) commented Apr 6, 2017

Our use case involves jobs running in a remote cluster without a Spark master. I agree that the history server is the better way to solve this, but we'd like a solution that doesn't depend on a distributed filesystem. Would you be willing to consider a pull request for https://issues.apache.org/jira/browse/SPARK-19802 (sending events to a remote history server) instead?

@vanzin (Contributor) commented Apr 6, 2017

> Our use case involves jobs running in a remote cluster without a Spark master.

It's still running your code, right? Why can't you add a configuration to your own code that tells it to wait some time before shutting down the SparkContext? That would achieve the same goal without changing Spark.
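The workaround vanzin suggests can live entirely in application code. A minimal sketch (the `spark.app.stopDelay` property name and the app skeleton are hypothetical, not part of Spark):

```scala
import org.apache.spark.sql.SparkSession

object ExampleApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("example").getOrCreate()

    // ... run the actual job here ...

    // Read an application-level delay, e.g. passed via
    // --conf spark.app.stopDelay=30000 (milliseconds); default to no delay.
    val delayMs = spark.conf.getOption("spark.app.stopDelay")
      .map(_.toLong)
      .getOrElse(0L)
    if (delayMs > 0) {
      // Keep the driver (and hence the web UI) alive before shutting down.
      Thread.sleep(delayMs)
    }
    spark.stop()
  }
}
```

Because the delay runs before `SparkSession.stop()`, the UI stays up without any change to Spark itself.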

@barnardb (Contributor, Author) commented Apr 6, 2017

> It's still running your code, right? Why can't you add a configuration to your own code that tells it to wait some time before shutting down the SparkContext?

We're trying to support arbitrary jobs running on the cluster, to make it easy for users to inspect the jobs they run there. This was a quick way to achieve that, but I agree with the other commenters that it's quite hacky and that the history server would be a nicer solution. Our problem with the history server right now is that while the current driver-side EventLoggingListener + history-server-side FsHistoryProvider implementations are great for environments with HDFS, they're much less convenient in a cluster without a distributed filesystem. I'd propose that I close this PR and work on an RPC-based listener/provider combination to use with the history server.

@jerryshao (Contributor) commented

@barnardb the HistoryServer is embedded into the Master process only in Spark standalone mode, for convenience, IIRC. You can always start a standalone HistoryServer process.

Also, FsHistoryProvider is not bound to HDFS; other Hadoop-compatible filesystems are supported, like WASB, S3, and other object stores that have a Hadoop-compatible FS layer. Even in your cluster environment (k8s), you probably have an object store. And at the least, you could implement a customized ApplicationHistoryProvider.
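For reference, the event-log/history-server wiring jerryshao describes amounts to a small configuration change; a sketch using Spark's standard properties (the `s3a://my-bucket/spark-events` path is a placeholder):

```
# Driver side (spark-defaults.conf): write event logs to an object store
spark.eventLog.enabled           true
spark.eventLog.dir               s3a://my-bucket/spark-events

# History server side: read event logs from the same location
spark.history.fs.logDirectory    s3a://my-bucket/spark-events
```

Starting the history server with `sbin/start-history-server.sh` then serves the UI for completed applications, with no distributed filesystem required beyond the object store.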

@maropu maropu mentioned this pull request Apr 23, 2017
maropu added a commit to maropu/spark that referenced this pull request Apr 23, 2017
@asfgit asfgit closed this in e9f9715 Apr 24, 2017
@barnardb barnardb deleted the ui-defer-stop-SPARK-20242 branch September 26, 2017 16:19
@barnardb barnardb restored the ui-defer-stop-SPARK-20242 branch September 26, 2017 16:19
@barnardb barnardb deleted the ui-defer-stop-SPARK-20242 branch September 26, 2017 16:20
peter-toth pushed a commit to peter-toth/spark that referenced this pull request Oct 6, 2018
This PR proposes to close stale PRs. Currently, we have 400+ open PRs, and some are stale: their JIRA tickets have already been closed, or their JIRA tickets do not exist (and they do not appear to be minor issues).

// Open PRs whose JIRA tickets have been already closed
Closes apache#11785
Closes apache#13027
Closes apache#13614
Closes apache#13761
Closes apache#15197
Closes apache#14006
Closes apache#12576
Closes apache#15447
Closes apache#13259
Closes apache#15616
Closes apache#14473
Closes apache#16638
Closes apache#16146
Closes apache#17269
Closes apache#17313
Closes apache#17418
Closes apache#17485
Closes apache#17551
Closes apache#17463
Closes apache#17625

// Open PRs whose JIRA tickets do not exist and that are not minor issues
Closes apache#10739
Closes apache#15193
Closes apache#15344
Closes apache#14804
Closes apache#16993
Closes apache#17040
Closes apache#15180
Closes apache#17238

N/A

Author: Takeshi Yamamuro <yamamuro@apache.org>

Closes apache#17734 from maropu/resolved_pr.

Change-Id: Id2e590aa7283fe5ac01424d30a40df06da6098b5
6 participants