
[SPARK-13776][WebUI]Limit the max number of acceptors and selectors for Jetty #11615

Closed

wants to merge 4 commits

Conversation

zsxwing
Member

@zsxwing zsxwing commented Mar 9, 2016

What changes were proposed in this pull request?

As each acceptor/selector in Jetty will use one thread, the number of threads should at least be the number of acceptors and selectors plus 1. Otherwise, the thread pool of the Jetty server may be exhausted by acceptors/selectors and not be able to respond to any request.

To avoid wasting threads, the PR limits the max number of acceptors and selectors and also updates the max thread number if necessary.

How was this patch tested?

Just made sure we don't break any existing tests.
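The core of the change can be sketched roughly as follows, reusing the Jetty 8 classes that appear in the diff excerpts later in this thread (SelectChannelConnector, QueuedThreadPool). This is a simplified illustration with a hypothetical configureJetty helper, not the exact patch:

import org.eclipse.jetty.server.{Connector, Server}
import org.eclipse.jetty.server.nio.SelectChannelConnector
import org.eclipse.jetty.util.thread.QueuedThreadPool

// Sketch only: cap each connector's acceptors and make sure the pool has enough
// threads for every acceptor, every selector, and at least one request handler.
def configureJetty(server: Server, connectors: Seq[Connector]): Unit = {
  // Each acceptor and each selector occupies one pool thread, so start from one
  // spare thread that can actually serve requests.
  var minThreads = 1
  connectors.foreach { c =>
    val connector = c.asInstanceOf[SelectChannelConnector]
    // Cap the acceptors at a small number; the Web UI sees few concurrent users.
    connector.setAcceptors(math.min(connector.getAcceptors, 8))
    // In this Jetty version the selector count mirrors the acceptor count.
    minThreads += connector.getAcceptors * 2
  }
  server.setConnectors(connectors.toArray)

  val pool = new QueuedThreadPool
  pool.setMaxThreads(math.max(pool.getMaxThreads, minThreads))
  pool.setDaemon(true)
  server.setThreadPool(pool)
}

With at most eight acceptors per connector, a single connector needs only 1 + 2 * 8 = 17 threads at minimum, so the pool maximum is only raised when it would otherwise be too small.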

@zsxwing zsxwing changed the title from "Add spark.ui.threads to set the max number of the UI threads" to "[SPARK-13776][WebUI]Add spark.ui.threads to set the max number of the UI threads" Mar 9, 2016
@zsxwing
Member Author

zsxwing commented Mar 9, 2016

Another option is to just hard-code the number of acceptors to a small number. I don't think there will be a lot of connections accessing the Web UI at the same time.

@SparkQA

SparkQA commented Mar 10, 2016

Test build #52768 has finished for PR 11615 at commit ce6cad3.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

server.setConnectors(connectors.toArray)

val pool = new QueuedThreadPool
var maxThreads = conf.getInt("spark.ui.threads", pool.getMaxThreads)
Member

Do we even need a config here? When would I otherwise want to control this? It seems like you want to always make the pool have at least acceptors+1 threads.

Member Author

I was worried that 1 thread for processing requests may not be enough. With a configuration, people would have a way to increase it.

@srowen what do you think about just hard-coding the max number of acceptors to a small number? Maybe just 8?

The default configuration of Jetty seems geared toward a web application that supports massive numbers of connections. However, the Spark Web UI doesn't need to support that many users, so creating so many threads is a waste.

Member

Agreed, if it's just for web UIs then I see no need for a large number of acceptors. I think your change is still needed to make sure the pool has enough threads in any event, though maybe the config is unnecessary.

@zsxwing
Member Author

zsxwing commented Mar 10, 2016

Updated to use a small number of acceptors.

// As each Acceptor will use one thread, the number of threads should at least be the number
// of acceptors plus 1. (See SPARK-13776)
var minThreads = 1
connectors.collect { case c: AbstractConnector => c }.foreach { c =>
Member

Nit: the Scala math object is math rather than Math, but I wouldn't change it just for that. I think this looks like the right kind of approach.
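As a standalone illustration of the nit (not part of the patch): lowercase math comes from the scala.math package object, while uppercase Math resolves to java.lang.Math. Both compile; the former is simply the usual Scala spelling.

// Both forms compile in Scala; math (scala.math) is the idiomatic spelling.
val viaScala = math.max(8, 1 + 2 * 8) // scala.math.max
val viaJava = Math.max(8, 1 + 2 * 8)  // java.lang.Math.max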

@SparkQA

SparkQA commented Mar 10, 2016

Test build #52848 has finished for PR 11615 at commit ea99768.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

val connector = c.asInstanceOf[SelectChannelConnector]
// Limit the max acceptor number to 8 so that we don't waste a lot of threads
connector.setAcceptors(math.min(connector.getAcceptors, 8))
// The number of selectors always equals the number of acceptors

@SparkQA

SparkQA commented Mar 11, 2016

Test build #52872 has finished for PR 11615 at commit 8422f6e.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@zsxwing
Member Author

zsxwing commented Mar 11, 2016

retest this please

@SparkQA

SparkQA commented Mar 11, 2016

Test build #52908 has finished for PR 11615 at commit 8422f6e.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@zsxwing
Member Author

zsxwing commented Mar 11, 2016

retest this please

@zsxwing zsxwing changed the title from "[SPARK-13776][WebUI]Add spark.ui.threads to set the max number of the UI threads" to "[SPARK-13776][WebUI]Limit the max number of acceptors and selectors for Jetty" Mar 11, 2016
// As each acceptor and each selector will use one thread, the number of threads should at
// least be the number of acceptors and selectors plus 1. (See SPARK-13776)
var minThreads = 1
connectors.foreach { c =>
Contributor

Use collect or "cast" using pattern matching on type.

Member Author

I want to crash the app if we add a non-SelectChannelConnector in the future. That's better than failing silently.
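To make the trade-off concrete, here is a small sketch (using the Jetty 8 classes from the diff, not code from the patch) contrasting the two styles: collect silently skips connectors of other types, while asInstanceOf fails fast with a ClassCastException.

import org.eclipse.jetty.server.Connector
import org.eclipse.jetty.server.nio.SelectChannelConnector

// Lenient: connectors that are not SelectChannelConnector are silently skipped,
// so a future mismatching connector would be ignored without any warning.
def totalAcceptorsLenient(connectors: Seq[Connector]): Int =
  connectors.collect { case c: SelectChannelConnector => c.getAcceptors }.sum

// Strict: a non-SelectChannelConnector throws a ClassCastException immediately,
// which is the loud failure mode preferred in the comment above.
def totalAcceptorsStrict(connectors: Seq[Connector]): Int =
  connectors.map(_.asInstanceOf[SelectChannelConnector].getAcceptors).sum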

@SparkQA

SparkQA commented Mar 11, 2016

Test build #52931 has finished for PR 11615 at commit 8422f6e.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Mar 11, 2016

Test build #52949 has finished for PR 11615 at commit e772aed.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@zsxwing
Member Author

zsxwing commented Mar 15, 2016

@srowen what do you think now?

@@ -134,7 +134,7 @@ private[spark] abstract class WebUI(
def bind() {
assert(!serverInfo.isDefined, "Attempted to bind %s more than once!".format(className))
try {
- var host = Option(conf.getenv("SPARK_LOCAL_IP")).getOrElse("0.0.0.0")
+ val host = Option(conf.getenv("SPARK_LOCAL_IP")).getOrElse("0.0.0.0")
Member

OK, but was this change intentional? The rest seems OK.

Member Author

Oh, never mind. I will remove it.

@SparkQA

SparkQA commented Mar 15, 2016

Test build #53221 has finished for PR 11615 at commit 068887b.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@zsxwing
Member Author

zsxwing commented Mar 15, 2016

retest this please

@SparkQA

SparkQA commented Mar 15, 2016

Test build #53232 has finished for PR 11615 at commit 068887b.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@zsxwing
Member Author

zsxwing commented Mar 16, 2016

retest this please

@SparkQA

SparkQA commented Mar 16, 2016

Test build #53249 has finished for PR 11615 at commit 068887b.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@srowen
Member

srowen commented Mar 17, 2016

LGTM, and the original reporter confirmed it works. I'm merging to master. I'm open to an argument that this should go into 1.6, though the scope of the problem is quite limited, so it maybe isn't worth the subtle behavior change.

@asfgit asfgit closed this in 65b75e6 Mar 17, 2016
@zsxwing zsxwing deleted the SPARK-13776 branch March 17, 2016 17:03
@zsxwing
Member Author

zsxwing commented Mar 17, 2016

Agree that this doesn't need to go into 1.6.

roygao94 pushed a commit to roygao94/spark that referenced this pull request Mar 22, 2016
…for Jetty

## What changes were proposed in this pull request?

As each acceptor/selector in Jetty will use one thread, the number of threads should at least be the number of acceptors and selectors plus 1. Otherwise, the thread pool of the Jetty server may be exhausted by acceptors/selectors and not be able to respond to any request.

To avoid wasting threads, the PR limits the max number of acceptors and selectors and also updates the max thread number if necessary.

## How was this patch tested?

Just made sure we don't break any existing tests.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes apache#11615 from zsxwing/SPARK-13776.