Conversation

@CrazyJvm
Contributor

CrazyJvm commented Jul 2, 2014

According to the implementation:

```scala
  private val maxNumExecutorFailures = sparkConf.getInt("spark.yarn.max.executor.failures",
    sparkConf.getInt("spark.yarn.max.worker.failures", math.max(args.numExecutors * 2, 3)))
```

the default value should be numExecutors * 2, with a minimum of 3, and the same default applies to the deprecated config
`spark.yarn.max.worker.failures`
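
For illustration, here is a minimal runnable sketch (assuming Spark is on the classpath; `numExecutors` is a stand-in for `args.numExecutors`) of how the nested `getInt` calls resolve: the new key wins if set, then the deprecated key, then the computed default:

```scala
import org.apache.spark.SparkConf

object MaxFailuresDemo {
  def main(args: Array[String]): Unit = {
    val numExecutors = 2 // stand-in for args.numExecutors
    val conf = new SparkConf(loadDefaults = false)

    def maxNumExecutorFailures: Int =
      conf.getInt("spark.yarn.max.executor.failures",
        conf.getInt("spark.yarn.max.worker.failures", math.max(numExecutors * 2, 3)))

    // Neither key set: falls through to math.max(2 * 2, 3) = 4.
    println(maxNumExecutorFailures) // 4

    // Only the deprecated key set: it is used as the inner default.
    conf.set("spark.yarn.max.worker.failures", "7")
    println(maxNumExecutorFailures) // 7

    // New key set: it takes precedence over both.
    conf.set("spark.yarn.max.executor.failures", "10")
    println(maxNumExecutorFailures) // 10
  }
}
```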

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16292/

Contributor

We haven't been listing deprecated configs, as we don't want people to continue to use them.

It would be nice if we had somewhere people could go to see the list of deprecated configs and their new mappings, though. But that is a separate jira.
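
As an aside, a purely hypothetical sketch (not Spark's actual API) of the kind of deprecated-to-current mapping such a page could document:

```scala
// Hypothetical illustration only: Spark exposes no such object; this just
// shows the shape of a deprecated-config mapping table.
object DeprecatedConfigs {
  // deprecated key -> current key
  val mappings: Map[String, String] = Map(
    "spark.yarn.max.worker.failures" -> "spark.yarn.max.executor.failures"
  )

  /** Resolve a possibly deprecated key to its current name. */
  def currentName(key: String): String = mappings.getOrElse(key, key)
}
```

For example, `DeprecatedConfigs.currentName("spark.yarn.max.worker.failures")` yields `"spark.yarn.max.executor.failures"`.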

@tgravescs
Contributor

Please file a jira to match this change.

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@CrazyJvm CrazyJvm changed the title fix spark.yarn.max.executor.failures explanation SPARK-2400 : fix spark.yarn.max.executor.failures explanation Jul 8, 2014
@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/16395/

@asfgit asfgit closed this in b520b64 Jul 8, 2014
@tgravescs
Contributor

Thanks, looks good. +1

gzm55 pushed a commit to MediaV/spark that referenced this pull request Jul 18, 2014
According to the implementation:
```scala
  private val maxNumExecutorFailures = sparkConf.getInt("spark.yarn.max.executor.failures",
    sparkConf.getInt("spark.yarn.max.worker.failures", math.max(args.numExecutors * 2, 3)))
```
the default value should be numExecutors * 2, with a minimum of 3, and the same default applies to the deprecated config
`spark.yarn.max.worker.failures`

Author: CrazyJvm <crazyjvm@gmail.com>

Closes apache#1282 from CrazyJvm/yarn-doc and squashes the following commits:

1a5f25b [CrazyJvm] remove deprecated config
c438aec [CrazyJvm] fix style
86effa6 [CrazyJvm] change expression
211f130 [CrazyJvm] fix html tag
2900d23 [CrazyJvm] fix style
a4b2e27 [CrazyJvm] fix configuration spark.yarn.max.executor.failures
xiliu82 pushed a commit to xiliu82/spark that referenced this pull request Sep 4, 2014
wangyum pushed a commit that referenced this pull request May 26, 2023
…discovery is disabled (#1282)

* [CARMEL-6675] Support to enable decommission nodes when hive server2 service discovery is disabled
ashevchuk123 pushed a commit to mapr/spark that referenced this pull request Oct 27, 2025
…dled as par of mapr-spark-3.5.5.0.202503201540-1.noarch.rpm (apache#1282)