[SPARK-47488][K8S] Fix driver pod stuck when driver runs on K8s #45667

Open · wants to merge 3 commits into base: master
Changes from 1 commit
18 changes: 17 additions & 1 deletion core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
@@ -45,6 +45,7 @@ import org.apache.spark.internal.config.UI._
 import org.apache.spark.launcher.SparkLauncher
 import org.apache.spark.util._
 import org.apache.spark.util.ArrayImplicits._
+import org.apache.spark.util.SparkExitCode.EXIT_FAILURE
 
 /**
  * Whether to submit, kill, or request the status of an application.
@@ -983,11 +984,19 @@ private[spark] class SparkSubmit extends Logging {
         e
     }
 
+    var DriverPodIsNormal: Boolean = if (args.master.startsWith("k8s")) true else false
+    var driverThrow: Throwable = null
     try {
       app.start(childArgs.toArray, sparkConf)
     } catch {
       case t: Throwable =>
-        throw findCause(t)
+        logWarning("Some ERR/Exception happened when app is running.")
+        if (args.master.startsWith("k8s")) {
Member: Did you want to use DriverPodIsNormal here?

Author: Yes, using DriverPodIsNormal here would be better.

Member: Then, please use it.
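
For illustration, a minimal self-contained sketch of the flow the reviewer is suggesting (the object name, the literal onK8s flag, and the RuntimeException are illustrative stand-ins for the PR's DriverPodIsNormal bookkeeping around app.start; this is a sketch of the suggestion, not the committed code):

    // Sketch: consult the flag captured at startup instead of re-testing the master URL.
    object DriverExitFlowSketch {
      def findCause(t: Throwable): Throwable =
        if (t.getCause == null) t else findCause(t.getCause)

      def main(args: Array[String]): Unit = {
        val onK8s = true                      // stands in for args.master.startsWith("k8s")
        var driverPodIsNormal = onK8s         // the PR's DriverPodIsNormal
        var driverThrow: Throwable = null

        try {
          throw new RuntimeException("boom")  // stands in for a failing app.start(...)
        } catch {
          case t: Throwable =>
            if (driverPodIsNormal) {          // the reviewer's point: reuse the flag here
              driverPodIsNormal = false
              driverThrow = t
            } else {
              throw findCause(t)
            }
        } finally {
          if (!driverPodIsNormal) {
            Console.err.println(s"Driver Pod will exit because: $driverThrow")
            sys.exit(1)                       // 1 matches SparkExitCode.EXIT_FAILURE
          }
        }
      }
    }

Run as-is, this prints the message and exits with code 1, which is what lets Kubernetes mark the driver pod's container as failed rather than leaving it running.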

+          DriverPodIsNormal = false
+          driverThrow = t
+        } else {
+          throw findCause(t)
+        }
     } finally {
       if (args.master.startsWith("k8s") && !isShell(args.primaryResource) &&
           !isSqlShell(args.mainClass) && !isThriftServer(args.mainClass) &&
@@ -996,6 +1005,13 @@
           SparkContext.getActive.foreach(_.stop())
         } catch {
           case e: Throwable => logError(s"Failed to close SparkContext: $e")
+        } finally {
+          if (SparkContext.getActive.isEmpty) {
+            if (!DriverPodIsNormal) {
+              logError(s"Driver Pod will exit because: $driverThrow")
+              System.exit(EXIT_FAILURE)
Member: Could you provide the YARN code link for this case? Specifically:

  1. Does the YARN job fail with EXIT_FAILURE?
  2. If this PR is for consistency between YARN and K8s, we should have the same exit code and the same error message.

Author: Here is the YARN exit code:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
It seems the ApplicationMaster adopts its own failure exit codes, which are not suitable for the driver pod.
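
For context, a compact sketch of the two exit-code families being compared (the YARN ApplicationMaster values are paraphrased from memory and may differ across Spark versions; verify against ApplicationMaster.scala and SparkExitCode.scala):

    object ExitCodeComparisonSketch {
      // ApplicationMaster-specific codes (resource-managers/yarn/.../ApplicationMaster.scala):
      // the AM maps failures such as an uncaught user-class exception to its own codes.
      val AM_EXIT_SUCCESS = 0
      val AM_EXIT_UNCAUGHT_EXCEPTION = 10   // assumed value; check the source

      // Generic code this PR uses for the K8s driver pod (org.apache.spark.util.SparkExitCode):
      val EXIT_FAILURE = 1

      def main(args: Array[String]): Unit =
        println(s"YARN AM uncaught-exception code: $AM_EXIT_UNCAUGHT_EXCEPTION, K8s driver pod exit: $EXIT_FAILURE")
    }

The mismatch between these two families is what the questions above are probing.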

Member: What is the YARN exit code for your case? According to your PR description it's not clear, @littlelittlewhite09; it should be described in the PR description.

Author: When a Spark app runs in yarn-cluster mode everything is fine: the driver can terminate normally if it encounters an exception or error.

+            }
+          }
         }
       }
     }