[SPARK-24794][CORE] Driver launched through rest should use all masters
## What changes were proposed in this pull request?

In standalone cluster mode, a driver can be launched with supervise mode enabled. When launching the driver, the `StandaloneRestServer` class sets the `spark.master` property to the host and port of the current master, even when running in HA mode, and ignores the `spark.master` property passed as part of the request. As a result, if the masters switch for some reason and the driver is killed unexpectedly and relaunched, it tries to connect to the master recorded in the driver command as `-Dspark.master`. That master is now in STANDBY mode, and after several failed attempts the SparkContext kills itself, even though the secondary master is alive and healthy.

This change picks the `spark.master` property from the request and uses it to launch the driver process, so the driver's `-Dspark.master` property contains both masters. Even if the masters switch, the SparkContext can still connect to the ALIVE master and work correctly.

## How was this patch tested?

This patch was manually tested on a standalone cluster running 2.2.1. It was rebased on the current master branch and all tests were executed. A unit test was also added for this change.

Closes #21816 from bsikander/rest_driver_fix.

Authored-by: Behroz Sikander <behroz.sikander@sap.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
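The fix can be sketched as follows. This is a hedged illustration, not the actual `StandaloneRestServer` code; the method name `masterUrlForDriver` and the example host names are hypothetical, but the idea matches the change described above: prefer the client-supplied `spark.master` (which in an HA setup lists every master) over the URL of the single master that received the REST request.

```scala
// Hypothetical helper illustrating the fix (not Spark's real API):
// choose the master URL to embed in the driver command.
def masterUrlForDriver(
    receivingMasterUrl: String,                 // e.g. "spark://m1:7077"
    requestSparkProperties: Map[String, String] // properties from the REST request
): String = {
  // Prefer the client-supplied value, e.g. "spark://m1:7077,m2:7077",
  // which lists all HA masters; fall back to the receiving master's
  // own URL only if the client sent none.
  requestSparkProperties.getOrElse("spark.master", receivingMasterUrl)
}
```

With this, a relaunched supervised driver is started with `-Dspark.master=spark://m1:7077,m2:7077` (hypothetical hosts), so after a failover it can still reach whichever master is ALIVE instead of retrying only the STANDBY one.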