[SPARK-10330] Add Scalastyle rule to require use of SparkHadoopUtil JobContext methods #8521
Conversation
Test build #41788 has finished for PR 8521 at commit
Jenkins, retest this please.
(Re-testing pre-emptively in case of flakiness)
Test build #41789 has finished for PR 8521 at commit
Test build #41792 has finished for PR 8521 at commit
@@ -858,7 +858,7 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
     // Use setInputPaths so that wholeTextFiles aligns with hadoopFile/textFile in taking
     // comma separated files as input. (see SPARK-7155)
     NewFileInputFormat.setInputPaths(job, path)
-    val updateConf = job.getConfiguration
+    val updateConf = SparkHadoopUtil.get.getConfigurationFromJobContext(job)
@JoshRosen I didn't change job.getConfiguration last time because those call sites use org.apache.hadoop.mapreduce.Job, whose getConfiguration is compatible. However, +1 for this change since it is good for maintenance.
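For context, the reason direct JobContext.getConfiguration calls are avoided is that the method's declaring type changed between Hadoop 1.x (a class) and 2.x (an interface), so code compiled against one is not binary-compatible with the other. A reflective accessor sidesteps this by resolving the method at runtime. Below is a minimal, hedged sketch of the reflection technique using a stand-in FakeJobContext class (not Hadoop's real JobContext, and not necessarily SparkHadoopUtil's exact implementation):

```scala
// Stand-in for a class whose getConfiguration signature differs across
// library versions; it lets the example run without Hadoop on the classpath.
class FakeJobContext {
  def getConfiguration: Map[String, String] =
    Map("fs.defaultFS" -> "hdfs://nn:8020")
}

object ReflectiveAccess {
  // Resolve and invoke getConfiguration by name at runtime, so the call
  // site does not bind to a specific declaring type at compile time.
  def getConfigurationFromJobContext(context: AnyRef): AnyRef = {
    val method = context.getClass.getMethod("getConfiguration")
    method.invoke(context)
  }

  def main(args: Array[String]): Unit = {
    val conf = getConfigurationFromJobContext(new FakeJobContext)
      .asInstanceOf[Map[String, String]]
    println(conf("fs.defaultFS"))
  }
}
```

The cost is a reflective call and the loss of compile-time type checking at that call site, which is why a style rule is needed to keep direct calls from creeping back in.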
Jenkins, retest this please.
Test build #42179 has finished for PR 8521 at commit
Since this still passes tests, can we merge this now?
Jenkins, retest this please.
Going to retest and will merge once tests pass again.
Test build #42296 has finished for PR 8521 at commit
LGTM
Jenkins, retest this please.
Going to merge this into master.
Test build #42377 has finished for PR 8521 at commit
This is a follow-up to #8499, which adds a Scalastyle rule to mandate the use of SparkHadoopUtil's JobContext accessor methods and fixes the existing violations.
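A rule of this kind can be expressed as a regex check in scalastyle-config.xml using Scalastyle's RegexChecker. The sketch below illustrates the shape of such a rule; the exact regex, customId, and message in the merged change may differ, and call sites that legitimately need the raw method would be wrapped in `// scalastyle:off` / `// scalastyle:on` markers:

```xml
<check customId="getconfiguration" level="error"
       class="org.scalastyle.file.RegexChecker" enabled="true">
  <parameters>
    <parameter name="regex">\.getConfiguration</parameter>
  </parameters>
  <customMessage>
    Use SparkHadoopUtil.get.getConfigurationFromJobContext(...) instead of
    JobContext.getConfiguration, which is not binary-compatible across
    Hadoop 1.x and 2.x. If the direct call is known to be safe, wrap it in
    // scalastyle:off and // scalastyle:on.
  </customMessage>
</check>
```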