[SPARK-33288][YARN][FOLLOW-UP][test-hadoop2.7] Fix type mismatch error #30375
Conversation
@@ -317,7 +317,7 @@ private[yarn] class YarnAllocator(
        customSparkResources
      }
    val resource =
-     Resource.newInstance(resourcesWithDefaults.totalMemMiB, resourcesWithDefaults.cores)
+     Resource.newInstance(resourcesWithDefaults.totalMemMiB.toInt, resourcesWithDefaults.cores)
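For context on the one-line change: Hadoop 2.7's `Resource.newInstance` accepts only an `int` memory argument, while Hadoop 3 added a `long` overload, which appears to be why the default profile compiled but the `hadoop-2.7` profile did not. A minimal Scala sketch of the narrowing (the object and method names are illustrative, not from the PR):

```scala
// Illustrative sketch (not Spark's actual code): why `.toInt` is needed.
// Against Hadoop 2.7, Resource.newInstance(int memory, int vCores) takes an
// int, while the computed total memory is a Long, so it must be narrowed.
object MemNarrowing {
  def narrowMemMiB(totalMemMiB: Long): Int = {
    // Fail fast instead of silently overflowing on the Long -> Int narrowing.
    require(totalMemMiB >= 0 && totalMemMiB <= Int.MaxValue,
      s"total memory $totalMemMiB MiB does not fit in an Int")
    totalMemMiB.toInt
  }
}
```

The overflow guard is not in the PR itself; it is included here only to show that a bare `.toInt` silently wraps for values above `Int.MaxValue`.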
Test build #131090 has finished for PR 30375 at commit
cc @tgravescs
Kubernetes integration test starting
Kubernetes integration test status failure
retest this please.
Test build #131093 has finished for PR 30375 at commit
Kubernetes integration test starting
Given the value is in MBs, perhaps we should maintain it as
Kubernetes integration test status success
Thank you, @wangyum.
Yes. The Hadoop 2.7 profile easily goes stale these days. Given that, hopefully Apache Spark 3.1 can offer enough benefits to help migration to Hadoop 3.2+, so we can eventually remove the Hadoop 2.7 maintenance burden.
BTW, @wangyum, @mridulm, and @tgravescs: do you think it's possible for us to start a discussion about dropping Hadoop 2.7 in Apache Spark 3.2?
Is the proposal to drop 2.7 and move to 2.10? Or to drop 2.x entirely and move to Hadoop 3.x? Hadoop 2.7.4, which we use, was released 3 years back, while 2.10 was released 1 year back. Assuming we are still supporting 2.10, will the issues we are facing get addressed?
My initial question was about dropping Hadoop 2.7. If you think so, I will focus on supporting Hadoop 2.x LTS officially in Apache Spark 3, too.
I made a PR to protect the Hadoop 2 profile. Could you review it, @mridulm?
+1 for supporting Hadoop 2.x LTS officially in Apache Spark 3.
Merged to master. |
Thank you, @wangyum.
### What changes were proposed in this pull request?
This PR aims to protect `Hadoop 2.x` profile compilation in Apache Spark 3.1+.

### Why are the changes needed?
Since Apache Spark 3.1+ switched its default profile to Hadoop 3, we had better prevent at least compilation errors with the `Hadoop 2.x` profile at the PR review phase. Although this is an additional workload, it will finish quickly because it's compilation only.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass the GitHub Action.

- This should be merged after #30375.

Closes #30378 from dongjoon-hyun/SPARK-33454.
Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
Thanks for fixing. I don't think we should drop Hadoop 2.x; too many people are still using it.
@tgravescs any thoughts on this comment?
Sorry, I missed your comment @mridulm. Yeah, we can just use `Int` since these are all MiB. I'll file a follow-up and put up a PR.
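A hedged sketch of what such a follow-up might look like (the class and field names below are hypothetical, not from the actual follow-up PR): keep MiB quantities as `Int` from the start, so call sites need no `.toInt` narrowing at all.

```scala
// Hypothetical sketch, not Spark's actual code: hold MiB totals as Int up
// front instead of narrowing a Long at each Resource.newInstance call site.
final case class ContainerResources(memoryMiB: Int, overheadMiB: Int, cores: Int) {
  // Sum in Long first so an overflow is caught rather than wrapping around.
  def totalMemMiB: Int = {
    val total = memoryMiB.toLong + overheadMiB.toLong
    require(total <= Int.MaxValue, s"total $total MiB overflows Int")
    total.toInt
  }
}
```

Since container memory is expressed in MiB, realistic totals fit comfortably in an `Int` (up to roughly 2 PiB), which is the rationale behind the "just use int" suggestion above.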
### What changes were proposed in this pull request?
This PR fixes a type mismatch error by narrowing `resourcesWithDefaults.totalMemMiB` to `Int` before passing it to `Resource.newInstance`.

### Why are the changes needed?
To fix a compile issue with the Hadoop 2.7 profile.

### Does this PR introduce any user-facing change?
No.

### How was this patch tested?
Existing tests.