[SUPPORT] Performance degradation when migrating from Hudi 0.7 to Hudi 0.14 #11274
Hi @KnightChess, thanks for commenting. But my main doubt is why shuffle write has nearly doubled in Hudi 0.14, and why that is leading to issues in step 121.
@bibhu107 if the downstream parallelism is too high, the shuffle data will grow. And the major reason is that there are too many tasks: with such high parallelism, Spark needs to schedule too many tasks.
@bibhu107 as for why the shuffle data grows, I haven't looked at the code in detail; the following is just my guess. You have too many reducers, so the shuffle data may need more metadata. On the other hand, between 0.7 and 0.14 the shuffle's Java object attributes may have changed, which can also cause a difference. But I think parallelism is the major problem.
@bibhu107 does it work for you? I misread the stack trace, so this parameter hasn't taken effect in the stage in question. The parameter that needs to be set is a different one. If you are
Hello @KnightChess, thank you for your suggestions. Initially, the Adaptive Query Execution (AQE) feature was disabled for the jobs because we were explicitly setting
Additionally, we removed the
We will share the results once we perform load testing. For now, we have moved back to Hudi 0.8 and are using Spark 3.3.1. Thank you for raising the PR.
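For readers hitting the same problem: the AQE behavior discussed above is governed by standard Spark 3.x settings. A minimal sketch follows; the property keys are real Spark configuration options, but the numeric values are purely illustrative, not recommendations from this thread.

```python
# Sketch of Spark 3.3 settings that enable AQE and shuffle-partition
# coalescing. Values below are illustrative placeholders.
aqe_conf = {
    # AQE is on by default in Spark 3.2+, shown here for explicitness.
    "spark.sql.adaptive.enabled": "true",
    # Let AQE coalesce small shuffle partitions after each stage.
    "spark.sql.adaptive.coalescePartitions.enabled": "true",
    # Starting partition count before coalescing (illustrative value).
    "spark.sql.adaptive.coalescePartitions.initialPartitionNum": "2000",
    # Target size of each coalesced partition (illustrative value).
    "spark.sql.adaptive.advisoryPartitionSizeInBytes": "128m",
}

def to_spark_submit_args(conf):
    """Render a config dict as spark-submit --conf arguments."""
    return [arg for k, v in sorted(conf.items())
            for arg in ("--conf", f"{k}={v}")]

print(" ".join(to_spark_submit_args(aqe_conf)))
```

Note that explicitly setting a fixed `spark.sql.shuffle.partitions` can defeat AQE's coalescing, which matches the situation the commenter describes.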
I have one question: why do we need this PR? I expected Hudi to automatically pick up the deduced parallelism from Hudi 0.13 onwards, as mentioned in the documentation for hoodie.upsert.shuffle.parallelism:
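Until the deduced parallelism works as expected, one workaround is to pin the shuffle parallelism explicitly. The option keys below are real Hudi write configurations; the value 200 is an illustrative placeholder, not a recommendation from this thread.

```python
# Sketch: pinning Hudi shuffle parallelism explicitly instead of
# relying on the deduced value. The value 200 is illustrative.
hudi_parallelism_opts = {
    "hoodie.upsert.shuffle.parallelism": "200",
    "hoodie.insert.shuffle.parallelism": "200",
    "hoodie.delete.shuffle.parallelism": "200",
}

# In a real job these would be passed as DataFrame writer options, e.g.
#   df.write.format("hudi").options(**hudi_parallelism_opts)...
assert all(v.isdigit() for v in hudi_parallelism_opts.values())
```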
@bibhu107 hi, this PR targets improving the deduced parallelism and making it more user-friendly. On the Hudi side, AQE cannot take effect because Hudi uses RDDs directly.
I am closing this issue. Thanks for the support @KnightChess
Hi Team,
I am upgrading my Spark EMR jobs FROM [Spark 2.4.8, EMR-5.36.1, Hudi 0.7] TO [Spark 3.3.1, EMR 6.10.1, and Hudi 0.14]. This upgrade is leading to a 230% performance degradation. Previously, the jobs were running in 18 minutes, but now they are taking over an hour to complete. I'm sharing screenshots below for reference. There have been no code changes apart from upgrading the dependencies. I am writing to the Hudi table in version 0.14 the same way as I did in version 0.7. For this upgrade, I have created a new copy-on-write table and am using the Simple Indexing approach.
Could you please help me debug this issue or suggest any additional configurations that might be required to improve performance?
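For context, a minimal PySpark-style sketch of the write path described above (copy-on-write table, upsert, SIMPLE index) looks like this. The option keys are real Hudi configurations; the table name, key fields, and path are hypothetical placeholders, not taken from the actual job.

```python
# Hypothetical Hudi 0.14 write options matching the setup described
# above: copy-on-write table with the SIMPLE index. Table, key, and
# path names are placeholders.
hudi_options = {
    "hoodie.table.name": "my_table",                       # hypothetical
    "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
    "hoodie.datasource.write.operation": "upsert",
    "hoodie.datasource.write.recordkey.field": "id",       # hypothetical
    "hoodie.datasource.write.precombine.field": "ts",      # hypothetical
    "hoodie.index.type": "SIMPLE",
}

# The actual write would then be:
#   df.write.format("hudi").options(**hudi_options) \
#       .mode("append").save("s3://bucket/path")            # hypothetical
```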