[SUPPORT] Performance degradation after migrating from Hudi 0.7 to Hudi 0.14 #11274

Closed
bibhu107 opened this issue May 23, 2024 · 10 comments
@bibhu107

Hi Team,

I am upgrading my Spark EMR jobs FROM [Spark 2.4.8, EMR 5.36.1, Hudi 0.7] TO [Spark 3.3.1, EMR 6.10.1, Hudi 0.14]. The upgrade has led to roughly a 230% performance degradation: jobs that previously ran in about 18 minutes now take over an hour to complete. I'm sharing screenshots below for reference. There have been no code changes apart from upgrading the dependencies, and I am writing to the Hudi table in 0.14 the same way as I did in 0.7. For this upgrade, I created a new copy-on-write table and am using the Simple Index approach.

Could you please help me debug this issue or suggest any additional configurations that might be required to improve performance?

(Screenshots: stage timelines on Spark 3 / Hudi 0.14 and on Spark 2 / Hudi 0.7.)
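For context, a minimal sketch of the kind of write involved (not the actual job; the table name, key fields, and paths are hypothetical placeholders):

```scala
// Minimal sketch, not the actual job: Hudi COW upsert with the SIMPLE index.
// Table name, key fields, and S3 paths below are hypothetical placeholders.
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}

val spark = SparkSession.builder()
  .appName("hudi-upsert-sketch") // hypothetical app name
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .getOrCreate()

// Placeholder source; the real job reads its own input.
val df: DataFrame = spark.read.parquet("s3://my-bucket/input/")

df.write.format("hudi")
  .option("hoodie.table.name", "my_table")                 // hypothetical
  .option("hoodie.datasource.write.table.type", "COPY_ON_WRITE")
  .option("hoodie.datasource.write.operation", "upsert")
  .option("hoodie.datasource.write.recordkey.field", "id") // hypothetical
  .option("hoodie.datasource.write.precombine.field", "ts") // hypothetical
  .option("hoodie.index.type", "SIMPLE")
  .mode(SaveMode.Append)
  .save("s3://my-bucket/hudi/my_table")                    // hypothetical
```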

@KnightChess
Contributor

hoodie.simple.index.parallelism can be modified to adjust the parallelism of stage 121, but it may cause the parallelism of stage 120 to decrease. You can give it a try.
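A hedged sketch of applying that option on the writer (the value 200 is illustrative, and the writer options are assumed from the job above):

```scala
// Sketch only: adjusting the SIMPLE index lookup parallelism.
// 200 is an illustrative value; tune it against your data volume.
df.write.format("hudi")
  .option("hoodie.index.type", "SIMPLE")
  .option("hoodie.simple.index.parallelism", "200")
  // ...remaining table options as in the original job...
  .mode(SaveMode.Append)
  .save("s3://my-bucket/hudi/my_table") // hypothetical path
```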

@bibhu107
Author

Hi @KnightChess, thanks for commenting. My main question is: why has the shuffle write nearly doubled in Hudi 0.14? That is what is causing the problems in stage 121.

@KnightChess
Contributor

@bibhu107 If the downstream parallelism is too high, the shuffle data will grow. And the major cost is that such high parallelism produces too many tasks, so Spark has to schedule far more of them.

@KnightChess
Contributor

@bibhu107 As for why the shuffle data grows, I haven't looked at the code in detail; the following is just my guess. With that many reducers, the shuffle data may need more metadata. On the other hand, between 0.7 and 0.14 the attributes of the Java objects being shuffled may have changed, which could also account for the difference. But I think parallelism is the major problem.

@KnightChess
Contributor

@bibhu107 Did that work for you? I misread the stack trace: that parameter does not take effect in the stage in question, so a different one needs to be set. If you are doing inserts, try setting hoodie.insert.shuffle.parallelism; for upserts, set hoodie.upsert.shuffle.parallelism.
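A sketch under the same assumptions as the earlier snippet (the value 400 is illustrative):

```scala
// Sketch only: pinning the upsert shuffle parallelism (400 is illustrative).
df.write.format("hudi")
  .option("hoodie.datasource.write.operation", "upsert")
  .option("hoodie.upsert.shuffle.parallelism", "400")
  // for insert workloads, set hoodie.insert.shuffle.parallelism instead
  // ...remaining table options as in the original job...
  .mode(SaveMode.Append)
  .save("s3://my-bucket/hudi/my_table") // hypothetical path
```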


@bibhu107
Author

Hello @KnightChess, thank you for your suggestions.

Initially, the Adaptive Query Execution (AQE) feature was effectively disabled for the jobs because we were explicitly setting spark.sql.shuffle.partitions. Later, we enabled it using the following configuration:

spark.sql.adaptive.coalescePartitions.enabled=true
spark.sql.adaptive.skewJoin.enabled=true

Additionally, we removed the spark.sql.shuffle.partitions configuration. This change resulted in better job performance. However, we have not yet conducted any load/pressure testing.
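As a sketch, these confs could equally be set on the SparkSession builder (app name hypothetical; passing them via spark-submit --conf works the same way):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: the settings above expressed as SparkSession confs.
// spark.sql.adaptive.enabled already defaults to true on Spark 3.3;
// it is shown explicitly here for clarity.
val spark = SparkSession.builder()
  .appName("hudi-job-with-aqe") // hypothetical app name
  .config("spark.sql.adaptive.enabled", "true")
  .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
  .config("spark.sql.adaptive.skewJoin.enabled", "true")
  // spark.sql.shuffle.partitions deliberately left unset so AQE can coalesce
  .getOrCreate()
```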

We will share the results once we perform load testing. For now, we have moved back to Hudi 0.8 while keeping Spark 3.3.1.

Thank you for raising the PR.

@bibhu107
Author

I have one question: why do we need this PR? I expected Hudi to pick up the deduced parallelism automatically from 0.13 onwards.

As mentioned in the documentation for hoodie.upsert.shuffle.parallelism, it states:

From version 0.13.0 onwards, Hudi by default automatically uses the parallelism deduced by Spark based on the source data.

@KnightChess
Contributor

@bibhu107 Hi, this PR aims to improve the deduced parallelism and make it more user friendly. On the Hudi side, AQE cannot take effect because Hudi uses RDDs directly.

@bibhu107
Author

I am closing this issue. Thanks for the support, @KnightChess.
