Netty performance tracking #1161
Comments
Could you enable
jerqi added a commit to jerqi/incubator-uniffle that referenced this issue on Aug 21, 2023
This is only for HDFS.
Another problem: remote fetch from localfile via Netty is unstable; compared with gRPC, it takes much more time.
zuston pushed a commit that referenced this issue on Aug 22, 2023
…buffer len (#1162)

### What changes were proposed in this pull request?

If we use off-heap memory and then call `getData`, the off-heap memory is copied into heap memory. So we should avoid calling it in Netty mode.

### Why are the changes needed?

Fix: #1161

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Code review.
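The copy described in that PR can be illustrated with a minimal, self-contained sketch. This is plain `java.nio`, not actual Uniffle code; `copyToHeap` is a hypothetical stand-in for a `getData`-style accessor that materializes an off-heap buffer as a heap array:

```java
import java.nio.ByteBuffer;

public class OffHeapCopyDemo {
    // Hypothetical getData-style accessor: copying a direct (off-heap)
    // buffer into a fresh heap array allocates on the Java heap on every
    // call, which is the extra GC pressure the PR avoids in Netty mode.
    static byte[] copyToHeap(ByteBuffer direct) {
        byte[] heapCopy = new byte[direct.remaining()];
        direct.duplicate().get(heapCopy); // duplicate() leaves the caller's position intact
        return heapCopy;
    }

    public static void main(String[] args) {
        ByteBuffer offHeap = ByteBuffer.allocateDirect(8);
        for (int i = 0; i < 8; i++) {
            offHeap.put((byte) i);
        }
        offHeap.flip();

        byte[] copied = copyToHeap(offHeap); // extra heap allocation + copy
        System.out.println(copied.length);   // prints 8

        // Zero-copy alternative: read through the direct buffer itself.
        System.out.println(offHeap.get(0));  // prints 0
    }
}
```

Keeping reads on the direct buffer avoids the per-call heap allocation entirely, at the cost of working with the `ByteBuffer` API instead of a `byte[]`.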
Sub tasks
Benchmark
Tested with gRPC and Netty.
Environment
Software: Uniffle master / Hadoop 3.2.2 / Spark 3.1.2
Hardware: 96 cores, 512 GB memory, 4 × 1 TB SSD, 8 GB/s network bandwidth per machine
Hadoop YARN cluster: 1 ResourceManager + 40 NodeManagers, each machine with 4 × 1 TB SSD
Uniffle cluster: 1 Coordinator + 5 Shuffle Servers, each machine with 4 × 1 TB SSD
Configuration
Spark conf
Uniffle gRPC-based server conf
Uniffle Netty-based server conf
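For reference, switching between the two transports typically involves settings along these lines. The key names and values here are assumptions recalled from the Uniffle documentation, not taken from this issue; verify them against the docs for your Uniffle version before use:

```properties
# Spark client side: transport selection (assumed key/values; verify against the Uniffle docs)
# GRPC selects the gRPC-based path, GRPC_NETTY the Netty-based path
spark.rss.client.type=GRPC_NETTY

# Shuffle server side: port for the Netty server (assumed key;
# a non-positive value is commonly used to disable the Netty server)
rss.server.netty.port=19999
```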
Report
I also found that the Spark executor GC time with Netty-based Uniffle is higher than with the gRPC-based setup.
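The higher GC time is consistent with per-fetch heap allocation (as in the `getData` copy fixed by #1162). A minimal sketch in plain Java, not Uniffle code, contrasting the two allocation patterns:

```java
import java.nio.ByteBuffer;

public class GcPressureSketch {
    // Pattern that adds GC pressure: a fresh heap array per fetch;
    // each chunk becomes garbage as soon as the fetch is consumed.
    static long allocatePerRound(int rounds, int size) {
        long total = 0;
        for (int i = 0; i < rounds; i++) {
            byte[] chunk = new byte[size]; // new heap object every round
            total += chunk.length;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(allocatePerRound(1000, 64 * 1024)); // prints 65536000

        // Pattern that avoids it: one reusable off-heap buffer,
        // cleared between rounds instead of reallocated, so the
        // Java heap sees no per-round garbage at all.
        ByteBuffer reusable = ByteBuffer.allocateDirect(64 * 1024);
        for (int i = 0; i < 1000; i++) {
            reusable.clear();
        }
        System.out.println(reusable.capacity()); // prints 65536
    }
}
```

Under the first pattern, 1000 fetches of 64 KiB generate roughly 64 MB of short-lived heap garbage, which directly translates into the extra executor GC time observed above; the second pattern keeps that traffic off the Java heap.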