Fix NPE in WriteBufferManager.addRecord #296
Conversation
Codecov Report

```diff
@@            Coverage Diff            @@
##             master     #296   +/-   ##
=========================================
  Coverage     60.10%   60.11%
- Complexity     1413     1414      +1
=========================================
  Files           175      175
  Lines          9082     9084      +2
  Branches        872      873      +1
=========================================
+ Hits           5459     5461      +2
  Misses         3331     3331
  Partials        292      292
```
```java
if (value != null) {
  serializeStream.writeValue(value, ClassTag$.MODULE$.apply(value.getClass()));
} else {
  serializeStream.writeValue(null, null);
}
```
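The essence of the patch above is that `value.getClass()` must only be resolved when `value` is non-null. Below is a minimal, self-contained sketch of that guard. Note the assumptions: `writeValueNullSafe` and the plain `ObjectOutputStream` are illustrative stand-ins, not Uniffle's `WriteBufferManager` or Spark's actual `SerializationStream` API.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Sketch of the null-guarded write: resolve the runtime class only
// when the value is non-null; record a null value with a marker.
public class NullSafeValueWrite {

    static void writeValueNullSafe(ObjectOutputStream out, Object value) throws IOException {
        if (value != null) {
            // Safe: value is non-null, so value.getClass() cannot throw NPE.
            out.writeBoolean(true);
            out.writeObject(value);
        } else {
            // The unguarded path called value.getClass() here and threw NPE.
            out.writeBoolean(false);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            writeValueNullSafe(out, "record");
            writeValueNullSafe(out, null); // no longer throws
        }
        System.out.println(buf.size() > 0); // prints "true"
    }
}
```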
Is it consistent with spark logic?
LGTM. Nice catch.
Thanks for your contribution. 300th commit!
### What changes were proposed in this pull request?

Handle a null record value so the behavior stays consistent with Spark.

### Why are the changes needed?

Fix NPE:

```
22/11/03 03:36:08 ERROR Executor: Exception in task 4.0 in stage 1.0 (TID 5)
java.lang.NullPointerException
	at org.apache.spark.shuffle.writer.WriteBufferManager.addRecord(WriteBufferManager.java:116)
	at org.apache.spark.shuffle.writer.RssShuffleWriter.write(RssShuffleWriter.java:152)
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

UTs
…alue is null (#666)

### What changes were proposed in this pull request?

Keep consistent with vanilla Spark when the key or value is null.

### Why are the changes needed?

Fix: #665

PR #296 fixed the NPE for a null value, but in some production cases the key can also be null, so this corner case should likewise stay consistent with vanilla Spark. The corresponding tests have been attached to prevent this logic from regressing.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

1. Unit tests
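The follow-up extends the null guard to both sides of the record. As a hedged illustration of why a marker-based scheme keeps null keys and values round-trippable, here is a self-contained sketch; `writeNullable`/`readNullable` and the use of Java object streams are assumptions for the example, not the project's real serializer.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Round-trip check: both key and value may be null, mirroring the
// vanilla-Spark behavior the follow-up PR targets.
public class NullableKeyValueDemo {

    static void writeNullable(ObjectOutputStream out, Object o) throws IOException {
        out.writeBoolean(o != null); // presence flag precedes the payload
        if (o != null) {
            out.writeObject(o);
        }
    }

    static Object readNullable(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        return in.readBoolean() ? in.readObject() : null;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            writeNullable(out, null);     // null key
            writeNullable(out, "value");  // non-null value
        }
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            System.out.println(readNullable(in)); // prints "null"
            System.out.println(readNullable(in)); // prints "value"
        }
    }
}
```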