[SPARK-53961][SQL][TESTS] Fix FileStreamSinkSuite flakiness by using walkFileTree instead of walk
#52671
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Closed
Conversation
dongjoon-hyun commented on Oct 20, 2025:
```scala
override def visitFileFailed(file: Path, exc: IOException): FileVisitResult = {
  exc match {
    case _: NoSuchFileException =>
      FileVisitResult.CONTINUE
```
It's okay to ignore the deleted files because those are supposed to be cleaned up in a success case.
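For context, a minimal sketch of how such a visitor can be plugged into `Files.walkFileTree` to collect output file names while tolerating files that are deleted mid-traversal. The object and method names (`WalkFileTreeSketch`, `collectFileNames`) and the rethrow branch are illustrative assumptions, not code taken verbatim from the patch.

```scala
import java.io.IOException
import java.nio.file.{FileVisitResult, Files, NoSuchFileException, Path, SimpleFileVisitor}
import java.nio.file.attribute.BasicFileAttributes
import scala.collection.mutable

object WalkFileTreeSketch {
  // Collect the names of all regular files under `dir`, skipping files that
  // disappear while the tree is being traversed (e.g. cleaned-up temp output).
  def collectFileNames(dir: Path): Set[String] = {
    val names = mutable.Set.empty[String]
    Files.walkFileTree(dir, new SimpleFileVisitor[Path] {
      override def visitFile(file: Path, attrs: BasicFileAttributes): FileVisitResult = {
        names += file.getFileName.toString
        FileVisitResult.CONTINUE
      }
      override def visitFileFailed(file: Path, exc: IOException): FileVisitResult = {
        exc match {
          // Deleted-in-flight files are expected during cleanup; keep walking.
          case _: NoSuchFileException => FileVisitResult.CONTINUE
          // Anything else is a real failure; surface it.
          case other => throw other
        }
      }
    })
    names.toSet
  }
}
```

Because failures are reported per file through `visitFileFailed`, a single vanished file no longer aborts the whole traversal, which is the property the flaky test needs.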
dongjoon-hyun commented:
The updated test passed.
HyukjinKwon approved these changes on Oct 21, 2025.
dongjoon-hyun commented:
Thank you, @HyukjinKwon. Merged to master/4.0/3.5.
dongjoon-hyun added a commit that referenced this pull request on Oct 21, 2025:
[SPARK-53961][SQL][TESTS] Fix `FileStreamSinkSuite` flakiness by using `walkFileTree` instead of `walk`

### What changes were proposed in this pull request?

This PR aims to fix `FileStreamSinkSuite` flakiness by using `walkFileTree` instead of `walk`.

### Why are the changes needed?

`Files.walk` is flaky like the following when the directory has a race condition. `walkFileTree` has more robust error handling.

https://github.com/apache/spark/blob/2bb73fbdeb19f0a972786d3ea33d3263bf84ab66/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala#L543-L547

```
[info] - cleanup complete but invalid output for aborted job *** FAILED *** (438 milliseconds)
[info] java.io.UncheckedIOException: java.nio.file.NoSuchFileException: ***/spark-4c7ad10b-5848-45d7-ba43-dae4020ad011/output #output/part-00007-e582f3e3-87e3-40fa-8269-7fac9b545775-c000.snappy.parquet
[info] at java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
[info] at java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
[info] at java.base/java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1855)
[info] at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:292)
[info] at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206)
[info] at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:169)
[info] at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:298)
[info] at java.base/java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
[info] at scala.collection.convert.JavaCollectionWrappers$JIteratorWrapper.hasNext(JavaCollectionWrappers.scala:46)
[info] at scala.collection.Iterator$$anon$6.hasNext(Iterator.scala:480)
[info] at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)
[info] at scala.collection.mutable.Growable.addAll(Growable.scala:61)
[info] at scala.collection.mutable.Growable.addAll$(Growable.scala:57)
[info] at scala.collection.immutable.SetBuilderImpl.addAll(Set.scala:405)
[info] at scala.collection.immutable.Set$.from(Set.scala:362)
[info] at scala.collection.IterableOnceOps.toSet(IterableOnce.scala:1469)
[info] at scala.collection.IterableOnceOps.toSet$(IterableOnce.scala:1469)
[info] at scala.collection.AbstractIterator.toSet(Iterator.scala:1306)
[info] at org.apache.spark.sql.streaming.FileStreamSinkSuite.$anonfun$new$52(FileStreamSinkSuite.scala:537)
```

### Does this PR introduce _any_ user-facing change?

No, this is a test case change.

### How was this patch tested?

Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #52671 from dongjoon-hyun/SPARK-53961.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(cherry picked from commit 8430dbf)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
dongjoon-hyun added a commit that referenced this pull request on Oct 21, 2025:
[SPARK-53961][SQL][TESTS] Fix `FileStreamSinkSuite` flakiness by using `walkFileTree` instead of `walk`

### What changes were proposed in this pull request?

This PR aims to fix `FileStreamSinkSuite` flakiness by using `walkFileTree` instead of `walk`.

### Why are the changes needed?

`Files.walk` is flaky like the following when the directory has a race condition. `walkFileTree` has more robust error handling.

https://github.com/apache/spark/blob/2bb73fbdeb19f0a972786d3ea33d3263bf84ab66/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala#L543-L547

```
[info] - cleanup complete but invalid output for aborted job *** FAILED *** (438 milliseconds)
[info] java.io.UncheckedIOException: java.nio.file.NoSuchFileException: ***/spark-4c7ad10b-5848-45d7-ba43-dae4020ad011/output #output/part-00007-e582f3e3-87e3-40fa-8269-7fac9b545775-c000.snappy.parquet
[info] at java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
[info] at java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
[info] at java.base/java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1855)
[info] at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:292)
[info] at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206)
[info] at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:169)
[info] at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:298)
[info] at java.base/java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
[info] at scala.collection.convert.JavaCollectionWrappers$JIteratorWrapper.hasNext(JavaCollectionWrappers.scala:46)
[info] at scala.collection.Iterator$$anon$6.hasNext(Iterator.scala:480)
[info] at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)
[info] at scala.collection.mutable.Growable.addAll(Growable.scala:61)
[info] at scala.collection.mutable.Growable.addAll$(Growable.scala:57)
[info] at scala.collection.immutable.SetBuilderImpl.addAll(Set.scala:405)
[info] at scala.collection.immutable.Set$.from(Set.scala:362)
[info] at scala.collection.IterableOnceOps.toSet(IterableOnce.scala:1469)
[info] at scala.collection.IterableOnceOps.toSet$(IterableOnce.scala:1469)
[info] at scala.collection.AbstractIterator.toSet(Iterator.scala:1306)
[info] at org.apache.spark.sql.streaming.FileStreamSinkSuite.$anonfun$new$52(FileStreamSinkSuite.scala:537)
```

### Does this PR introduce _any_ user-facing change?

No, this is a test case change.

### How was this patch tested?

Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #52671 from dongjoon-hyun/SPARK-53961.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(cherry picked from commit 8430dbf)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
zifeif2 pushed a commit to zifeif2/spark that referenced this pull request on Nov 14, 2025:
[SPARK-53961][SQL][TESTS] Fix `FileStreamSinkSuite` flakiness by using `walkFileTree` instead of `walk`

### What changes were proposed in this pull request?

This PR aims to fix `FileStreamSinkSuite` flakiness by using `walkFileTree` instead of `walk`.

### Why are the changes needed?

`Files.walk` is flaky like the following when the directory has a race condition. `walkFileTree` has more robust error handling.

https://github.com/apache/spark/blob/5d8b4039b78b277fff709a0452423a16cefad20d/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala#L543-L547

```
[info] - cleanup complete but invalid output for aborted job *** FAILED *** (438 milliseconds)
[info] java.io.UncheckedIOException: java.nio.file.NoSuchFileException: ***/spark-4c7ad10b-5848-45d7-ba43-dae4020ad011/output #output/part-00007-e582f3e3-87e3-40fa-8269-7fac9b545775-c000.snappy.parquet
[info] at java.base/java.nio.file.FileTreeIterator.fetchNextIfNeeded(FileTreeIterator.java:87)
[info] at java.base/java.nio.file.FileTreeIterator.hasNext(FileTreeIterator.java:103)
[info] at java.base/java.util.Spliterators$IteratorSpliterator.tryAdvance(Spliterators.java:1855)
[info] at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.lambda$initPartialTraversalState$0(StreamSpliterators.java:292)
[info] at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.fillBuffer(StreamSpliterators.java:206)
[info] at java.base/java.util.stream.StreamSpliterators$AbstractWrappingSpliterator.doAdvance(StreamSpliterators.java:169)
[info] at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.tryAdvance(StreamSpliterators.java:298)
[info] at java.base/java.util.Spliterators$1Adapter.hasNext(Spliterators.java:681)
[info] at scala.collection.convert.JavaCollectionWrappers$JIteratorWrapper.hasNext(JavaCollectionWrappers.scala:46)
[info] at scala.collection.Iterator$$anon$6.hasNext(Iterator.scala:480)
[info] at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)
[info] at scala.collection.mutable.Growable.addAll(Growable.scala:61)
[info] at scala.collection.mutable.Growable.addAll$(Growable.scala:57)
[info] at scala.collection.immutable.SetBuilderImpl.addAll(Set.scala:405)
[info] at scala.collection.immutable.Set$.from(Set.scala:362)
[info] at scala.collection.IterableOnceOps.toSet(IterableOnce.scala:1469)
[info] at scala.collection.IterableOnceOps.toSet$(IterableOnce.scala:1469)
[info] at scala.collection.AbstractIterator.toSet(Iterator.scala:1306)
[info] at org.apache.spark.sql.streaming.FileStreamSinkSuite.$anonfun$new$52(FileStreamSinkSuite.scala:537)
```

### Does this PR introduce _any_ user-facing change?

No, this is a test case change.

### How was this patch tested?

Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#52671 from dongjoon-hyun/SPARK-53961.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(cherry picked from commit a26c3b7)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
What changes were proposed in this pull request?
This PR aims to fix `FileStreamSinkSuite` flakiness by using `walkFileTree` instead of `walk`.
Why are the changes needed?
`Files.walk` is flaky like the following when the directory has a race condition. `walkFileTree` has more robust error handling (a reconstructed sketch of the failing pattern follows below).
spark/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala
Lines 543 to 547 in 2bb73fb
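For comparison, the failure matches the kind of lazy `Files.walk` listing sketched below. This is a reconstruction from the stack trace, not the verbatim suite code, and `listOutputFileNames` is an illustrative name: `Files.walk` is backed by a lazy `FileTreeIterator`, so a file deleted between the directory listing and the attribute read only fails later, as an `UncheckedIOException` wrapping `NoSuchFileException`, while the result set is being built.

```scala
import java.nio.file.{Files, Path}
import scala.jdk.CollectionConverters._

object FilesWalkSketch {
  // Approximate pre-fix listing pattern (illustrative, not the exact suite code).
  // Files.walk is lazy: a concurrently deleted file fails only when the iterator
  // reaches it, surfacing as UncheckedIOException(NoSuchFileException) mid-toSet.
  def listOutputFileNames(outputDir: Path): Set[String] =
    Files.walk(outputDir)
      .iterator()
      .asScala
      .map(_.getFileName.toString)
      .toSet
}
```

Switching the listing to `Files.walkFileTree` with a visitor that continues on `NoSuchFileException` removes this race, because per-file errors can be handled in `visitFileFailed` instead of escaping from the stream's iterator.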
Does this PR introduce any user-facing change?
No, this is a test case change.
How was this patch tested?
Pass the CIs.
Was this patch authored or co-authored using generative AI tooling?
No.