
[SPARK-17159][STREAM] Significant speed up for running spark streaming against Object store. #22339

Closed · wants to merge 4 commits

Conversation


@ScrapCodes ScrapCodes commented Sep 5, 2018

What changes were proposed in this pull request?

Original work by Steve Loughran.
Based on #17745.

This is a minimal patch of changes to FileInputDStream to reduce file status requests when querying files. Each file status call translates to 3+ HTTP requests against an object store; this patch eliminates the need for them by reusing the FileStatus objects returned from directory listings.

This is a minor optimisation when working with filesystems, but significant when working with object stores.
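The shape of the change can be sketched with a toy model. `MockFs`, its request counter, and all names below are hypothetical stand-ins for Hadoop's FileSystem API, not the actual patch:

```scala
// Toy model: a directory listing already returns FileStatus objects
// carrying the modification time, so filtering on those avoids one
// status probe (3+ HTTP requests on an object store) per path.
case class FileStatus(path: String, modTime: Long)

class MockFs(files: Seq[FileStatus]) {
  var requests = 0 // counts simulated round trips to the store

  def listStatus(dir: String): Seq[FileStatus] = {
    requests += 1
    files
  }

  def getFileStatus(path: String): FileStatus = {
    requests += 1 // in reality this is 3+ HTTP calls against an object store
    files.find(_.path == path).get
  }
}

object ListingDemo {
  def main(args: Array[String]): Unit = {
    val files = (1 to 50).map(i => FileStatus(s"/in/f$i", i.toLong))
    val threshold = 25L

    // Before: re-fetch the status of every listed path to filter it.
    val before = new MockFs(files)
    val oldResult = before.listStatus("/in").map(_.path)
      .filter(p => before.getFileStatus(p).modTime > threshold)

    // After: reuse the FileStatus objects the listing already returned.
    val after = new MockFs(files)
    val newResult = after.listStatus("/in")
      .filter(_.modTime > threshold).map(_.path)

    assert(oldResult == newResult)
    println(s"before: ${before.requests} requests, after: ${after.requests}")
    // prints: before: 51 requests, after: 1
  }
}
```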

How was this patch tested?

Tests included. Existing tests pass.


SparkQA commented Sep 5, 2018

Test build #95706 has finished for PR 22339 at commit 2fba9af.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@dongjoon-hyun (Member)

Hi, @ScrapCodes. Could you do the following?

  • Update the title to [SPARK-17159][SS]...
  • Remove "Please review http://spark.apache.org/contributing.html ..." from the PR description
  • Share the numbers, since the PR title claims a significant speed up

@ScrapCodes ScrapCodes changed the title SPARK-17159 Significant speed up for running spark streaming against Object store. [SPARK-17159][SS] Significant speed up for running spark streaming against Object store. Sep 6, 2018
@ScrapCodes ScrapCodes changed the title [SPARK-17159][SS] Significant speed up for running spark streaming against Object store. [SPARK-17159][STREAM] Significant speed up for running spark streaming against Object store. Sep 7, 2018
@dongjoon-hyun (Member)

Retest this please.


SparkQA commented Sep 13, 2018

Test build #96047 has finished for PR 22339 at commit 2fba9af.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@ScrapCodes (Member, Author)

For numbers: testing against an object store with 50 files/dirs, completing 2 batches took 130 REST requests without this patch and 56 with it. The reduction in REST calls translates directly to a speedup; how much depends on the number of files, but for this particular test it was about 2x.
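As a rough back-of-envelope model of why fewer REST calls means a speedup (the per-call figures below are illustrative assumptions, not measurements from the test above):

```scala
object ScanCostModel {
  // One LIST request per scan (paging ignored). Without the patch, each
  // listed entry additionally costs `callsPerStatus` HTTP requests for
  // its status probe. All constants here are illustrative.
  def requestsPerScan(entries: Int, callsPerStatus: Int, reuseListing: Boolean): Int =
    if (reuseListing) 1
    else 1 + entries * callsPerStatus

  def main(args: Array[String]): Unit = {
    val entries = 50
    println(s"without patch: ${requestsPerScan(entries, 2, reuseListing = false)}") // 101
    println(s"with patch:    ${requestsPerScan(entries, 2, reuseListing = true)}")  // 1
  }
}
```

At a few hundred milliseconds per object-store request, the request count dominates scan time, so the ratio of requests approximates the speedup.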

@ScrapCodes (Member, Author)

Hi @srowen, would you like to take a look? Is there anything I can do if this patch is missing something? I have tested it thoroughly against an object store.

@srowen srowen (Member) left a comment:

If @steveloughran is into it, I think this is OK. I see why it's faster.

      val newFiles = directories.flatMap(dir =>
    -   fs.listStatus(dir, newFileFilter).map(_.getPath.toString))
    +   fs.listStatus(dir)
    +     .filter(isNewFile(_, currentTime, modTimeIgnoreThreshold))

Nit: I think the indent is too deep here?

      val timeTaken = clock.getTimeMillis() - lastNewFileFindingTime
    - logInfo("Finding new files took " + timeTaken + " ms")
    - logDebug("# cached file times = " + fileToModTime.size)
    + logInfo(s"Finding new files took $timeTaken ms")

I wonder if this should be a debug statement. I don't feel strongly about it.

Contributor reply:

It was originally @ info, so if it filled up logs too much there'd be complaints. What's important is that the time to scan is printed, either @ info or @ debug, so someone can see what's happening. What probably does need logging @ warn is when the time to scan is greater than the window, or just getting close to it.
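That warn-on-slow-scan idea could look something like the following sketch (the 80% threshold and all names are illustrative, not part of the patch):

```scala
object ScanTimeLogging {
  // Pick a log level for a directory-scan duration relative to the
  // streaming batch window: warn once the scan exceeds, or gets close
  // to, the window interval, otherwise log at info.
  def levelFor(scanMillis: Long, windowMillis: Long): String =
    if (scanMillis >= windowMillis) "WARN"                 // scan slower than the window: batches fall behind
    else if (scanMillis * 10 >= windowMillis * 8) "WARN"   // within 80% of the window: getting close
    else "INFO"

  def main(args: Array[String]): Unit = {
    println(levelFor(100, 1000))  // INFO
    println(levelFor(850, 1000))  // WARN
    println(levelFor(1200, 1000)) // WARN
  }
}
```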

    -   override def accept(path: Path): Boolean = fs.getFileStatus(path).isDirectory
    - }
    - val directories = fs.globStatus(directoryPath, directoryFilter).map(_.getPath)
    + val directories = Option(fs.globStatus(directoryPath)).getOrElse(Array.empty[FileStatus])

I guess the .getOrElse could come at the end, but it hardly matters.


steveloughran commented Sep 28, 2018

Why the speedups? The glob filter was calling getFileStatus() on every entry, which is 1-3 HTTP requests and a few hundred millis per call, when instead that work can be deferred. As a result, the more files you have in a path, the longer the scan takes, until eventually the scan time exceeds the window interval, at which point your code is dead.

The other stuff is simply associated optimisations.

Now, I'm obviously happy with this, especially as I seem to be getting credit. And it will help speed up working with any store. But I need to warn people: it is not sufficient.

The key problem here: files uploaded by S3 multipart upload get a timestamp from when the upload began, not when it finished, yet they only become visible at the end of the upload. If a caller starts an upload in window t and doesn't complete it until window t+1, the file may get missed.

There's not much which can be done here, except in documenting the risk.
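The missed-file scenario can be illustrated with a toy timeline (all names and numbers below are hypothetical):

```scala
object MultipartUploadRisk {
  // S3 multipart uploads: the object's timestamp reflects when the upload
  // *began*, but the object only becomes visible once the upload *completes*.
  case class Upload(startedAt: Long, completedAt: Long)

  // A scan at `scanAt` selects files whose (start) timestamp falls in
  // (from, to]; the file must also be visible, i.e. completed, by scan time.
  def pickedUp(scanAt: Long, from: Long, to: Long, u: Upload): Boolean =
    u.completedAt <= scanAt && u.startedAt > from && u.startedAt <= to

  def main(args: Array[String]): Unit = {
    // Upload starts during window (0, 10] but completes in window (10, 20].
    val u = Upload(startedAt = 8, completedAt = 14)
    val first  = pickedUp(scanAt = 10, from = 0,  to = 10, u) // not yet visible
    val second = pickedUp(scanAt = 20, from = 10, to = 20, u) // visible, but timestamp too old
    assert(!first && !second)
    println("missed in both windows") // the file is never picked up
  }
}
```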

What would a good solution be? Using the cloud infrastructure providers' own event notification mechanisms and subscribing to changes in a store. AWS, Azure and GCS all offer something like this.

There's a home for the S3 one of those in spark-kinesis, perhaps. I've not got free time to work on it, I'm afraid, but if someone starts coding it, list me on the PR and I'll take a look.


srowen commented Sep 28, 2018

Yeah, I agree; I was saying I do think it will speed things up. If it's a non-trivial win, it's worthwhile even if it isn't the last optimization here. Is there any downside to this?

@steveloughran (Contributor)

No, no cost penalties. Slightly lower namenode load, too. If you had many, many Spark streaming clients scanning directories, HDFS ops teams would eventually get upset; this will postpone that day.

@srowen srowen (Member) left a comment:

@ScrapCodes looks good to me, except perhaps the tiny style comment above, and possibly the log statement question.

@dongjoon-hyun (Member)

Retest this please.

@SparkQA
Copy link

SparkQA commented Oct 2, 2018

Test build #96843 has finished for PR 22339 at commit 2fba9af.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA
Copy link

SparkQA commented Oct 3, 2018

Test build #96886 has finished for PR 22339 at commit dab9bf3.

  • This patch fails to build.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA
Copy link

SparkQA commented Oct 3, 2018

Test build #96885 has finished for PR 22339 at commit 542872c.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA
Copy link

SparkQA commented Oct 3, 2018

Test build #96887 has finished for PR 22339 at commit d91c815.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.


srowen commented Oct 5, 2018

Merged to master

@asfgit asfgit closed this in 3ae4f07 Oct 5, 2018
@ScrapCodes ScrapCodes deleted the PR_17745 branch October 5, 2018 05:49
@ScrapCodes (Member, Author)

Thank you @srowen and @steveloughran.

jackylee-ch pushed a commit to jackylee-ch/spark that referenced this pull request Feb 18, 2019
Closes apache#22339 from ScrapCodes/PR_17745.

Lead-authored-by: Prashant Sharma <prashant@apache.org>
Co-authored-by: Steve Loughran <stevel@hortonworks.com>
Signed-off-by: Sean Owen <sean.owen@databricks.com>
zzcclp added a commit to zzcclp/spark that referenced this pull request Sep 20, 2019
LuciferYang pushed a commit that referenced this pull request Nov 2, 2023
### What changes were proposed in this pull request?
This PR aims to delete `TimeStampedHashMap` and its unit test.

### Why are the changes needed?
During PR #43578, we found that the class `TimeStampedHashMap` is no longer in use; based on the suggestion there (#43578 (comment)), we have removed it.

- The class `TimeStampedHashMap` was first introduced in b18d708#diff-77b12178a7036c71135074c6ddf7d659e5a69906264d5e3061087e4352e304ed.
- After #22339, `TimeStampedHashMap` was only used in the UT `TimeStampedHashMapSuite`, so since Spark 3.0 this data structure has not been used by any Spark production code.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass GA.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #43633 from panbingkun/remove_TimeStampedHashMap.

Authored-by: panbingkun <pbk1982@gmail.com>
Signed-off-by: yangjie01 <yangjie01@baidu.com>