
[SPARK-10492][Streaming][Documentation] Update Streaming documentation about rate limiting and backpressure #8656

Closed · wants to merge 1 commit

Conversation

@tdas (Contributor) commented Sep 8, 2015

No description provided.

@SparkQA commented Sep 8, 2015

Test build #42149 has finished for PR 8656 at commit 986cdd6.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

asfgit pushed a commit that referenced this pull request Sep 8, 2015
…ion about rate limiting and backpressure

Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #8656 from tdas/SPARK-10492 and squashes the following commits:

986cdd6 [Tathagata Das] Added information on backpressure

(cherry picked from commit 52b24a6)
Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
asfgit closed this in 52b24a6 on Sep 8, 2015
Reviewed documentation excerpt:

See the [configuration parameters](configuration.html#spark-streaming)
`spark.streaming.receiver.maxRate` for receivers and `spark.streaming.kafka.maxRatePerPartition`
for Direct Kafka approach. In Spark 1.5, we have introduced a feature called *backpressure* that
eliminate the need to set this rate limit, as Spark Streaming automatically figures out the
Contributor review comment on the excerpt above:

nit: eliminates

@holdenk (Contributor) commented Sep 8, 2015

oh nvm its merged in already, ignore my minor comment.
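For context, the documentation excerpt under review names two static rate-limit properties and the new backpressure feature. A minimal sketch of how these might look in `spark-defaults.conf` follows; the first two property names come from the excerpt itself, `spark.streaming.backpressure.enabled` is the switch Spark 1.5 introduced to turn the feature on, and the numeric values are purely illustrative:

```
# Static cap on records per second per receiver (receiver-based sources)
spark.streaming.receiver.maxRate            1000

# Static cap on records per second per Kafka partition (Direct Kafka approach)
spark.streaming.kafka.maxRatePerPartition   500

# Spark 1.5+: let Spark Streaming adapt the ingestion rate automatically,
# removing the need to hand-tune the static limits above
spark.streaming.backpressure.enabled        true
```

When backpressure is enabled, the static `maxRate` settings generally still act as upper bounds on the dynamically chosen rate rather than being ignored.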

3 participants