This repository has been archived by the owner on Mar 30, 2021. It is now read-only.

How to scale it #49

Open
ankurmitujjain opened this issue May 29, 2015 · 1 comment
Comments

@ankurmitujjain

Hi Team,

We are evaluating this connector library. At present we are sending records into Kinesis at a rate of 3 MP/s and using this library to save the data to S3.
The connector works well when data ingestion is slow, but as we increase the rate, the connector keeps saving records at the same speed.
Is there any way to scale it out, or to enable auto scaling?

@darkcrawler01

Check these parameters:

  1. What is the provisioned Kinesis shard count?
  2. What is max_records per GetRecords call?
  3. Since records are processed serially, how long does the S3 emitter take to finish one batch?

Using these, it's possible to compute the max throughput of a shard, then provision as many shards as are required for your use case. HTH
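A rough sketch of that sizing arithmetic (all figures here are assumptions for illustration, not measurements from this issue: ingest is treated as 3 MB/s, and the per-shard emitter rate is a hypothetical value you would measure yourself):

```java
public class ShardSizing {
    // Documented Kinesis per-shard service limits
    static final double SHARD_WRITE_MBPS = 1.0; // 1 MB/s write per shard
    static final double SHARD_READ_MBPS = 2.0;  // 2 MB/s read per shard

    /**
     * Shards needed so that both the per-shard write limit and the
     * serial S3 emitter's drain rate keep up with the ingest rate.
     */
    static int requiredShards(double ingestMBps, double emitterMBpsPerShard) {
        // The emitter can never read faster than the shard read limit.
        double readCap = Math.min(SHARD_READ_MBPS, emitterMBpsPerShard);
        return (int) Math.ceil(Math.max(ingestMBps / SHARD_WRITE_MBPS,
                                        ingestMBps / readCap));
    }

    public static void main(String[] args) {
        // Hypothetical: 3 MB/s ingest, emitter drains 1.5 MB/s per shard.
        System.out.println(requiredShards(3.0, 1.5) + " shards needed");
    }
}
```

With these assumed numbers, the write limit dominates (3 MB/s / 1 MB/s per shard), so at least 3 shards would be needed; if the emitter's batch time is the bottleneck instead, the second term wins and you either add shards or speed up the emitter.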
