The size of our DynamoDB table makes exports to Redshift very hard. Also, we re-export the whole DB every time.
What we should do instead is generate the table name based on the current date. That assumes the tables are created in advance, but that is doable.
Tables can also be archived to S3 once they are no longer needed for read access.
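A minimal sketch of the date-based naming idea, assuming monthly rotation and an `events_<YYYY>_<MM>` naming scheme (both the prefix and the rotation period are assumptions, not anything decided in this thread):

```python
from datetime import datetime, timezone

def table_name_for(now=None, prefix="events"):
    """Return the table name for the current month, e.g. 'events_2016_05'.

    Assumes tables named '<prefix>_<YYYY>_<MM>' have been created in
    advance, as suggested above.
    """
    now = now or datetime.now(timezone.utc)
    return "{}_{:04d}_{:02d}".format(prefix, now.year, now.month)
```

Writers would resolve the table name per request, so once the month rolls over, the old table becomes read-only and can be exported or archived without blocking new writes.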
@dagingaa how would that work with your ES export? I'm thinking of two tables alternating regularly. Can you consume streams from both tables and feed them into the same index?
Should work, as long as the data stays the same and is only written once. We just have to set up two Lambda listeners, one on each table's stream, and they will pipe the records in as if they came from a single table.
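The two-listener setup above could look roughly like this: one handler function deployed twice, once per table's stream, so records from both tables land in the same index. The string partition key `id` and the `index_into_es` helper are assumptions for illustration, not part of any real schema or library:

```python
def extract_docs(event):
    """Pull (doc_id, new_image) pairs out of a DynamoDB Streams event.

    Assumes each item has a string partition key named 'id'
    (hypothetical); REMOVE events are skipped since data is
    written once and never changes.
    """
    docs = []
    for record in event.get("Records", []):
        if record["eventName"] in ("INSERT", "MODIFY"):
            doc_id = record["dynamodb"]["Keys"]["id"]["S"]
            docs.append((doc_id, record["dynamodb"].get("NewImage", {})))
    return docs

def index_into_es(doc_id, image, index="items"):
    # Stub: a real deployment would call the Elasticsearch index/bulk
    # API here; omitted so the sketch stays self-contained.
    pass

def handler(event, context):
    """Lambda entry point; wire this same function to both tables'
    streams so writes from either table reach the same ES index."""
    for doc_id, image in extract_docs(event):
        index_into_es(doc_id, image)
```

Because items are immutable and IDs are unique across the two tables, indexing from both streams into one index cannot produce conflicting versions of the same document.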
WDYT @ggarber?