Description
Several developers have asked about sending events from Outpost to Amazon S3 - for example, storing each event (or batch of events) as Parquet files in an S3 bucket for later processing.
Current Thinking
We don’t plan to add S3 as a destination in the near term. Outpost is focused on routing events to event buses (like Kafka, Google Pub/Sub, or EventBridge) rather than acting as a storage or data-lake pipeline.
Here’s why we’re leaning that way:
- S3 is a better fit for native AWS tools. Services like Kinesis and EventBridge already support direct delivery to S3, and often handle the batching, compression, and schema concerns more effectively.
- Outpost is not optimized for storage workflows. Supporting formats like Parquet would require Outpost to handle batching, compression, and schema management, which shifts it away from its core purpose as an event delivery gateway.
- Complexity tradeoff. Adding S3 support would introduce new responsibilities (e.g. storage format logic, delivery guarantees, large file handling) that might be better solved downstream.
That said, we understand that some systems emit both real-time events and large state snapshots, and S3 is a common and affordable destination for those snapshots.
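In the meantime, this kind of archival is straightforward to build downstream of Outpost. Here is a minimal sketch of one such consumer: it batches event payloads into a JSONL object and uploads it to S3 with boto3. The bucket name, key scheme, and event shape are illustrative assumptions, not anything Outpost prescribes.

```python
import json
from datetime import datetime, timezone


def to_jsonl(events):
    """Serialize a batch of event dicts as newline-delimited JSON bytes."""
    return b"".join(
        json.dumps(e, separators=(",", ":")).encode() + b"\n" for e in events
    )


def upload_batch(events, bucket="my-event-archive"):
    """Upload one batch as a time-keyed JSONL object.

    Requires boto3 and AWS credentials; the key layout here
    (events/YYYY/MM/DD/HHMMSS.jsonl) is just one example.
    """
    import boto3  # imported lazily: only needed for the actual upload

    key = datetime.now(timezone.utc).strftime("events/%Y/%m/%d/%H%M%S.jsonl")
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=to_jsonl(events))
```

A consumer like this is also where Parquet conversion, compression, and schema handling would live if you need them, keeping those concerns out of the delivery path.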
We’re leaving this issue open to track interest and feedback.
If sending data to S3 from Outpost would solve a real problem for you, please add a 👍 or leave a comment describing your use case — especially if it involves batching, file formats (like Parquet or JSONL), or volume requirements.