Broadway

Concurrent and multi-stage data ingestion and data processing with Elixir


Build concurrent and multi-stage data ingestion and data processing pipelines with Elixir. Broadway allows developers to consume data efficiently from different sources, such as Amazon SQS, RabbitMQ, and others.

Documentation can be found at

Built-in features

Broadway removes the burden of defining concurrent GenStage topologies by providing a simple configuration API that automatically defines concurrent producers, concurrent processing, batch handling, and more, leading to both time- and cost-efficient ingestion and processing of data.

  • Back-pressure
  • Automatic acknowledgements at the end of the pipeline
  • Batching
  • Automatic restarts in case of failures
  • Graceful shutdown
  • Built-in testing
  • Partitioning
  • Rate-limiting (TODO)
  • Statistics / Metrics (TODO)
  • Back-off (TODO)
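Because testing support is built in, a pipeline can be exercised without a real message broker. Below is a minimal sketch, assuming a pipeline module (MyBroadway, as in the example further down) that was started with Broadway.DummyProducer as its producer module; it uses Broadway.test_messages/2 and the {:ack, ref, successful, failed} message from Broadway's testing API:

```elixir
defmodule MyBroadwayTest do
  use ExUnit.Case

  test "messages are processed and acknowledged" do
    # Push test messages into the running pipeline; returns a unique ref
    # identifying this batch of test messages.
    ref = Broadway.test_messages(MyBroadway, [1, 2, 3])

    # The test acknowledger sends a message to the test process once the
    # messages are acknowledged at the end of the pipeline.
    assert_receive {:ack, ^ref, successful, _failed}
    assert length(successful) == 3
  end
end
```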


Installation

Add :broadway to the list of dependencies in mix.exs:

def deps do
  [
    {:broadway, "~> 0.1.0"}
  ]
end

A quick example: SQS integration

Assuming you have added broadway_sqs as a dependency and configured your SQS credentials accordingly, you can consume Amazon SQS events in only 20 LOCs:

defmodule MyBroadway do
  use Broadway

  alias Broadway.Message

  def start_link(_opts) do
    Broadway.start_link(__MODULE__,
      name: __MODULE__,
      producers: [
        sqs: [
          module: {BroadwaySQS.Producer, queue_name: "my_queue"}
        ]
      ],
      processors: [
        default: [stages: 50]
      ],
      batchers: [
        s3: [stages: 5, batch_size: 10, batch_timeout: 1000]
      ]
    )
  end

  def handle_message(_processor_name, message, _context) do
    message
    |> Message.update_data(&process_data/1)
    |> Message.put_batcher(:s3)
  end

  def handle_batch(:s3, messages, _batch_info, _context) do
    # Send batch of messages to S3
    messages
  end

  defp process_data(data) do
    # Do some calculations, generate a JSON representation, process images.
    data
  end
end
Once your Broadway module is defined, you just need to add it as a child of your application supervision tree as {MyBroadway, []}.
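For instance, in a standard OTP application (MyApp and MyApp.Application are illustrative names), the pipeline goes into the children list of the application's start/2 callback:

```elixir
# lib/my_app/application.ex
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    children = [
      # Starts the Broadway pipeline under the application supervisor,
      # so it is restarted automatically in case of failures.
      {MyBroadway, []}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```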

API reference, examples, how-tos, and more at

Comparison to Flow

You may also be interested in Flow by Plataformatec. Both Broadway and Flow are built on top of GenStage. Flow is a more general and powerful abstraction than Broadway that focuses on data as a whole, providing features like aggregation, joins, windows, etc. Broadway focuses on events and on operational features, such as metrics, automatic acknowledgements, failure handling, and so on.


Copyright 2019 Plataformatec

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.