Combine messages from multiple Eventide streams into an aggregate stream
Start an aggregation of the events from three categories, productCatalog, productInventory, and productPricing, into productAggregate:

```ruby
AggregateStreams.start(['productCatalog', 'productInventory', 'productPricing'], 'productAggregate')
```
The aggregation runs in a background thread. To keep the Ruby process running while the thread remains active, start the aggregation from component-host:
```ruby
module SomeAggregator
  module Start
    def self.call
      AggregateStreams.start(['productCatalog', 'productInventory', 'productPricing'], 'productAggregate')
    end
  end
end

ComponentHost.start('aggregate-streams-example') do |host|
  host.register(SomeAggregator::Start)
end
```
AggregateStreams.start(…) accepts an optional block argument that can transform the events as they are copied to the aggregate stream.
Sometimes multiple input streams have message types that coincide with one another. To disambiguate them, assign a different type to the message data passed to the block:
```ruby
AggregateStreams.start(['productCatalog', 'productPricing'], 'someAggregate') do |message_data, input_category|
  if message_data.type == 'Initiated'
    if input_category == 'productCatalog'
      message_data.type = 'CatalogInitiated'
    elsif input_category == 'productPricing'
      message_data.type = 'PricingInitiated'
    end
  end

  message_data
end
```
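The renaming performed by the block above can be sketched in plain Ruby. The transform below is illustrative, not the library's implementation; it only assumes that the aggregator yields each message's data along with the category of the input stream it was read from:

```ruby
# Illustrative stand-in for the message data yielded by the aggregator;
# only the type attribute matters for this sketch.
MessageData = Struct.new(:type)

# Mimics the disambiguation block: events named 'Initiated' are renamed
# according to the input category they came from.
rename = ->(message_data, input_category) do
  if message_data.type == 'Initiated'
    case input_category
    when 'productCatalog'
      message_data.type = 'CatalogInitiated'
    when 'productPricing'
      message_data.type = 'PricingInitiated'
    end
  end

  message_data
end

renamed = rename.call(MessageData.new('Initiated'), 'productPricing')
renamed.type
# => "PricingInitiated"
```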
To avoid copying some messages from the input streams to the output, have the #transform block return nil or false:
```ruby
AggregateStreams.start(['someCategory', 'otherCategory'], 'someAggregation') do |message_data|
  next nil if message_data.type == 'SomeUnimportantEvent'

  message_data
end
```
Sometimes it is desirable to aggregate events into a different message store. Use the writer_session argument to specify a different session:
```ruby
settings = MessageStore::Postgres::Settings.build
writer_session = MessageStore::Postgres::Session.build(settings: settings)

AggregateStreams.start(['someCategory', 'otherCategory'], 'someAggregation', writer_session: writer_session)
```
Periodically, the aggregation records snapshot data. The interval can be varied via the snapshot_interval argument:
```ruby
snapshot_interval = 1000

AggregateStreams.start(['someCategory', 'otherCategory'], 'someAggregation', snapshot_interval: snapshot_interval)
```
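To make the effect of the interval concrete, here is a plain-Ruby sketch of interval-based snapshotting. The recording logic is illustrative only (the library's actual snapshot mechanism is not shown here); it assumes a position snapshot is taken once every snapshot_interval messages:

```ruby
snapshot_interval = 3
recorded_positions = []

# Process ten messages; record the position whenever a multiple of the
# snapshot interval is reached.
(1..10).each do |position|
  recorded_positions << position if (position % snapshot_interval).zero?
end

recorded_positions
# => [3, 6, 9]
```

A larger interval means fewer snapshot writes but more messages to re-read after a restart; a smaller interval trades write volume for faster recovery.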
The aggregate_streams library is released under the MIT License.