This repository has been archived by the owner on Dec 11, 2023. It is now read-only.

Add concurrency control parameter to Triggers #127

Open
jmcx opened this issue Mar 9, 2023 · 1 comment

jmcx commented Mar 9, 2023

Use case

There are cases in which you need to control the maximum number of concurrent in-flight events being sent to a target.

For example, 1000 messages land on an SQS queue in one go. The SQS source consumes these messages as fast as it can. The goal is to deliver them to a Service, but the service can only consume at most 10 messages in parallel.

Solution

One idea is to implement concurrency control on Triggers, such that I can specify the maximum number of unacknowledged in-flight requests at any given time. Once the limit is reached, the Trigger waits for an event to either be acknowledged by the target or dropped before delivering the next one. This would apply to both the MemoryBroker and the RedisBroker, with different fault-tolerance guarantees for each.

The benefit of this solution is that, because it lives on the broker, it works regardless of which source connector you're using. The drawback is that it necessarily requires a TriggerMesh broker.

An example Trigger configuration could look something like this; it includes the new maxConcurrency parameter, which I've tentatively added to the delivery section alongside retries and the dead-letter sink:

apiVersion: eventing.triggermesh.io/v1alpha1
kind: Trigger
metadata:
  name: broker-to-service
spec:
  broker:
    group: eventing.triggermesh.io
    kind: MemoryBroker
    name: mybroker
  target:
    ref:
      apiVersion: v1
      kind: Service
      name: my-service
  delivery:
    maxConcurrency: 10
    deadLetterSink:
      ref:
        apiVersion: v1
        kind: Service
        name: dead-letter-service
    backoffDelay: "PT0.5S"     # ISO8601 duration
    backoffPolicy: exponential # exponential or linear
    retry: 2
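
To make the intended behavior concrete, here is a minimal sketch, in Go, of how a broker's delivery loop could enforce such a limit with a counting semaphore: a slot is taken before dispatching an event and released only once the target acknowledges it or the event is dropped. This is purely illustrative, not TriggerMesh code; the deliver function and the maxConcurrency constant are hypothetical stand-ins for the real delivery path and for spec.delivery.maxConcurrency.

package main

import (
    "fmt"
    "math/rand"
    "sync"
    "time"
)

// deliver is a hypothetical stand-in for the broker pushing one event to the
// Trigger's target; it returns true if the target acknowledged the event.
func deliver(event string) bool {
    time.Sleep(time.Duration(50+rand.Intn(100)) * time.Millisecond) // simulated target latency
    return rand.Intn(10) != 0                                       // occasional failure
}

func main() {
    const maxConcurrency = 10 // would come from spec.delivery.maxConcurrency

    // A burst of events, e.g. 1000 messages drained from an SQS queue.
    events := make(chan string)
    go func() {
        for i := 0; i < 1000; i++ {
            events <- fmt.Sprintf("event-%d", i)
        }
        close(events)
    }()

    slots := make(chan struct{}, maxConcurrency) // counting semaphore
    var wg sync.WaitGroup

    for ev := range events {
        slots <- struct{}{} // block until one of the in-flight slots is free
        wg.Add(1)
        go func(ev string) {
            defer wg.Done()
            defer func() { <-slots }() // free the slot on ack or drop
            if !deliver(ev) {
                fmt.Println("dropped (would go to the dead-letter sink):", ev)
            }
        }(ev)
    }
    wg.Wait()
}

A buffered channel works as a counting semaphore here; in a real broker, releasing the slot would be tied to the full retry and dead-letter logic rather than to a single delivery attempt.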

An alternative would be to implement this on the source connectors themselves (or in both places), but that is more costly, as it requires updating every source component to reach a consistent feature set.

@shabemdadi

This is exactly what we are looking for right now with our SQS setup! Was bummed that there wasn't an equivalent to https://knative.dev/docs/serving/autoscaling/concurrency/ in the Knative Eventing framework.

Found this rate limiter option for the CloudEvents source: https://docs.triggermesh.io/1.25/sources/cloudevents/#configuring-rate-limiter-optional - but wasn't sure if that could apply to the SQS source https://docs.triggermesh.io/1.25/sources/awssqs/

Also found some discussion where people were leveraging a Kafka Broker with an SQS Source so that they could play around with Kafka parallelism configs (e.g. partitioning) to mimic concurrency settings - but that seems much more complex.

Is there any timeframe for when we could expect to see this feature implemented and available?
