
Implement QueueConsumer helper #2

Open
Tracked by #1
sam-lippert opened this issue Aug 25, 2023 · 2 comments

Comments

@sam-lippert
Contributor

No description provided.

@sam-lippert sam-lippert mentioned this issue Aug 25, 2023
@sam-lippert
Contributor Author

@nathanclevenger I've made some updates to consumer.js. I have a few questions about the rest of the implementation:

  1. How are the default export handlers used?
  2. The helper method sets alarm on the workerClass object. Should this be on the DO?
  3. How do we handle retry()? The DO alarm should already retry on fault.

@nathanclevenger
Member

nathanclevenger commented Aug 25, 2023

@sam-lippert

  1. Are you referring to the library, or to a project consuming the library? Essentially, both need to default-export all of the functions necessary for the library to run on Workers. So for this library, fetch, scheduled, and queue all need to be on the default export, and the DO class must also be exported for the DO to work. The DO should not only invoke queue on message-received events; by exposing queue on the default export, a consumer can be created that receives both Kafka events and Cloudflare Queue events with the same handler.
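The export shape described above might look roughly like this. A minimal sketch: the handler names fetch/scheduled/queue and the exported DO class are what Cloudflare's module syntax expects, but the class name and handler bodies here are illustrative, not the library's actual code:

```javascript
// Sketch of the module-worker entrypoint (illustrative names).
// fetch, scheduled, and queue all live on one default-exported object,
// and the DO class is exported alongside it.

class QueueConsumer {
  constructor(state, env) {
    this.state = state
    this.env = env
  }
  async alarm() {
    // poll Kafka here, then feed the messages to the same queue
    // handler that Cloudflare Queues invokes on the worker
  }
}

const worker = {
  async fetch(request, env) {
    return new Response('ok')
  },
  async scheduled(event, env) {
    // cron entrypoint
  },
  async queue(batch, env) {
    // shared handler: receives Cloudflare Queue batches directly,
    // and Kafka batches via the DO above
  },
}

// In the real module these are top-level export statements:
// export default worker
// export { QueueConsumer }
```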

  2. Yes, alarm should be on the DO. Given that queue will run on the worker when invoked by Cloudflare Queues, but on the DO when invoked by the DO that is polling Kafka, there needs to be some level of encapsulation / code sharing so that most of the logic is available to both the worker and the DO.
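One way to get that encapsulation is to keep the batch logic in a plain function that both entrypoints call, and re-arm the alarm on the DO's own storage. A sketch: `handleMessages` and `poll` are hypothetical names, though `state.storage.setAlarm` is the real Durable Object API:

```javascript
// Shared batch logic, callable from the worker's queue() handler and
// from the DO's alarm() alike (sketch; names are illustrative).
async function handleMessages(batch) {
  for (const msg of batch) {
    // ... process one message ...
  }
}

class KafkaConsumerDO {
  constructor(state, env) {
    this.state = state
    this.env = env
  }

  async alarm() {
    const batch = await this.poll()    // hypothetical Kafka poll helper
    await handleMessages(batch)        // same logic the worker's queue() uses
    // The alarm is set on the DO's own storage, not on the worker class:
    await this.state.storage.setAlarm(Date.now() + 30_000)
  }

  async poll() {
    return []                          // placeholder
  }
}
```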

  3. Yes, if an exception is thrown while a DO is executing alarm, we get an automatic logarithmic backoff of retries. I think it makes sense for retryAll to throw and use that logic, at least at first; eventually we would need to make the number of retries configurable, allowing more than 4 retries over longer than a 1-2 minute period. But for retry, where some items succeed and only one, say, fails and needs a retry, we need to think through how to handle that properly in Kafka. For example, imagine a recursive GPT function, where one initial prompt gets results that are then recursively called 50 times. With GPT-4 it would be quite normal to get 1-3 errors out of those 50 requests, but we wouldn't want to retry all 50, just the few that failed, because retrying everything in that case could cost over $1 of unnecessary waste.
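The per-message case could start from something like this. A sketch only: the function name and retry mechanism are assumptions, not the library's API. Process every message, collect only the failures, and hand just those back for retry:

```javascript
// Sketch of selective retry: succeed-or-collect per message, so a few
// failures out of 50 don't force the whole batch to be reprocessed.
async function processWithSelectiveRetry(messages, handle) {
  const failed = []
  for (const msg of messages) {
    try {
      await handle(msg)
    } catch (err) {
      failed.push(msg)   // only these will be retried
    }
  }
  // Caller re-enqueues these, or throws when failed is non-empty to
  // trigger the DO alarm's automatic backoff instead.
  return failed
}
```

In the recursive-GPT example above, the 1-3 failed calls would come back in `failed` while the other successful results are kept, instead of throwing and re-running all 50.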
