
Queue-based processing #9

Open
Tracked by #1
sam-lippert opened this issue Aug 26, 2023 · 2 comments

Comments

@sam-lippert
Contributor

No description provided.

@sam-lippert sam-lippert mentioned this issue Aug 26, 2023
@sam-lippert sam-lippert self-assigned this Aug 26, 2023
@sam-lippert
Contributor Author

@nathanclevenger When the GPT Consumer class finishes processing the message, where should the result be stored? If the initial HTTP request is async and returns a 201 or 202, where should I get the data once it comes back from OpenAI?

@nathanclevenger
Member

@sam-lippert We should probably offer two or three alternatives. If the request is an async process, you should be able to specify:
a) a webhook or callback URL (which we should facilitate via a Durable Object and/or a Webhook queue)
b) a nextQueue, which could be a Cloudflare Workers Queue or a Kafka queue via the Kafka.do abstraction
c) a poll-based approach, where the client can call back to get the status

For persisting, we could just write the result to a Durable Object. We should be making all requests to OpenAI via a DO anyway, to ensure the request isn't lost. That also lets us continue a given GPT conversation by re-requesting the completion of an input, or by adding another user or function message for the next completion.
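A rough sketch of how those three delivery options might look in a consumer, assuming in every case the result is persisted first (per the Durable Object point above). All names here (`Delivery`, `CompletionJob`, `Effects`, `dispatchCompletion`) are hypothetical, not an existing API:

```typescript
// Hypothetical routing of a finished completion to the delivery option the
// client requested: (a) webhook callback, (b) nextQueue, (c) poll.
type Delivery =
  | { kind: "webhook"; callbackUrl: string }
  | { kind: "nextQueue"; queueName: string }
  | { kind: "poll" };

interface CompletionJob {
  id: string;
  delivery: Delivery;
}

// Side effects abstracted so the dispatch logic is testable; in a real
// Worker these would hit fetch(), a Queue binding, and a Durable Object.
interface Effects {
  postWebhook(url: string, body: unknown): Promise<void>;
  enqueue(queue: string, body: unknown): Promise<void>;
  persist(id: string, body: unknown): Promise<void>;
}

async function dispatchCompletion(
  job: CompletionJob,
  result: unknown,
  fx: Effects,
): Promise<string> {
  // Always persist first, so the result survives and the conversation
  // can be continued later regardless of delivery mode.
  await fx.persist(job.id, result);
  switch (job.delivery.kind) {
    case "webhook":
      await fx.postWebhook(job.delivery.callbackUrl, { id: job.id, result });
      return "webhook";
    case "nextQueue":
      await fx.enqueue(job.delivery.queueName, { id: job.id, result });
      return "nextQueue";
    case "poll":
      // Nothing to push; the client polls a status endpoint that reads
      // the persisted result.
      return "poll";
  }
}
```

The discriminated union makes the three alternatives mutually exclusive in the request payload, and persisting unconditionally keeps option (c) available even when (a) or (b) was requested.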

@sam-lippert sam-lippert removed their assignment Nov 4, 2023