
[Feature Request]: Vercel Functions for Hobby can now run up to 60 seconds #4781

Open · maxduke opened this issue May 26, 2024 · 5 comments
Labels: enhancement (New feature or request)

Comments

maxduke commented May 26, 2024

Problem Description

When the application is deployed on Vercel Hobby, you'll get a timeout error if the API returns no data within about 25 seconds.

Solution Description

Vercel Functions for Hobby can now run up to 60 seconds.
https://vercel.com/changelog/vercel-functions-for-hobby-can-now-run-up-to-60-seconds
I'm wondering if the above change can be implemented.
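
For what it's worth, a minimal sketch of how a route could opt into the longer Node.js duration, assuming a Next.js App Router handler (the path and handler body are illustrative, not NextChat's actual code):

```ts
// app/api/example/route.ts — illustrative route, not NextChat's file layout.
// Opt this function into the Node.js runtime and raise its duration cap.
export const runtime = "nodejs";
export const maxDuration = 60; // seconds; the new Hobby maximum

export async function GET() {
  // Long-running work can now take up to 60 s before Vercel times out.
  return Response.json({ ok: true });
}
```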

Alternatives Considered

No response

Additional Context

No response

maxduke added the enhancement (New feature or request) label May 26, 2024
Dean-YZG (Contributor) commented

The timeout is currently set to 60 seconds in NextChat. You might want to check whether something else is causing it to time out at 25 seconds. Do you have any error screenshots? Let's examine the causes of the early timeout.
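
For context, a client-side timeout of that kind is usually wired with an AbortController; a minimal sketch (the constant name and wrapper are illustrative, not necessarily NextChat's actual code):

```ts
// Sketch of a 60 s client-side request timeout via AbortController.
// REQUEST_TIMEOUT_MS and fetchWithTimeout are illustrative names.
const REQUEST_TIMEOUT_MS = 60_000;

async function fetchWithTimeout(url: string, init: RequestInit = {}) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), REQUEST_TIMEOUT_MS);
  try {
    // Rejects with an AbortError if no response arrives within 60 s.
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
```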

maxduke (Author) commented May 27, 2024

> The timeout is currently set to 60 seconds in NextChat. You might want to check whether something else is causing it to time out at 25 seconds. Do you have any error screenshots? Let's examine the causes of the early timeout.

So I think the Edge runtime may be involved. According to https://vercel.com/docs/functions/configuring-functions/duration:

> You can't configure a maximum duration for functions using the Edge runtime. They can run indefinitely provided they send an initial response within 25 seconds.
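
For reference, a minimal sketch of a route pinned to the Edge runtime, where no maxDuration can be set (the handler body is illustrative):

```ts
// Sketch: an Edge route cannot set maxDuration; it may stream indefinitely,
// but the first response byte must be sent within 25 s.
export const runtime = "edge";

export async function POST(req: Request) {
  // ...proxy the request upstream and stream the reply back...
  return new Response("ok");
}
```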

SuiYunsy commented

About api/openai/[...path]:
If it uses the Edge runtime, streaming can continue indefinitely as long as the first byte arrives within 25 seconds.
If it uses the Node.js runtime, the response is cut off once the total transmission time exceeds maxDuration, regardless of when the first byte arrives.
So the conclusion is to keep api/openai/[...path] as it is: an Edge function.

Hk-Gosuto#258 (comment)

maxduke closed this as completed May 27, 2024
SuiYunsy commented Jun 1, 2024

@maxduke
Could you please reopen the issue? I think there should be a way to avoid the problem of Edge Functions timing out when they don't receive an API response within 25 seconds. I found the following two links on GitHub, and both approaches set up a keep-alive heartbeat in the router:

vercel/ai#487 (comment)

> As the docs explain, Edge Functions don't have a maximum streaming time once they started streaming. So a possible workaround for this issue could be to start streaming empty strings with a 2 second interval so that we keep the connection alive while the LLM API call loads.

https://github.com/orgs/vercel/discussions/3553#discussioncomment-7131497

> Specifically, apart from sending a response within 25 seconds (which I currently do), you actually have to keep sending messages periodically (unclear at what frequency, but 15 seconds resolves the issue) to keep the stream alive.

But I'm a programming newbie and can't really handle the code😂😂😂
I hope some skilled expert can implement the heartbeat to solve the 25s timeout issue. Thanks a lot!
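
A minimal sketch of what that heartbeat could look like, assuming a Next.js Edge route handler; the interval, upstream URL, and header handling are illustrative, not NextChat's actual proxy code:

```ts
// Sketch of the keep-alive heartbeat: stream SSE comment lines while the
// upstream LLM call is still loading, then pipe the real response through.
export const runtime = "edge";

const HEARTBEAT_MS = 15_000; // the linked discussion reports 15 s works

export async function POST(req: Request) {
  const encoder = new TextEncoder();
  const body = await req.text();

  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      // An SSE comment line (": ...\n\n") is ignored by clients but counts
      // as output, so the connection stays open past the 25 s limit.
      const heartbeat = setInterval(() => {
        controller.enqueue(encoder.encode(": keep-alive\n\n"));
      }, HEARTBEAT_MS);

      try {
        // Illustrative upstream call; the real logic lives in the
        // api/openai/[...path] route.
        const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            Authorization: req.headers.get("Authorization") ?? "",
          },
          body,
        });
        clearInterval(heartbeat); // real bytes are about to flow
        const reader = upstream.body!.getReader();
        for (;;) {
          const { done, value } = await reader.read();
          if (done) break;
          controller.enqueue(value);
        }
        controller.close();
      } catch (e) {
        clearInterval(heartbeat);
        controller.error(e);
      }
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```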

maxduke reopened this Jun 1, 2024
maxduke (Author) commented Jun 1, 2024

> @maxduke Could you please reopen the issue? […] I hope some skilled expert can implement the heartbeat to solve the 25s timeout issue. Thanks a lot!

Reopened. Thanks a lot for the information. Let's see.
