
Timeout error while using Stream Text generation from ai-sdk (Free plan) #1636

Closed
reveurguy opened this issue May 17, 2024 · 4 comments

@reveurguy

Description

While using Stream Text generation from ai-sdk, the function call times out. The generation starts and runs for a bit, after which I receive this error and the generation stops.

[Screenshot of the timeout error]

This is the log from Vercel:

[Screenshot of the Vercel log]

This is the error console log in production.

Code example

This is the code in the /layout.tsx file:

export const dynamic = 'force-dynamic';
export const maxDuration = 60;

This is the code in the /page.tsx file:

  async function output() {
    const { output } = await generate(prompt, gpt3Configurations);
    const startTime = Date.now();
    let endTime = 0;
    for await (const delta of readStreamableValue(output)) {
      setGpt3Output((currentGeneration) => `${currentGeneration}${delta}`);
      endTime = Date.now();
    }
    const time = endTime - startTime;
    setGpt3Time(time);
  }

  async function output4() {
    const { output } = await generate4(prompt, gpt4Configurations);
    const startTime = Date.now();
    let endTime = 0;
    for await (const delta of readStreamableValue(output)) {
      setGpt4Output((currentGeneration) => `${currentGeneration}${delta}`);
      endTime = Date.now();
    }
    const time = endTime - startTime;
    setGpt4Time(time);
  }

  async function output4o() {
    const { output } = await generate4o(prompt, gpt4oConfigurations);
    const startTime = Date.now();
    let endTime = 0;
    for await (const delta of readStreamableValue(output)) {
      setGpt4oOutput((currentGeneration) => `${currentGeneration}${delta}`);
      endTime = Date.now();
    }
    const time = endTime - startTime;
    setGpt4oTime(time);
  }

  const handleRun = (e: any) => {
    if (prompt) {
      e.preventDefault();
      Promise.all([
        Promise.resolve(output()),
        Promise.resolve(output4()),
        Promise.resolve(output4o()),
      ]).catch((error) => {
        console.error('An error occurred while running the functions:', error);
        toast.error('An error occurred while running the functions');
      });
    }
  };
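As an aside, async functions already return promises, so the `Promise.resolve` wrappers in `handleRun` are redundant and the calls can be passed to `Promise.all` directly. A minimal self-contained sketch of that pattern (the placeholder `taskA`/`taskB` functions stand in for the real generation calls):

```typescript
// Placeholder async tasks standing in for output(), output4(), output4o().
async function taskA(): Promise<string> {
  return 'gpt-3.5 result';
}

async function taskB(): Promise<string> {
  return 'gpt-4 result';
}

async function runAll(): Promise<string[]> {
  // Async function calls are already promises; pass them straight to Promise.all.
  return Promise.all([taskA(), taskB()]);
}

runAll()
  .then((results) => console.log(results.join(' | ')))
  .catch((error) => console.error('An error occurred:', error));
```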

This is the code for the generate, generate4, and generate4o functions in the /action.ts file:

'use server';

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createStreamableValue } from 'ai/rsc';

type Config = {
  maxTokens: number;
  temperature: number;
  topP: number;
  presencePenalty: number;
  frequencyPenalty: number;
};

export async function generate(input: string, config: Config) {
  'use server';
  const stream = createStreamableValue('');
  (async () => {
    const { textStream } = await streamText({
      model: openai('gpt-3.5-turbo'),
      prompt: input,
      maxTokens: config.maxTokens,
      temperature: config.temperature,
      topP: config.topP,
      presencePenalty: config.presencePenalty,
      frequencyPenalty: config.frequencyPenalty,
    });
    for await (const delta of textStream) {
      stream.update(delta);
    }
    stream.done();
  })();

  return { output: stream.value };
}

export async function generate4(input: string, config: Config) {
  'use server';
  const stream = createStreamableValue('');
  (async () => {
    const { textStream } = await streamText({
      model: openai('gpt-4'),
      prompt: input,
      maxTokens: config.maxTokens,
      temperature: config.temperature,
      topP: config.topP,
      presencePenalty: config.presencePenalty,
      frequencyPenalty: config.frequencyPenalty,
    });
    for await (const delta of textStream) {
      stream.update(delta);
    }
    stream.done();
  })();

  return { output: stream.value };
}

export async function generate4o(input: string, config: Config) {
  'use server';
  const stream = createStreamableValue('');
  (async () => {
    const { textStream } = await streamText({
      model: openai('gpt-4o'),
      prompt: input,
      maxTokens: config.maxTokens,
      temperature: config.temperature,
      topP: config.topP,
      presencePenalty: config.presencePenalty,
      frequencyPenalty: config.frequencyPenalty,
    });
    for await (const delta of textStream) {
      stream.update(delta);
    }
    stream.done();
  })();

  return { output: stream.value };
}

The handleRun() function is called on click of the submit button.

Additional context

No response

@admineral

Maybe this helps:
https://vercel.com/guides/streaming-from-llm

@lgrammel added the ai/rsc and bug labels, and later removed the bug label, on May 23, 2024
@ElectricCodeGuy

`export const maxDuration = 60;` should be placed in the page.tsx file.

@jeremyphilemon
Contributor

@ElectricCodeGuy is right: server actions inherit the maxDuration set in the page they are called from, so I would move it into page.tsx.

I was able to reproduce the error and setting the max duration in page.tsx fixed it!
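For anyone landing here, the suggested fix is just moving the route segment config into the page that invokes the actions. A sketch of what the page file might look like, assuming standard Next.js App Router conventions (the component body is a placeholder for the existing page):

```typescript
// app/page.tsx
// Route segment config: server actions called from this page inherit maxDuration.
export const dynamic = 'force-dynamic';
export const maxDuration = 60; // seconds; 60 is the cap on Vercel's free/Hobby plan

export default function Page() {
  // ...the existing page component that calls output(), output4(), output4o()
  return null;
}
```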

@ElectricCodeGuy

I created a small example project showing how you could implement the new ai/rsc :)
https://github.com/ElectricCodeGuy/SupabaseAuthWithSSR
