
HuggingFaceStream is giving stream not compatible with StreamingTextResponse #2485

Open
patrickm68 opened this issue Jul 30, 2024 · 2 comments

Comments

@patrickm68

Description

Previously I was using AI SDK v2.2.37, which worked fine with Hugging Face, until I had to implement Google Generative AI. I did not want to use the legacy Google Generative AI integration, so I updated the ai SDK to v3.2.37.

After the update, Google Generative AI works, and the legacy Hugging Face integration also works locally. But when deployed to a Vercel preview, Hugging Face fails: passing the stream produced by HuggingFaceStream to StreamingTextResponse results in a 405 Method Not Allowed.
The same code works for both Google and Hugging Face locally.

Is this a bug in the package or in Vercel?

Code example

import { authOptions } from "@/auth";
import { getServerSession } from "next-auth";
import { HuggingFaceStream, StreamingTextResponse } from "ai";

import { getModelByName } from "@/app/lib/models";
import {
  getGoogleChatResponseAsStream,
  getHuggingfaceChatResponseAsStream,
} from "@/app/lib/ai-providers";
import { textGenerationStream } from "@huggingface/inference";
import { getStreamEvents } from "@/app/lib/ai-providers/stream-events";

export async function POST(req: Request) {
  const token = process.env.HUGGINGFACE_API_KEY;
  const session = await getServerSession(authOptions);

  if (!session) {
    return new Response(null, { status: 401, statusText: "unauthorized" });
  }

  const userId = session?.user.id;

  const {
    messages,
    id: chatId,
    aiModel,
    systemPrompt,
    temperature,
    topK,
    topP,
    repetitionPenalty,
  } = await req.json();

  const lastMsg = messages[messages.length - 1];

  try {
    const modelSpec = getModelByName(aiModel);
    if (!modelSpec) {
      return new Response("Model not found", { status: 400, statusText: "Model not found" });
    }

    const parameters = {
      ...modelSpec.options.parameters,
      systemPrompt:
        systemPrompt !== undefined ? systemPrompt : modelSpec.options.parameters.systemPrompt,
      temperature:
        temperature !== undefined
          ? parseFloat(String(temperature))
          : modelSpec.options.parameters.temperature,
      top_k: topK !== undefined ? parseInt(String(topK), 10) : modelSpec.options.parameters.top_k,
      top_p: topP !== undefined ? parseFloat(String(topP)) : modelSpec.options.parameters.top_p,
      repetition_penalty:
        repetitionPenalty !== undefined
          ? parseFloat(String(repetitionPenalty))
          : modelSpec.options.parameters.repetition_penalty,
    };
    const prompt = modelSpec.promptConstructor([{ role: "user", content: lastMsg.content }]);

    let stream: ReadableStream;
    if (modelSpec.provider === "huggingface") {
      // stream = getHuggingfaceChatResponseAsStream(prompt, {
      //   userId,
      //   lastMsg,
      //   chatId,
      //   aiModel,
      //   parameters,
      //   token,
      //   otherOptions: modelSpec.options,
      // });
      const response = textGenerationStream({
        inputs: prompt,
        accessToken: token,
        ...modelSpec.options,
        parameters,
      });
      stream = HuggingFaceStream(
        response,
        getStreamEvents(userId, lastMsg, chatId, aiModel, parameters),
      );
    } else if (modelSpec.provider === "google") {
      stream = await getGoogleChatResponseAsStream(prompt, {
        userId,
        lastMsg,
        chatId,
        aiModel,
        parameters,
      });
    } else {
      stream = new ReadableStream();
    }

    return new StreamingTextResponse(stream);
  } catch (error) {
    return new Response("Service unavailable", { status: 500, statusText: JSON.stringify(error) });
  }
}

Additional context

getStreamEvents() simply returns an object with onStart and onFinish callbacks:

{
  onStart: () => {},
  onFinish: (completion: string) => {},
}
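For reference, the callback plumbing can be sketched as a plain wrapper around an async token stream. This is a hypothetical standalone version for illustration, not the actual getStreamEvents or HuggingFaceStream implementation; the names and shapes are assumptions:

```typescript
// Hypothetical sketch: wrap an async iterable of text tokens in a
// ReadableStream, invoking onStart before the first token and onFinish
// with the accumulated completion after the last one.
type StreamCallbacks = {
  onStart?: () => void | Promise<void>;
  onFinish?: (completion: string) => void | Promise<void>;
};

function wrapWithCallbacks(
  tokens: AsyncIterable<string>,
  callbacks: StreamCallbacks,
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  let completion = "";
  return new ReadableStream({
    async start(controller) {
      await callbacks.onStart?.();
      for await (const token of tokens) {
        completion += token;
        controller.enqueue(encoder.encode(token));
      }
      await callbacks.onFinish?.(completion);
      controller.close();
    },
  });
}
```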
@lgrammel lgrammel added bug Something isn't working ai/provider labels Aug 1, 2024
@patrickm68
Author

Is there anything I can do to resolve the issue?

@lgrammel
Collaborator

lgrammel commented Aug 8, 2024

@patrickm68 we need to implement a HuggingFace provider.
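Until a dedicated provider lands, one thing worth trying (an untested sketch, not a confirmed fix for this deployment issue) is to bypass StreamingTextResponse and return the ReadableStream via a plain Response with explicit streaming headers:

```typescript
// Hypothetical workaround: return the stream through a plain Response
// with explicit headers instead of StreamingTextResponse.
function streamingResponse(stream: ReadableStream<Uint8Array>): Response {
  return new Response(stream, {
    status: 200,
    headers: {
      "Content-Type": "text/plain; charset=utf-8",
      "Cache-Control": "no-cache",
    },
  });
}
```

In the route handler above, this would replace `return new StreamingTextResponse(stream);` with `return streamingResponse(stream);`.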

@lgrammel lgrammel removed the bug Something isn't working label Aug 21, 2024