
How can I use the Moderation API from OpenAI with the Vercel AI SDK? There was nothing about this in the Vercel AI docs #2555

Closed
@Akshmit11

Description

Feature Description

I am asking users to enter some text and submit it, and I am using the Vercel AI SDK to generate a response from ChatGPT for that text. I also want to check whether the text contains any profanity. How can I achieve this?

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const checkProfanity = async (text: string) => {
  try {
    // **what to do about this..->-> open.moderation.create**
    const response = await openai.moderation.create({
      input: text,
    });
    const { results } = response.data;
    return results[0].flagged;
  } catch (error) {
    console.error("Error checking profanity:", error);
    return false;
  }
};


const isProfane = await checkProfanity(text);

if (isProfane) {
  throw new Error("The text contains inappropriate content. Please modify your text and try again.");
}

const aiResponse = await generateText({
  model: openai('gpt-3.5-turbo'),
  prompt: `my prompt comes here....`,
});

Is there any way of achieving this functionality?
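One possible approach: the `@ai-sdk/openai` provider only supplies language models to the AI SDK and does not expose the Moderation endpoint, but the official `openai` npm package does (`client.moderations.create`), and the two packages can be used side by side in the same project. A minimal sketch under that assumption; the `ModerationClient` structural type and the `checkProfanity` helper are illustrative, not part of either SDK:

```typescript
// In a real project: import OpenAI from "openai";
// (commented out here so the sketch stays self-contained)

// Structural type describing the slice of the official client this helper uses.
type ModerationClient = {
  moderations: {
    create: (params: { model: string; input: string }) => Promise<{
      results: { flagged: boolean }[];
    }>;
  };
};

// Returns true when the Moderation endpoint flags the text.
const checkProfanity = async (
  client: ModerationClient,
  text: string,
): Promise<boolean> => {
  const response = await client.moderations.create({
    model: "omni-moderation-latest", // assumption: current moderation model name
    input: text,
  });
  // The official SDK returns `results` on the response itself (no `.data`).
  return response.results[0].flagged;
};
```

With a real client this becomes `const isProfane = await checkProfanity(new OpenAI(), userText);`, and the `generateText` call can then be gated on the result exactly as in the snippet above.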

Use Case

No response

Additional context

No response

Metadata

Assignees

No one assigned

    Labels

    duplicate: This issue or pull request already exists

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
