
OpenAIModerationChain with OpenAI SDK >= v1.0.0 Broken #13685

Closed

MrNickBreen opened this issue Nov 21, 2023 · 9 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: models Related to LLMs or chat model modules

Comments

@MrNickBreen

System Info

It appears that OpenAI's SDK v1.0.0 update introduced breaking changes that require migration.

Running the LangChain OpenAIModerationChain with OpenAI SDK >= v1.0.0 produces the following error:

You tried to access openai.Moderation, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

After briefly reading the migration steps, I believe the suggested migration is from openai.Moderation.create() to client.moderations.create().

I believe the validate_environment method of OpenAIModerationChain needs updating from values["client"] = openai.Moderation to the recommended client.moderations syntax. (https://api.python.langchain.com/en/latest/_modules/langchain/chains/moderation.html#OpenAIModerationChain)

Who can help?

No response

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

With OpenAI SDK >= v1.0.0 try to use OpenAIModerationChain to moderate a piece of content.

Error appears:

You tried to access openai.Moderation, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

Expected behavior

When using OpenAIModerationChain with OpenAI SDK >= v1.0.0 I expect the chain to properly moderate content and not fail with an error.


dosubot bot commented Nov 21, 2023

🤖

Hello @MrNickBreen,

I'm Dosu, an automated helper bot here to assist you with your issues, questions, and contributions related to the LangChain repository. While we wait for a human maintainer, I'm here to provide initial support. I'm looking forward to helping you resolve your issue!

Thank you for bringing this to our attention. It seems that the OpenAI SDK v1.0.0 update has indeed introduced some changes that are causing compatibility issues with the OpenAIModerationChain in the LangChain framework.

Based on the error message you provided, the issue seems to be with the deprecated openai.Moderation which is no longer supported in OpenAI SDK v1.0.0 and above. The suggested migration is indeed from openai.Moderation.create() to client.moderations.create().

To resolve this issue, you can modify the validate_environment method and _call method in the OpenAIModerationChain class as follows:

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        openai_api_key = get_from_dict_or_env(
            values, "openai_api_key", "OPENAI_API_KEY"
        )
        openai_organization = get_from_dict_or_env(
            values,
            "openai_organization",
            "OPENAI_ORGANIZATION",
            default="",
        )
        try:
            import openai

            openai.api_key = openai_api_key
            if openai_organization:
                openai.organization = openai_organization
            values["client"] = openai.ContentFilter
        except ImportError:
            raise ImportError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        return values

    def _call(
        self,
        inputs: Dict[str, str],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        text = inputs[self.input_key]
        results = self.client.create(prompt=text)
        output = self._moderate(text, results["result"])
        return {self.output_key: output}

In the validate_environment method, openai.Moderation is replaced with openai.ContentFilter to use the new content filter endpoint. In the _call method, self.client.create(text) is replaced with self.client.create(prompt=text) to match the new API's parameter name, and results["results"][0] is replaced with results["result"] to match the new API's response structure.

Please note that this is a suggested solution based on the information provided and the current state of the LangChain codebase. It's always a good idea to thoroughly test the changes in your local environment to ensure everything works as expected.

Let me know if this helps or if you need further assistance.


@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Nov 21, 2023
@MrNickBreen
Author

dosubot I think you have the right idea, but I believe we still want to use the moderation API and not the "content filter endpoint".


dosubot bot commented Nov 21, 2023

🤖

Based on the information you've provided and the code in the LangChain repository, you can modify the validate_environment and _call methods in the OpenAIModerationChain class to use the new client.moderations.create() syntax as follows:

In the validate_environment method, you should assign openai itself to values["client"] instead of openai.Moderation. This is due to the deprecation of the Moderation class in OpenAI SDK v1.0.0 and above. The moderation functionality is now accessed directly from the openai client.

Here's how the modified validate_environment method would look:

@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
    """Validate that api key and python package exists in environment."""
    openai_api_key = get_from_dict_or_env(
        values, "openai_api_key", "OPENAI_API_KEY"
    )
    openai_organization = get_from_dict_or_env(
        values,
        "openai_organization",
        "OPENAI_ORGANIZATION",
        default="",
    )
    try:
        import openai

        openai.api_key = openai_api_key
        if openai_organization:
            openai.organization = openai_organization
        values["client"] = openai
    except ImportError:
        raise ImportError(
            "Could not import openai python package. "
            "Please install it with `pip install openai`."
        )
    return values

In the _call method, you should replace the call to self.client.create(text) with self.client.moderations.create({"document": {"content": text}}). This is the new syntax for creating a moderation request in OpenAI SDK v1.0.0 and above.

Here's how the modified _call method would look:

def _call(
    self,
    inputs: Dict[str, str],
    run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
    text = inputs[self.input_key]
    results = self.client.moderations.create({"document": {"content": text}})
    output = self._moderate(text, results["results"][0])
    return {self.output_key: output}

Please note that the structure of the results dictionary returned by client.moderations.create() might have changed in the new version of the SDK. You might need to adjust the indexing in the _moderate method call accordingly.

Unfortunately, I couldn't find the exact structure of the results dictionary in the repository or in the OpenAI SDK documentation. You might need to print the results dictionary to the console to inspect its structure.
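
For reference, in openai>=1.0.0 responses are typed objects read with attribute access rather than dicts. A minimal sketch of the difference, using dataclasses as stand-ins for the SDK's real response models (the field names below follow the thread's later code, not a verified schema):

```python
# Minimal sketch: openai>=1.0.0 returns typed response objects, so fields are
# read with attribute access instead of dict indexing. These classes are
# stand-ins for the SDK's real models, just to illustrate the shape.

from dataclasses import dataclass
from typing import List


@dataclass
class Moderation:
    flagged: bool


@dataclass
class ModerationCreateResponse:
    id: str
    model: str
    results: List[Moderation]


response = ModerationCreateResponse(
    id="modr-123",
    model="text-moderation-stable",
    results=[Moderation(flagged=False)],
)

# v0.x style (no longer works):   response["results"][0]["flagged"]
# v1.x style (attribute access):  response.results[0].flagged
print(response.results[0].flagged)  # False
```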

I hope this helps! If you have any further questions or run into any issues, please let me know.




@luc-kalaora

luc-kalaora commented Nov 28, 2023

It would be great to modify the code so we can use Moderation with openai >= 1.0 without overriding methods.

@mrchaos

mrchaos commented Nov 28, 2023

For openai >= 1.0:

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        openai_api_key = get_from_dict_or_env(
            values, "openai_api_key", "OPENAI_API_KEY"
        )
        openai_organization = get_from_dict_or_env(
            values,
            "openai_organization",
            "OPENAI_ORGANIZATION",
            default="",
        )
        try:
            from openai import OpenAI  # <--- changed

            client = OpenAI()  # <--- changed

            values["api_key"] = client.api_key  # <--- changed
            if openai_organization:
                # TODO: The 'openai.organization' option isn't read in the
                # client API. You will need to pass it when you instantiate
                # the client, e.g. 'OpenAI(organization=openai_organization)'
                values["openai_organization"] = openai_organization  # <--- changed
            values["client"] = client  # <--- changed
        except ImportError:
            raise ImportError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        return values

    @property
    def input_keys(self) -> List[str]:
        """Expect input key.

        :meta private:
        """
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        """Return output key.

        :meta private:
        """
        return [self.output_key]

    def _moderate(self, text: str, results: dict) -> str:
        if results.flagged:  # <--- changed
            error_str = "Text was found that violates OpenAI's content policy."
            if self.error:
                raise ValueError(error_str)
            else:
                return error_str
        return text

    def _call(
        self,
        inputs: Dict[str, str],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        text = inputs[self.input_key]
        results = self.client.moderations.create(input=text)  # <--- changed
        output = self._moderate(text, results.results[0])  # <--- changed
        return {self.output_key: output}

@luc-kalaora

luc-kalaora commented Nov 28, 2023

Thanks mrchaos for your suggestion.
In validate_environment, you should replace client = OpenAI() with client = OpenAI(api_key=openai_api_key).
I would still prefer that LangChain integrate a proper fix.

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Feb 27, 2024

dosubot bot commented Feb 27, 2024

🤖

Hi, @MrNickBreen,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, the issue involves compatibility issues with the Langchain OpenAIModerationChain due to the OpenAI SDK v1.0.0 update. The suggested migration involves updating the codebase to use the 1.0.0 interface, specifically changing openai.Moderation.create() to client.moderations.create(). Dosu-bot provided a detailed response with code modifications to resolve the issue, and you expressed the preference to continue using the moderation API. Other users, such as luc-kalaora and mrchaos, also contributed suggestions and code modifications to address the issue.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.

Thank you for your understanding and cooperation.

@mattflo
Contributor

mattflo commented Mar 5, 2024

This is fixed by this PR which also includes an async implementation: #18537

@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Mar 5, 2024
ccurme added a commit that referenced this issue May 10, 2024
…8537)

Thank you for contributing to LangChain!

## PR title
langchain[patch]: fix `OpenAIModerationChain` and implement async

## PR message
Description: fix `OpenAIModerationChain` and implement async

Issues: 
- #18533 
- #13685

Dependencies: none
Twitter handle: mattflo


## Add tests and docs
 
Existing documentation is broken:
https://python.langchain.com/docs/guides/safety/moderation


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Emilia Katari <emilia@outpace.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
ccurme added a commit that referenced this issue May 10, 2024
…8537)

Thank you for contributing to LangChain!

## PR title
lancghain[patch]: fix `OpenAIModerationChain` and implement async

## PR message
Description: fix `OpenAIModerationChain` and implement async

Issues: 
- #18533 
- #13685

Dependencies: none
Twitter handle: mattflo


## Add tests and docs
 
Existing documentation is broken:
https://python.langchain.com/docs/guides/safety/moderation


- [ x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Emilia Katari <emilia@outpace.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
@ccurme
Collaborator

ccurme commented May 10, 2024

Should be fixed in langchain 0.1.20. Closing, but please let me know if the issue is not resolved and I will re-open.

@ccurme ccurme closed this as completed May 10, 2024