
llm guard #182

Merged
gyliu513 merged 1 commit into main from llmgurard on Jul 10, 2024

Conversation

@gyliu513 gyliu513 (Owner) commented Jul 10, 2024

PR Type

Enhancement


Description

  • Added a new script openai-guard.py to demonstrate the use of llm_guard with the OpenAI API.
  • Included instructions for setting up the OPENAI_API_KEY environment variable.
  • Integrated various input scanners (Anonymize, Toxicity, TokenLimit, PromptInjection) and output scanners (Deanonymize, NoRefusal, Relevance, Sensitive).
  • Implemented logic to sanitize prompts before they are sent to the OpenAI API and to validate responses after they come back (see the sketch below).
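
For context, here is a minimal sketch of how such a script is typically wired together with llm_guard and the OpenAI Python client; the exact prompt text, model name, and messages in the merged llmguard/openai-guard.py may differ.

    import os

    from openai import OpenAI

    from llm_guard import scan_output, scan_prompt
    from llm_guard.input_scanners import Anonymize, PromptInjection, TokenLimit, Toxicity
    from llm_guard.output_scanners import Deanonymize, NoRefusal, Relevance, Sensitive
    from llm_guard.vault import Vault

    # Requires: export OPENAI_API_KEY=<your key>
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    # The vault lets Deanonymize restore values that Anonymize masked earlier.
    vault = Vault()
    input_scanners = [Anonymize(vault), Toxicity(), TokenLimit(), PromptInjection()]
    output_scanners = [Deanonymize(vault), NoRefusal(), Relevance(), Sensitive()]

    prompt = "..."  # placeholder; the script supplies its own example prompt

    # Sanitize and validate the prompt before it reaches the API.
    sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
    if any(results_valid.values()) is False:
        print(f"Prompt {prompt} is not valid, scores: {results_score}")
        exit(1)

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption
        messages=[{"role": "user", "content": sanitized_prompt}],
    )
    response_text = response.choices[0].message.content

    # Validate and sanitize the model's response before using it.
    sanitized_response_text, results_valid, results_score = scan_output(
        output_scanners, sanitized_prompt, response_text
    )
    if any(results_valid.values()) is False:
        print(f"Output {response_text} is not valid, scores: {results_score}")
        exit(1)

    print(f"Output: {sanitized_response_text}")

The scanner lists and the validation checks mirror the code the review comments below reference (for example, the OpenAI client construction around line 18 and the exit(1) calls around lines 29-50 of the file).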

Changes walkthrough 📝

Relevant files

Enhancement: llmguard/openai-guard.py (+52/-0)
Add OpenAI guard script with prompt and response validation

  • Added a script to demonstrate the use of llm_guard with the OpenAI API.
  • Included environment variable setup instructions for OPENAI_API_KEY.
  • Integrated input and output scanners for prompt and response validation.
  • Implemented prompt sanitization and response validation logic.

    💡 PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    Summary by CodeRabbit

    • New Features
      • Introduced new functionality for interacting with the OpenAI API, ensuring secure handling of sensitive information through input and output scanners.


    coderabbitai bot commented Jul 10, 2024

    Warning

    Review failed: the pull request is closed.

    Walkthrough

    The new file openai-guard.py introduces functionality for securely interacting with the OpenAI API by using input and output scanners to handle sensitive information. It creates the client and scanner instances, scans prompts before API calls, and sanitizes responses received from the API.

    Changes

    File Path Change Summary
    llmguard/openai-guard.py Added OpenAI API interaction with input and output scanners to securely handle sensitive data.

    Sequence Diagram(s)

    sequenceDiagram
        participant User
        participant OpenAI_Guard
        participant OpenAI_Client
        participant Vault
        participant Input_Scanner
        participant Output_Scanner
    
        User->>OpenAI_Guard: Provide prompt
        OpenAI_Guard->>Input_Scanner: Scan prompt
        Input_Scanner->>OpenAI_Guard: Return sanitized prompt, validation results, score
        OpenAI_Guard->>OpenAI_Client: Request completion with sanitized prompt
        OpenAI_Client->>OpenAI_Guard: Return response
        OpenAI_Guard->>Output_Scanner: Scan response
        Output_Scanner->>OpenAI_Guard: Return sanitized response, validation results, score
        OpenAI_Guard->>User: Provide sanitized response
    

    Poem

    In the land of code where secrets lie,
    A guardian was born, to soar the sky.
    With scanners keen and vaults so tight,
    It guards our prompts, both day and night.
    An OpenAI shield, secure and bright!
    🌟🔐🐇


    Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

    Tips

    Chat

    There are 3 ways to chat with CodeRabbit:

    • Review comments: Directly reply to a review comment made by CodeRabbit. Examples:
      • I pushed a fix in commit <commit_id>.
      • Generate unit testing code for this file.
      • Open a follow-up GitHub issue for this discussion.
    • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
      • @coderabbitai generate unit testing code for this file.
      • @coderabbitai modularize this function.
    • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
      • @coderabbitai generate interesting stats about this repository and render them as a table.
      • @coderabbitai show all the console.log statements in this repository.
      • @coderabbitai read src/utils.ts and generate unit testing code.
      • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
      • @coderabbitai help me debug CodeRabbit configuration file.

    Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

    CodeRabbit Commands (invoked as PR comments)

    • @coderabbitai pause to pause the reviews on a PR.
    • @coderabbitai resume to resume the paused reviews.
    • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
    • @coderabbitai full review to do a full review from scratch and review all the files again.
    • @coderabbitai summary to regenerate the summary of the PR.
    • @coderabbitai resolve to resolve all the CodeRabbit review comments.
    • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
    • @coderabbitai help to get help.

    Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

    CodeRabbit Configuration File (.coderabbit.yaml)

    • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
    • Please see the configuration documentation for more information.
    • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

    Documentation and Community

    • Visit our Documentation for detailed information on how to use CodeRabbit.
    • Join our Discord Community to get help, request features, and share feedback.
    • Follow us on X/Twitter for updates and announcements.

    @gyliu513 gyliu513 merged commit cdb5033 into main Jul 10, 2024
    2 of 3 checks passed
    @gyliu513 gyliu513 deleted the llmgurard branch July 10, 2024 19:25
    @github-actions github-actions bot added the enhancement (New feature or request) label Jul 10, 2024

    PR Reviewer Guide 🔍

    ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
    🧪 No relevant tests
    🔒 Security concerns

    Sensitive information exposure:
    The script handles sensitive information such as credit card numbers and personal identifiers. It's crucial to ensure that these data are properly sanitized and that the sanitization methods are robust against various types of injection and leakage.

    ⚡ Key issues to review

    Possible Bug:
    The script uses environment variables for API keys, which is generally secure, but there should be additional checks or warnings if the API key is not set, to prevent runtime errors.

    Security Risk:
    The prompt includes sensitive information (e.g., credit card numbers, IP addresses). Even though there is a sanitization step, the initial inclusion of such data in the script might pose a risk if not handled correctly.

    Performance Concern:
    The script processes each prompt and response synchronously. For high-throughput or low-latency requirements, this might not be optimal (an asynchronous variant is sketched below).
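
    For illustration only (not part of this PR), a rough sketch of how the same guard flow could be made asynchronous, assuming the AsyncOpenAI client from the openai package and running the synchronous llm_guard scanners in a worker thread; the function name guarded_completion and the model name are hypothetical.

    import asyncio
    import os

    from openai import AsyncOpenAI
    from llm_guard import scan_output, scan_prompt

    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    async def guarded_completion(prompt, input_scanners, output_scanners):
        # llm_guard scanners are synchronous, so run them in a worker thread
        # to keep the event loop free for other requests.
        sanitized_prompt, valid, scores = await asyncio.to_thread(
            scan_prompt, input_scanners, prompt
        )
        if any(valid.values()) is False:
            raise ValueError(f"Prompt is not valid, scores: {scores}")

        response = await client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[{"role": "user", "content": sanitized_prompt}],
        )
        response_text = response.choices[0].message.content

        sanitized_output, valid, scores = await asyncio.to_thread(
            scan_output, output_scanners, sanitized_prompt, response_text
        )
        if any(valid.values()) is False:
            raise ValueError(f"Output is not valid, scores: {scores}")
        return sanitized_output

    Multiple prompts could then be processed concurrently, for example with asyncio.gather over several guarded_completion calls.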


    PR Code Suggestions ✨

    Possible issue
    Add error handling for missing API key environment variable

    To improve the robustness of the code, add error handling for the retrieval of the
    OPENAI_API_KEY from the environment. This ensures that the program can gracefully
    handle cases where the API key is not set, and provide a user-friendly error
    message.

    llmguard/openai-guard.py [18]

    -client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    +api_key = os.getenv("OPENAI_API_KEY")
    +if not api_key:
    +    raise ValueError("OPENAI_API_KEY is not set. Please set the environment variable.")
    +client = OpenAI(api_key=api_key)
     
    Suggestion importance[1-10]: 9

    Why: Adding error handling for the API key retrieval is crucial for robustness. It ensures the program can gracefully handle cases where the API key is not set, providing a user-friendly error message and preventing potential runtime errors.

    Maintainability
    Replace exit(1) with raising an exception for better error handling

    Replace the direct use of exit(1) with raising an exception. This change makes the
    code more modular and testable by allowing exceptions to be caught and handled by
    calling functions, rather than exiting the program directly.

    llmguard/openai-guard.py [31]

    -exit(1)
    +raise Exception("Invalid prompt detected.")
     
    Suggestion importance[1-10]: 8

    Why: Replacing exit(1) with raising an exception improves the modularity and testability of the code. It allows exceptions to be caught and handled by calling functions, making the code more maintainable.

    Refactor repeated validation logic into a function for better maintainability

    To enhance code readability and maintainability, consider extracting the repeated
    logic of checking results_valid values and handling errors into a helper function
    shared by the prompt and output validation sections.

    llmguard/openai-guard.py [29-50]

    -if any(results_valid.values()) is False:
    -    print(f"Prompt {prompt} is not valid, scores: {results_score}")
    -    exit(1)
    +def validate_results(results_valid, results_score, text_type, text):
    +    if any(results_valid.values()) is False:
    +        print(f"{text_type} {text} is not valid, scores: {results_score}")
    +        raise Exception(f"Invalid {text_type.lower()} detected.")
    +validate_results(results_valid, results_score, "Prompt", prompt)
     ...
    -if any(results_valid.values()) is False:
    -    print(f"Output {response_text} is not valid, scores: {results_score}")
    -    exit(1)
    +validate_results(results_valid, results_score, "Output", response_text)
     
    Suggestion importance[1-10]: 8

    Why: Refactoring the repeated validation logic into a function enhances code readability and maintainability. It reduces code duplication and centralizes the validation logic, making future updates easier.

    Security
    Improve security by encapsulating the API key retrieval in a function

    Consider using a more secure method to handle sensitive data such as API keys.
    Instead of directly fetching the API key from the environment variable in the global
    scope, use a function to encapsulate the retrieval logic. This approach enhances
    security by limiting the scope of the API key and provides a single point for
    managing access to it.

    llmguard/openai-guard.py [18]

    -client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    +def get_api_key():
    +    return os.getenv("OPENAI_API_KEY")
    +client = OpenAI(api_key=get_api_key())
     
    Suggestion importance[1-10]: 7

    Why: Encapsulating the API key retrieval in a function enhances security by limiting the scope of the API key and provides a single point for managing access to it. However, it is a minor improvement and does not address any critical security issues.

