
add support for gpt-4-turbo-preview model #820

Merged

Conversation

@riya-amemiya (Contributor) commented Mar 26, 2024

User description

Use gpt-4-turbo-preview to ensure the latest version is always used.

For those who want to be automatically upgraded to new GPT-4 Turbo preview versions, we are also introducing a new gpt-4-turbo-preview model name alias, which will always point to our latest GPT-4 Turbo preview model.

https://openai.com/blog/new-embedding-models-and-api-updates#updated-gpt-4-turbo-preview
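
As a minimal illustration (independent of how PR-Agent itself invokes models), the alias is used exactly like any other model name with the OpenAI Python SDK; the snippet below is a sketch, not code from this PR:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4-turbo-preview",  # alias; resolves to the latest GPT-4 Turbo preview snapshot
        messages=[{"role": "user", "content": "Summarize this pull request."}],
    )
    print(response.choices[0].message.content)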


Type

enhancement


Description

  • Added gpt-4-turbo-preview model with a token limit of 128000 to the model token limits configuration.
  • Updated the default model settings in configuration.toml to use gpt-4-turbo-preview for both model and model_turbo fields.

Changes walkthrough

Relevant files

Enhancement
__init__.py (pr_agent/algo/__init__.py): Add Support for GPT-4 Turbo Preview Model
  • Added support for the gpt-4-turbo-preview model with a token limit of 128000.
  • +1/-0

Configuration changes
configuration.toml (pr_agent/settings/configuration.toml): Update Default Model Settings to GPT-4 Turbo Preview
  • Updated the default model and model_turbo settings to gpt-4-turbo-preview.
  • +2/-2
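
For context, the __init__.py change boils down to one new entry in the model-to-token-limit mapping. A minimal sketch of the relevant lines, with the neighbouring entries taken from the suggestion diff further down (the dict name MAX_TOKENS is an assumption, not confirmed on this page):

    # pr_agent/algo/__init__.py (sketch; dict name assumed)
    MAX_TOKENS = {
        # ... existing model entries ...
        'gpt-4-1106-preview': 128000,   # 128K, but may be limited by config.max_model_tokens
        'gpt-4-0125-preview': 128000,   # 128K, but may be limited by config.max_model_tokens
        'gpt-4-turbo-preview': 128000,  # new in this PR: alias that tracks the latest GPT-4 Turbo preview
    }

Because gpt-4-turbo-preview is an alias on OpenAI's side, the 128000 limit is simply carried over from the dated preview snapshots it currently points to.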

PR-Agent usage:
Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions.

@codiumai-pr-agent-pro bot added the enhancement (New feature or request) label on Mar 26, 2024

    PR Description updated to latest commit (d064a35)


    PR Review

    ⏱️ Estimated effort to review [1-5]

    2, because the changes are straightforward and involve updating configuration settings and adding a new model to an existing list. The logic and structure of the code remain unchanged, making it easier to review.

    🏅 Score

    95

    🧪 Relevant tests

    No

    🔍 Possible issues

    Possible Oversight: The comment for the model field in configuration.toml still mentions "gpt-4" as the default model, but the actual default has been changed to "gpt-4-turbo-preview". This might cause confusion and should be updated for clarity.

    🔒 Security concerns

    No

    🔀 Multiple PR themes

    No


    ✨ Review tool usage guide:

    Overview:
The review tool scans the PR code changes and generates a PR review that includes several types of feedback, such as possible PR issues, security threats, and relevant tests in the PR. More feedback can be added by configuring the tool.

    The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR.

    • When commenting, to edit configurations related to the review tool (pr_reviewer section), use the following template:
    /review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
    
    [pr_reviewer]
    some_config1=...
    some_config2=...
    

    See the review usage page for a comprehensive guide on using this tool.


    codiumai-pr-agent-pro bot commented Mar 26, 2024

    PR Code Suggestions

Category: Maintainability
    Extract repeated literal values into a constant variable for maintainability.

    Consider extracting the token count (128000) into a constant variable since it is repeated
    multiple times. This will make future updates easier and the code more maintainable.

pr_agent/algo/__init__.py [11-13]

    -'gpt-4-1106-preview': 128000, # 128K, but may be limited by config.max_model_tokens
    -'gpt-4-0125-preview': 128000,  # 128K, but may be limited by config.max_model_tokens
    -'gpt-4-turbo-preview': 128000,  # 128K, but may be limited by config.max_model_tokens
    +MAX_TOKENS = 128000  # 128K, but may be limited by config.max_model_tokens
    +'gpt-4-1106-preview': MAX_TOKENS,
    +'gpt-4-0125-preview': MAX_TOKENS,
    +'gpt-4-turbo-preview': MAX_TOKENS,
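
Taken literally, the suggested diff would place the MAX_TOKENS assignment inside the dict literal, which is not valid Python. A runnable sketch of the same idea, with the constant defined at module level before the dict (names other than the model keys are illustrative):

    # Sketch only: pull the shared 128K limit into one module-level constant.
    GPT4_TURBO_PREVIEW_TOKENS = 128000  # 128K, but may be limited by config.max_model_tokens

    MAX_TOKENS = {
        # ... existing model entries ...
        'gpt-4-1106-preview': GPT4_TURBO_PREVIEW_TOKENS,
        'gpt-4-0125-preview': GPT4_TURBO_PREVIEW_TOKENS,
        'gpt-4-turbo-preview': GPT4_TURBO_PREVIEW_TOKENS,
    }

The constant is renamed here because the suggestion's own choice, MAX_TOKENS, would shadow the mapping if that is indeed the dict's name.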
     
Category: Best practice
    Remove commented-out code to avoid confusion and maintain a clean configuration file.

    It's recommended to remove the commented-out model setting (# "gpt-4-turbo-preview") to
    avoid confusion and keep the configuration file clean. If this is meant for documentation
    purposes, consider adding a separate documentation section or file.

    pr_agent/settings/configuration.toml [2]

    -model="gpt-4" # "gpt-4-turbo-preview"
    +model="gpt-4"
     

    ✨ Improve tool usage guide:

    Overview:
    The improve tool scans the PR code changes, and automatically generates suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on a PR.

    • When commenting, to edit configurations related to the improve tool (pr_code_suggestions section), use the following template:
    /improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
    
    [pr_code_suggestions]
    some_config1=...
    some_config2=...
    

    See the improve usage page for a comprehensive guide on using this tool.

    model="gpt-4" # "gpt-4-0125-preview"
    model_turbo="gpt-4-0125-preview"
    model="gpt-4" # "gpt-4-turbo-preview"
    model_turbo="gpt-4-turbo-preview"
mrT23 (Collaborator):
Definitely not this.

The default model should be fixed (pinned).

I don't want the default model to be dangling and changing without proper checks.

mrT23 (Collaborator):

Any user can change this on their local deployment, if they wish.
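
For anyone who does want the auto-upgrading alias, the local change is just the two settings discussed in this PR. A minimal sketch of such an edit to pr_agent/settings/configuration.toml (the [config] section header is an assumption; only the model and model_turbo keys are shown on this page):

    [config]                           # section name assumed, not shown in this PR
    model="gpt-4-turbo-preview"        # opt in to the auto-upgrading alias locally
    model_turbo="gpt-4-turbo-preview"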

riya-amemiya (Contributor, Author):

    Thank you for bringing this to my attention. I completely agree with your point.

@riya-amemiya changed the title from "add support for gpt-4-turbo-preview model and update default settings" to "add support for gpt-4-turbo-preview model" on Mar 26, 2024
@mrT23 merged commit 26c4a98 into Codium-ai:main on Mar 26, 2024