Add some resources for demo clarity #20

Merged — 10 commits merged into main from add-model-routing on Jul 28, 2023

Conversation

@Wirg Wirg (Collaborator) commented Jul 28, 2023

No description provided.

@Wirg Wirg marked this pull request as ready for review July 28, 2023 13:00
github-actions bot commented Jul 28, 2023

LOGAF Level 3 - /home/runner/work/GenossGPT/GenossGPT/demo/main.py

  1. The selected_model.configure_open_ai_module() method is called every time the sidebar is rendered. This could lead to unnecessary network requests if the selected model does not change. Consider moving this method call outside of the sidebar rendering code.
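Example (a sketch of the suggested guard — `configure_if_changed` and `_last_configured` are hypothetical names, with the `configure` parameter standing in for `selected_model.configure_open_ai_module`):

```python
# Sketch: run the configuration side effect only when the selection
# actually changes between reruns. `_last_configured` is a hypothetical
# module-level guard; `configure` stands in for
# selected_model.configure_open_ai_module.
_last_configured = None

def configure_if_changed(model_name: str, configure) -> bool:
    """Call `configure` only if `model_name` differs from the last call."""
    global _last_configured
    if model_name == _last_configured:
        return False
    configure()
    _last_configured = model_name
    return True
```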

  2. The st.session_state object is directly modified. This could lead to unexpected behavior if the state is modified elsewhere in the codebase. Consider using a more encapsulated approach to manage the state.

Example:

class SessionState:
    def __init__(self):
        self.state = {}

    def get(self, key, default=None):
        return self.state.get(key, default)

    def set(self, key, value):
        self.state[key] = value

  3. The openai.ChatCompletion.create method is called without error handling; a failed network request will surface as an unhandled exception. Consider wrapping this call in error handling.
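The error handling suggested for the completion call could look like this sketch; `safe_chat_completion` is a hypothetical wrapper, and the `create` parameter stands in for `openai.ChatCompletion.create`:

```python
# Sketch: wrap the network call so a failure returns a readable error
# payload instead of raising an unhandled exception. The `create`
# parameter stands in for openai.ChatCompletion.create; catching bare
# Exception here is illustrative (real code would catch the library's
# specific network/API exceptions).
def safe_chat_completion(create, **kwargs):
    try:
        return create(**kwargs)
    except Exception as exc:
        return {"error": f"Chat completion failed: {exc}"}
```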

LOGAF Level 3 - /home/runner/work/GenossGPT/GenossGPT/demo/constants/model_configs.py

  1. The configure_open_ai_module method is directly modifying the openai module's attributes. This could lead to unexpected behavior if the openai module is used elsewhere in the codebase. Consider using a more encapsulated approach to configure the openai module.

Example:

import openai

class OpenAIConfigurator:
    def __init__(self, api_key, endpoint_url):
        self.api_key = api_key
        self.endpoint_url = endpoint_url

    def configure(self):
        openai.api_key = self.api_key
        openai.api_base = self.endpoint_url

  2. The AVAILABLE_MODELS list is defined at the module level. This could lead to issues if the SETTINGS or openai module state changes after this module is imported. Consider moving the initialization of this list into a function that can be called when needed.
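Lazy initialization along these lines would defer the SETTINGS-dependent work until first use; `load_models_from_settings` and the model names are hypothetical stand-ins for the logic that currently runs at import time:

```python
from functools import lru_cache

def load_models_from_settings() -> list:
    # Hypothetical stand-in for the SETTINGS/openai-dependent logic that
    # model_configs.py currently runs at module import.
    return ["openai/gpt-3.5-turbo", "custom/hf-endpoint"]

@lru_cache(maxsize=1)
def get_available_models() -> tuple:
    # Built on first call rather than at import time, then cached.
    return tuple(load_models_from_settings())
```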

LOGAF Level 3 - /home/runner/work/GenossGPT/GenossGPT/demo/widgets/genoss_backend_connection.py

  1. The display_message_if_failing_to_access_genoss and add_custom_hf_endpoint_if_available_or_display_warning functions are directly modifying the Streamlit UI. This could lead to unexpected behavior if these functions are called multiple times. Consider returning the messages from these functions and displaying them in the Streamlit UI separately.

Example:

from requests.exceptions import RequestException

def display_message_if_failing_to_access_genoss() -> str:
    try:
        SETTINGS.ping_genoss_backend()
        return ''
    except RequestException:
        return 'The demo could not access the Genoss backend.'

  2. The AVAILABLE_MODELS list is modified in place. This could lead to unexpected behavior if the list is used elsewhere in the codebase. Consider returning the new model from add_custom_hf_endpoint_if_available_or_display_warning and appending it to AVAILABLE_MODELS separately.
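Returning the optional model and letting the caller extend the list could be sketched as follows; all names here are hypothetical stand-ins for the widget's current in-place mutation of AVAILABLE_MODELS:

```python
def custom_hf_endpoint_model(endpoint_url: str):
    # Return a model descriptor when an endpoint is configured, else None;
    # a stand-in for add_custom_hf_endpoint_if_available_or_display_warning.
    if not endpoint_url:
        return None
    return {"name": "custom-hf-endpoint", "endpoint": endpoint_url}

def with_custom_model(available_models: list, endpoint_url: str) -> list:
    # Build a new list instead of mutating AVAILABLE_MODELS in place.
    extra = custom_hf_endpoint_model(endpoint_url)
    return available_models + [extra] if extra else list(available_models)
```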



Powered by Code Review GPT

Resolved review threads: Makefile, demo/constants/settings.py, pyproject.toml
@Wirg Wirg merged commit e299352 into main Jul 28, 2023
7 checks passed
@Wirg Wirg deleted the add-model-routing branch July 28, 2023 14:19