fix: fix bds baseline #108
Conversation
Summary of Changes

Hello @ChenZiHong-Gavin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request streamlines the initialization of the Language Model (LLM) client within the

Highlights
/gemini review
Code Review
This pull request refactors the BDS baseline to decouple it from a specific LLM client implementation. By replacing OpenAIClient with the BaseLLMWrapper abstraction and using a factory function init_llm for instantiation, the code becomes more modular and easier to maintain. The changes are well-implemented. I have one suggestion to further improve the BDS class constructor for better flexibility and code clarity.
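To make the pattern under review concrete, here is a minimal, hypothetical sketch of what a `BaseLLMWrapper` abstraction plus an `init_llm` factory could look like. The real classes live in the project's codebase; the environment-variable name, the `generate` method, and the `OpenAIClient` constructor shown here are illustrative assumptions, not the project's actual API.

```python
# Hypothetical sketch only -- the project's real BaseLLMWrapper and
# init_llm may differ in names, signatures, and configuration keys.
import os
from abc import ABC, abstractmethod


class BaseLLMWrapper(ABC):
    """Common interface so baselines don't depend on one concrete client."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class OpenAIClient(BaseLLMWrapper):
    """One possible concrete implementation behind the abstraction."""

    def __init__(self, model: str):
        self.model = model

    def generate(self, prompt: str) -> str:
        # A real client would call an API here; this is a stand-in.
        return f"[{self.model}] response to: {prompt}"


def init_llm(role: str):
    """Factory: build a client from environment config, or None if unset."""
    model = os.environ.get(f"{role.upper()}_MODEL")  # assumed config key
    if model is None:
        return None
    return OpenAIClient(model)
```

With this shape, callers like the BDS baseline depend only on `BaseLLMWrapper`, and swapping the backing client is a change inside `init_llm` rather than in every baseline.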
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Code Review
This pull request refactors the BDS baseline to use a more generic BaseLLMWrapper and a factory function init_llm for LLM client initialization. This is a good improvement for modularity and decouples the baseline from a specific LLM client implementation. The changes simplify the instantiation of the BDS class. I've suggested an improvement to the new __init__ method to make it more robust against configuration errors and to restore the configurability of max_concurrent that was present in the previous dataclass-based implementation.
```python
def __init__(self, llm_client: BaseLLMWrapper = None, max_concurrent: int = 1000):
    self.llm_client: BaseLLMWrapper = llm_client or init_llm(
        "synthesizer"
    )
    self.max_concurrent: int = max_concurrent
```
This refactoring from a `@dataclass` to a regular class is a good step towards better dependency management. However, there are a couple of improvements that could be made:

- Robustness: The `init_llm` function can return `None` if the synthesizer configuration is missing from the environment variables. This would cause `self.llm_client` to be `None`, leading to a runtime `AttributeError` later. It's best to add a check and fail early if the client can't be initialized.
- Configurability: The `max_concurrent` attribute is now hardcoded to `1000`. The previous dataclass implementation allowed this to be configured at instantiation. It would be beneficial to restore this flexibility by making it an `__init__` parameter.
```python
def __init__(
    self,
    synthesizer_llm_client: BaseLLMWrapper = None,
    max_concurrent: int = 1000,
):
    self.llm_client: BaseLLMWrapper = synthesizer_llm_client or init_llm(
        "synthesizer"
    )
    if not self.llm_client:
        raise ValueError(
            "LLM client for synthesizer could not be initialized. "
            "Check your environment variables for the SYNTHESIZER backend."
        )
    self.max_concurrent: int = max_concurrent
```

* fix: fix bds baseline
* Update baselines/BDS/bds.py

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
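The fail-fast constructor suggested in the review can be exercised with a tiny, self-contained sketch. The `BDS` and `StubClient` classes below are minimal stand-ins written for illustration (the real `BDS` lives in `baselines/BDS/bds.py` and also falls back to `init_llm`, which is omitted here); they only demonstrate the two behaviors the reviewer asked for: injecting a client and raising early when none is available.

```python
# Illustrative stand-ins only -- not the project's real classes.
class StubClient:
    """Pretend BaseLLMWrapper implementation for dependency injection."""

    def generate(self, prompt: str) -> str:
        return "stubbed"


class BDS:
    def __init__(self, synthesizer_llm_client=None, max_concurrent: int = 1000):
        # In the real code this falls back to init_llm("synthesizer");
        # the fallback is omitted to keep the sketch self-contained.
        self.llm_client = synthesizer_llm_client
        if not self.llm_client:
            # Fail early instead of hitting AttributeError later.
            raise ValueError(
                "LLM client for synthesizer could not be initialized."
            )
        self.max_concurrent = max_concurrent


# Injecting a stub client restores both configurability and testability.
bds = BDS(synthesizer_llm_client=StubClient(), max_concurrent=50)
```

Constructing `BDS()` with no client raises `ValueError` immediately, which is the early failure the review recommends over a deferred `AttributeError`.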
No description provided.