
Conversation

@ChenZiHong-Gavin
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @ChenZiHong-Gavin, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines the initialization of the Language Model (LLM) client within the BDS baseline. It refactors the BDS class to use a more generic LLM wrapper and a dedicated initialization utility, reducing boilerplate and improving modularity in how LLM clients are configured and managed.

Highlights

  • Dependency Refactoring: Removed direct imports for os and dataclasses, and replaced specific OpenAIClient, Tokenizer imports with more abstract BaseLLMWrapper and init_llm from graphgen.bases and graphgen.operators.
  • BDS Class Modernization: Transformed the BDS class from a @dataclass to a standard class with an __init__ method. This change allows for more flexible initialization of the llm_client, defaulting to init_llm("synthesizer") if no client is provided.
  • Simplified LLM Client Setup: The main function now initializes the BDS class without explicitly creating and passing an OpenAIClient instance, relying on the BDS class's internal init_llm call.
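The refactor described in the highlights can be sketched as follows. This is a minimal stand-in, not the actual GraphGen code: `BaseLLMWrapper` and `init_llm` are simplified placeholders for the real classes in `graphgen.bases` and `graphgen.operators`, whose signatures may differ.

```python
# Minimal sketch of the dataclass-to-plain-class refactor described above.
# BaseLLMWrapper and init_llm are stand-ins for graphgen.bases / graphgen.operators.

class BaseLLMWrapper:
    """Abstract LLM client interface (placeholder)."""
    def generate(self, prompt: str) -> str:
        raise NotImplementedError

def init_llm(role: str) -> BaseLLMWrapper:
    """Factory that builds a client from environment config (placeholder).

    The real init_llm reads environment variables; here we return a dummy
    client that just echoes the prompt.
    """
    class _Echo(BaseLLMWrapper):
        def generate(self, prompt: str) -> str:
            return f"[{role}] {prompt}"
    return _Echo()

class BDS:
    # Previously a @dataclass requiring an explicit OpenAIClient; now a
    # plain class that falls back to init_llm("synthesizer") when no
    # client is supplied.
    def __init__(self, llm_client: BaseLLMWrapper = None, max_concurrent: int = 1000):
        self.llm_client = llm_client or init_llm("synthesizer")
        self.max_concurrent = max_concurrent

bds = BDS()  # no explicit client needed
print(bds.llm_client.generate("hello"))  # → [synthesizer] hello
```

This mirrors how `main` can now instantiate `BDS()` directly instead of constructing and passing an `OpenAIClient` itself.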
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@ChenZiHong-Gavin
Collaborator Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the BDS baseline to decouple it from a specific LLM client implementation. By replacing OpenAIClient with the BaseLLMWrapper abstraction and using a factory function init_llm for instantiation, the code becomes more modular and easier to maintain. The changes are well-implemented. I have one suggestion to further improve the BDS class constructor for better flexibility and code clarity.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@ChenZiHong-Gavin merged commit 0ea3d26 into main Dec 2, 2025
4 checks passed
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the BDS baseline to use a more generic BaseLLMWrapper and a factory function init_llm for LLM client initialization. This is a good improvement for modularity and decouples the baseline from a specific LLM client implementation. The changes simplify the instantiation of the BDS class. I've suggested an improvement to the new __init__ method to make it more robust against configuration errors and to restore the configurability of max_concurrent that was present in the previous dataclass-based implementation.

Comment on lines +56 to +60
def __init__(self, llm_client: BaseLLMWrapper = None, max_concurrent: int = 1000):
    self.llm_client: BaseLLMWrapper = llm_client or init_llm(
        "synthesizer"
    )
    self.max_concurrent: int = max_concurrent
Contributor


Severity: high

This refactoring from a @dataclass to a regular class is a good step towards better dependency management. However, there are a couple of improvements that could be made:

  1. Robustness: The init_llm function can return None if the synthesizer configuration is missing from the environment variables. This would cause self.llm_client to be None, leading to a runtime AttributeError later. It's best to add a check and fail early if the client can't be initialized.
  2. Configurability: The max_concurrent attribute is now hardcoded to 1000. The previous dataclass implementation allowed this to be configured at instantiation. It would be beneficial to restore this flexibility by making it an __init__ parameter.
    def __init__(
        self,
        synthesizer_llm_client: BaseLLMWrapper = None,
        max_concurrent: int = 1000,
    ):
        self.llm_client: BaseLLMWrapper = synthesizer_llm_client or init_llm(
            "synthesizer"
        )
        if not self.llm_client:
            raise ValueError(
                "LLM client for synthesizer could not be initialized. "
                "Check your environment variables for the SYNTHESIZER backend."
            )
        self.max_concurrent: int = max_concurrent
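The benefit of the guard in the suggestion above is that a missing synthesizer configuration fails fast at construction time instead of surfacing later as an `AttributeError` on a `None` client. The sketch below illustrates this failure mode; `init_llm` here is a stand-in that returns `None` to simulate missing environment configuration, not the real GraphGen factory.

```python
# Sketch: fail-fast guard vs. a deferred AttributeError.
# init_llm simulates a factory that returns None when the SYNTHESIZER
# environment configuration is missing.

def init_llm(role: str):
    return None  # simulate missing environment configuration

class BDS:
    def __init__(self, synthesizer_llm_client=None, max_concurrent: int = 1000):
        self.llm_client = synthesizer_llm_client or init_llm("synthesizer")
        if not self.llm_client:
            # Without this check, self.llm_client would silently be None and
            # the first call such as self.llm_client.generate(...) would raise
            # an AttributeError far from the root cause.
            raise ValueError(
                "LLM client for synthesizer could not be initialized. "
                "Check your environment variables for the SYNTHESIZER backend."
            )
        self.max_concurrent = max_concurrent

try:
    BDS()
except ValueError as e:
    print(f"caught at construction time: {e}")
```

Raising in `__init__` keeps the error close to its cause (the missing configuration), which is the robustness point the review makes.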

CHERRY-ui8 pushed a commit to CHERRY-ui8/GraphGen that referenced this pull request Dec 17, 2025
* fix: fix bds baseline

* Update baselines/BDS/bds.py

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@ChenZiHong-Gavin deleted the fix/fix-bds-baseline branch December 19, 2025 02:30
