
Conversation

@FrankLeeeee (Collaborator) commented on Dec 16, 2025

Motivation

I want to have more holistic benchmark results.

Modifications

I added six new benchmarks (GPQA, FinanceQA, SciEval, MMLU, LiveCodeBench, and SimpleQA), each with its own Benchmarker implementation, added a --name argument to bench_eagle3.py for tagging result files, and removed cmmlu:200 from the default benchmark list.

Related Issues

Accuracy Test

Benchmark & Profiling

Checklist

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @FrankLeeeee, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's benchmarking infrastructure by integrating a diverse set of new evaluation datasets and refining the benchmark execution process. It introduces a more flexible naming convention for benchmark results and incorporates specialized benchmarkers for tasks ranging from finance-specific question answering to multiple-choice reasoning and code generation. These additions aim to provide a more comprehensive and robust framework for assessing model performance across various domains.

Highlights

  • Enhanced Benchmark Customization: A new --name argument has been added to bench_eagle3.py, allowing users to specify a custom name for benchmark runs. This name will be included in the output file name for better organization and identification of results.
  • Expanded Benchmark Suite: Six new benchmark datasets have been integrated into the system: GPQA, FinanceQA, SciEval, MMLU, LiveCodeBench, and SimpleQA. Each is implemented with its own dedicated Benchmarker class, significantly broadening the scope of model evaluation capabilities.
  • Refined Benchmark Configuration: The cmmlu:200 benchmark has been removed from the default list of benchmarks in bench_eagle3.py, and associated commented-out code has been cleaned up, streamlining the default benchmark execution.
  • Modular Benchmark Implementations: Each new benchmark is implemented in a separate Python file within the benchmarks/benchmarker/ directory. These implementations adhere to a consistent Benchmarker interface, handling data loading, question generation, and (where applicable) answer extraction and accuracy computation; a minimal sketch of such an interface follows this list.
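The highlights describe the interface only at a high level, so here is a minimal sketch of what one of the new Benchmarker-style implementations and the --name output-file handling could look like. Everything below is illustrative: the class name SimpleQABenchmarker, the method names (load_data, build_prompt, extract_answer, compute_accuracy), the data format, and the result-file naming are assumptions rather than code taken from the repository.

```python
# Hypothetical sketch of a Benchmarker-style implementation; names and data
# layout are illustrative assumptions, not taken from the PR.
import argparse
import json
import re
from pathlib import Path


class SimpleQABenchmarker:
    """Loads a QA dataset, builds prompts, and scores extracted answers."""

    def __init__(self, num_samples=None):
        self.num_samples = num_samples

    def load_data(self, path):
        # Data loading: assume one JSON object per line with "question" and "answer".
        lines = Path(path).read_text().splitlines()
        rows = [json.loads(line) for line in lines if line.strip()]
        return rows[: self.num_samples] if self.num_samples else rows

    def build_prompt(self, row):
        # Question generation: wrap the raw question in a fixed instruction.
        return f"Answer the question concisely.\nQuestion: {row['question']}\nAnswer:"

    def extract_answer(self, completion):
        # Answer extraction: take the first non-empty line of the completion.
        match = re.search(r"\S.*", completion)
        return match.group(0).strip() if match else ""

    def compute_accuracy(self, rows, completions):
        # Accuracy: exact (case-insensitive) match between extracted and reference answers.
        correct = sum(
            self.extract_answer(c).lower() == r["answer"].lower()
            for r, c in zip(rows, completions)
        )
        return correct / max(len(rows), 1)


if __name__ == "__main__":
    # The --name flag mirrors the new bench_eagle3.py option as described above:
    # it only affects how the output file is named, not how the benchmark runs.
    parser = argparse.ArgumentParser()
    parser.add_argument("--name", type=str, default=None, help="custom tag for the result file")
    args = parser.parse_args()
    suffix = f"_{args.name}" if args.name else ""
    print(f"results would be written to benchmark_results{suffix}.json")
```

Under these assumptions, running the sketch with --name eagle3_run1 would direct results to benchmark_results_eagle3_run1.json.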
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
--- | --- | ---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist bot left a comment


Code Review

This pull request adds several new benchmarks. The implementation is mostly good, but there are a number of copy-paste errors in docstrings across the new files. I've also found a critical bug in the MMLU benchmark implementation that would lead to incorrect accuracy reporting, along with some other medium-severity issues like unused imports, leftover print statements, and non-deterministic behavior in one of the benchmarks. Please see my detailed comments for suggestions.

@sleepcoo merged commit ef165ac into main on Dec 16, 2025
5 checks passed
@sleepcoo deleted the feature/benchmarks branch on December 16, 2025 at 11:04
xiaomin-D pushed a commit to eigen-ai-labs/SpecForge_public that referenced this pull request Jan 10, 2026
* added more benchmarks

* polish

* polish

* polish