
[Benchmark] Add longbench_v2 #1237

Merged

Yunnglin merged 2 commits into main from add/long_bench_v2 on Mar 18, 2026

Conversation

@Yunnglin (Collaborator)

No description provided.

Copilot AI review was requested automatically on March 18, 2026 at 08:57.
@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the LongBench-v2 benchmark into EvalScope, providing a robust way to evaluate large language models on their ability to process and understand long contexts. The changes include detailed descriptions and usage instructions, benchmark metadata, and the code needed to run and test the benchmark.

Highlights

  • New Benchmark Integration: The LongBench-v2 benchmark has been added, designed to evaluate large language models' long-context understanding across a range of real-world tasks (a usage sketch follows this list).
  • Documentation Updates: Comprehensive documentation for LongBench-v2 has been added in both English and Chinese, along with updates to the supported LLM benchmark lists.
  • Benchmark Configuration: A new metadata JSON file and a Python adapter have been introduced to properly configure and integrate LongBench-v2 into the evaluation framework.
  • Testing Enhancements: A dedicated test case for LongBench-v2 has been added to ensure its correct functionality, and performance test configurations were updated to use a different model and tokenizer.
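For orientation, here is a minimal sketch of how the newly registered benchmark might be invoked, assuming EvalScope's standard TaskConfig/run_task entry points and the dataset name longbench_v2 that this PR registers; the model name and sample limit below are illustrative, not taken from this PR.

```python
# Minimal sketch: running the newly registered LongBench-v2 benchmark.
# Assumes EvalScope's TaskConfig/run_task API; the model name and `limit`
# value are illustrative, not taken from this PR.
from evalscope import TaskConfig, run_task

task_cfg = TaskConfig(
    model='qwen2.5-7b-instruct',  # hypothetical model identifier
    datasets=['longbench_v2'],    # dataset name registered by this PR
    limit=5,                      # small sample count for a quick smoke run
)

run_task(task_cfg=task_cfg)
```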



Yunnglin linked an issue on Mar 18, 2026 that may be closed by this pull request.
@gemini-code-assist (bot) left a comment


Code Review

This pull request adds support for the longbench_v2 benchmark. This includes the benchmark adapter, metadata, documentation in English and Chinese, and updates to the list of supported benchmarks. A test for the new benchmark is also added.

My review focuses on code quality and documentation consistency. I've suggested refactoring the prompt formatting logic in the Python adapter for better readability and maintainability, and flagged several formatting inconsistencies in the new documentation files, such as inconsistent spacing and missing trailing newlines.
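To make the suggestion concrete, here is a hypothetical shape such a refactor could take. This is a sketch, not the adapter's actual code: the field names (context, question, choice_A through choice_D) follow LongBench-v2's published record schema, while the template text and function name are illustrative.

```python
# Hypothetical prompt formatter for a LongBench-v2 record -- a sketch of the
# kind of refactor suggested in the review, not the adapter's actual code.
CHOICE_KEYS = ('choice_A', 'choice_B', 'choice_C', 'choice_D')

PROMPT_TEMPLATE = (
    'Please read the following text and answer the question below.\n\n'
    '{context}\n\n'
    'Question: {question}\n\n'
    '{choices}\n\n'
    'Answer with the letter of the correct choice.'
)

def format_prompt(record: dict) -> str:
    """Build a single multiple-choice prompt from one dataset record."""
    choices = '\n'.join(
        f'({key[-1]}) {record[key]}' for key in CHOICE_KEYS if record.get(key)
    )
    return PROMPT_TEMPLATE.format(
        context=record['context'],
        question=record['question'],
        choices=choices,
    )
```

Keeping the template and choice keys as module-level constants, separate from the formatting function, is what makes this kind of structure easier to read and maintain than inlined string concatenation.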

Copilot AI left a comment


Pull request overview

Adds LongBench-v2 as a new long-context multiple-choice benchmark in EvalScope, along with metadata/docs updates and a basic benchmark test entry.

Changes:

  • Introduces longbench_v2 benchmark adapter and benchmark metadata JSON.
  • Adds English/Chinese benchmark documentation and updates supported-dataset lists.
  • Adds a test_longbench_v2 benchmark test and updates one perf test model/tokenizer config.

Reviewed changes

Copilot reviewed 8 out of 9 changed files in this pull request and generated 4 comments.

Summary per file:

  • evalscope/benchmarks/longbench_v2/longbench_v2_adapter.py: New adapter registering LongBench-v2 and formatting prompts from dataset records.
  • evalscope/benchmarks/_meta/longbench_v2.json: Adds the generated benchmark meta/statistics/readme payload for LongBench-v2.
  • docs/en/get_started/supported_dataset/llm.md: Adds LongBench-v2 to the English supported LLM benchmark list.
  • docs/zh/get_started/supported_dataset/llm.md: Adds LongBench-v2 to the Chinese supported LLM benchmark list.
  • docs/en/benchmarks/longbench_v2.md: New English benchmark documentation page for LongBench-v2.
  • docs/zh/benchmarks/longbench_v2.md: New Chinese benchmark documentation page for LongBench-v2.
  • tests/benchmark/test_eval.py: Adds a unit test entry to run LongBench-v2 with limit=1 (see the sketch after this list).
  • tests/perf/test_perf.py: Updates the model/tokenizer used by test_run_perf_multi_parallel.
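As a rough picture of what the test entry noted above could look like, here is an illustrative smoke test that runs the benchmark on a single sample. It is not the file's actual contents; it again assumes the TaskConfig/run_task API, and the model name is hypothetical.

```python
# Illustrative smoke test for the new benchmark (not the PR's actual test
# code): runs LongBench-v2 on one sample to exercise the adapter end to end.
import unittest

from evalscope import TaskConfig, run_task

class TestLongBenchV2(unittest.TestCase):

    def test_longbench_v2(self):
        task_cfg = TaskConfig(
            model='qwen2.5-7b-instruct',  # hypothetical model identifier
            datasets=['longbench_v2'],
            limit=1,                      # matches the limit=1 noted above
        )
        run_task(task_cfg=task_cfg)

if __name__ == '__main__':
    unittest.main()
```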


Yunnglin merged commit ade91fa into main on Mar 18, 2026
3 checks passed


Development

Successfully merging this pull request may close these issues:

  • support MMMLU and longbench v2?

2 participants