
feature: Integrate accuracy into run benchmarks (part of #4) #93

Merged
arekay-nv merged 3 commits into main from arekay/integrate_accuracy_datasets on Jan 12, 2026

Conversation

arekay-nv (Collaborator) commented Jan 12, 2026

What does this PR do?

Integrates accuracy runs into the main benchmark execution by reading accuracy configurations from the config file. A sample sglang_gptoss configuration file is included.
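
The thread below names an AccuracyConfig struct added to the Dataset schema. As a rough sketch of what such a schema could look like with msgspec (field names beyond the evaluation method, ground truth column, and extractor mentioned in this PR are guesses, not the actual schema):

```python
# Rough sketch of an accuracy section in the dataset config, as msgspec structs.
# Field names are illustrative guesses, not the PR's actual schema.
import msgspec


class AccuracyConfig(msgspec.Struct):
    evaluation_method: str                   # e.g. "pass_at_1"
    ground_truth_column: str | None = None   # falls back to a default when unset
    extractor: str | None = None             # registry name, e.g. "boxed_math_extractor"


class Dataset(msgspec.Struct):
    name: str
    path: str | None = None                  # optional for predefined datasets
    accuracy: AccuracyConfig | None = None   # present only for accuracy datasets
```

A dataset entry without an accuracy block would then run as a pure performance benchmark, while one that has it is also scored.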

Type of change

  • Bug fix
  • New feature
  • Documentation update
  • Refactor/cleanup

Related issues

Part of #4.
Testing

  • Tests added/updated
  • All tests pass locally
  • Manual testing completed

Checklist

  • Code follows project style
  • Pre-commit hooks pass
  • Documentation updated (if needed)

arekay-nv requested a review from a team as a code owner January 12, 2026 16:07
Copilot AI review requested due to automatic review settings January 12, 2026 16:07
github-actions bot commented Jan 12, 2026

MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅

github-actions bot requested a review from nvzhihanj January 12, 2026 16:07
gemini-code-assist bot commented

Summary of Changes

Hello @arekay-nv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates accuracy evaluation into the benchmarking framework. Accuracy runs, previously separate from performance runs, can now be defined and executed alongside them from a unified configuration. The result is a more complete view of model behavior: throughput, latency, and accuracy metrics come out of a single benchmark run.

Highlights

  • Unified Benchmarking: Integrated accuracy evaluation directly into the main benchmark execution, allowing both performance and accuracy metrics to be configured and run from a single YAML file.
  • Configurable Accuracy: Introduced new configuration options (AccuracyConfig) within the Dataset schema to specify evaluation methods, ground truth columns, and response extractors for accuracy datasets.
  • Dynamic Extractor Loading: Implemented a registration system for Extractor subclasses, enabling dynamic lookup and instantiation of extractors (e.g., abcd_extractor, boxed_math_extractor) based on configuration; see the sketch after this list.
  • Enhanced Data Handling: Updated data loading mechanisms to ensure compatibility with various data types (e.g., pyarrow for Parquet, converting NumPy arrays to lists for msgspec) and improved handling of predefined datasets.
  • Comprehensive Reporting: Modified the benchmark runner to process accuracy datasets, calculate scores using PassAt1Scorer, and log the results alongside performance metrics.
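
To make the registration highlight above concrete, here is a minimal sketch of a name-based registry for Extractor subclasses. The decorator form and the names register and get_extractor are assumptions for illustration; the PR's actual API may differ.

```python
# Minimal sketch of a name-based registry for Extractor subclasses.
# NOTE: `register` and `get_extractor` are hypothetical names illustrating
# the pattern, not the PR's exact API.
from abc import ABC, abstractmethod

_EXTRACTOR_REGISTRY: dict[str, type["Extractor"]] = {}


class Extractor(ABC):
    @abstractmethod
    def extract(self, response: str) -> str:
        """Pull the final answer out of a raw model response."""

    @classmethod
    def register(cls, name: str):
        """Class decorator that records a subclass under a lookup name."""
        def _wrap(subcls: type["Extractor"]) -> type["Extractor"]:
            _EXTRACTOR_REGISTRY[name] = subcls
            return subcls
        return _wrap


def get_extractor(name: str) -> Extractor:
    """Instantiate the extractor registered under `name` (e.g. from config)."""
    try:
        return _EXTRACTOR_REGISTRY[name]()
    except KeyError:
        raise ValueError(f"Unknown extractor: {name!r}") from None


@Extractor.register("abcd_extractor")
class AbcdExtractor(Extractor):
    def extract(self, response: str) -> str:
        # Illustrative: take the last A/B/C/D letter appearing in the response.
        letters = [c for c in response if c in "ABCD"]
        return letters[-1] if letters else ""
```

With this in place, a config entry naming abcd_extractor can be resolved at load time with get_extractor("abcd_extractor").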


Copilot AI left a comment

Pull request overview

This PR integrates accuracy evaluation into the main benchmark execution by reading accuracy configurations from the config file. It enables running accuracy benchmarks alongside performance benchmarks by introducing dataset-specific accuracy configurations and a registration system for extractors.

Changes:

  • Added registration system for Extractor classes to allow lookup by name
  • Modified dataset loading to support both performance and accuracy datasets from config
  • Updated scoring to handle optional ground truth columns with default fallback (sketched below)
  • Added example configuration file demonstrating accuracy dataset setup
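
A minimal sketch of the optional ground-truth fallback noted in the list above, under assumed names (DEFAULT_GROUND_TRUTH_COLUMN, score_dataset, and the "prediction" key); only the behavior, falling back to a default column and asserting it exists, comes from the summary:

```python
# Sketch of a default-fallback for the ground truth column.
# All names here are hypothetical stand-ins for the PR's scoring code.
DEFAULT_GROUND_TRUTH_COLUMN = "ground_truth"


def score_dataset(rows: list[dict], ground_truth_column: str | None = None) -> float:
    column = ground_truth_column or DEFAULT_GROUND_TRUTH_COLUMN
    # Fail fast if the configured (or default) column is missing from the data.
    assert all(column in row for row in rows), (
        f"ground truth column {column!r} missing from dataset rows"
    )
    correct = sum(1 for row in rows if row["prediction"] == row[column])
    return correct / len(rows) if rows else 0.0
```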

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 5 comments.

Show a summary per file

  • src/inference_endpoint/evaluation/scoring.py: Added default ground truth column and assertion for validation
  • src/inference_endpoint/evaluation/extractor.py: Implemented registration system for Extractor subclasses
  • src/inference_endpoint/evaluation/__init__.py: Exported Extractor class
  • src/inference_endpoint/dataset_manager/predefined/gpqa/__init__.py: Added dataset_id registration and fixed return type
  • src/inference_endpoint/dataset_manager/predefined/aime25/__init__.py: Added dataset_id registration and fixed return type
  • src/inference_endpoint/dataset_manager/factory.py: Refactored to accept Dataset config object instead of individual parameters
  • src/inference_endpoint/dataset_manager/dataset.py: Fixed parquet loading and numpy array serialization issues (see the sketch below)
  • src/inference_endpoint/dataset_manager/__init__.py: Exported predefined dataset classes
  • src/inference_endpoint/config/schema.py: Added AccuracyConfig schema and made dataset path optional
  • src/inference_endpoint/commands/benchmark.py: Integrated accuracy dataset loading and evaluation into benchmark execution
  • examples/04_GPTOSS120B_Example/sglang_gptoss_120b_example.yaml: Added example configuration with accuracy datasets
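
The dataset.py fix above concerns serialization: msgspec cannot encode NumPy arrays directly, so values read from Parquet (via pyarrow) need converting to plain Python types first. A minimal sketch of that conversion, with a hypothetical helper name:

```python
# Sketch: normalize pyarrow/NumPy values into msgspec-encodable Python types.
# `load_parquet_rows` is a hypothetical helper, not the PR's actual function.
import numpy as np
import pyarrow.parquet as pq


def load_parquet_rows(path: str) -> list[dict]:
    # Read with pyarrow, then go through pandas, which keeps list columns
    # as NumPy arrays and numeric scalars as NumPy types.
    records = pq.read_table(path).to_pandas().to_dict(orient="records")
    for row in records:
        for key, value in row.items():
            if isinstance(value, np.ndarray):
                row[key] = value.tolist()   # msgspec cannot encode ndarrays
            elif isinstance(value, np.generic):
                row[key] = value.item()     # unwrap NumPy scalars
    return records


# Rows are now encodable without a custom enc_hook, e.g.:
# payload = msgspec.json.encode(load_parquet_rows("data.parquet"))
```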


gemini-code-assist bot left a comment

Code Review

This pull request integrates accuracy evaluation into the benchmark runs, which is a great feature addition. The changes introduce new configuration options for accuracy datasets and the logic to process and score them. My review focuses on improving the robustness and maintainability of the new implementation. I've identified a few critical issues related to potential UnboundLocalError and IndexError that could occur depending on the dataset configuration, which should be addressed. I've also included suggestions to make the code more dynamic and less reliant on hardcoded values, and to improve overall code quality.
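
For readers unfamiliar with the failure mode the review flags: an UnboundLocalError of this kind typically arises when a variable is assigned only inside a conditional branch that a particular dataset configuration never takes. A generic illustration, not code from this PR:

```python
# Illustrative only: the hazard pattern flagged in the review.
def summarize(datasets: list[dict]) -> str:
    for ds in datasets:
        if ds.get("accuracy"):
            score = ds["accuracy"]["score"]  # assigned only on this branch
    return f"last accuracy score: {score}"   # UnboundLocalError if no dataset had accuracy


# Safer: initialize before the loop (or raise a clear error when absent).
def summarize_safe(datasets: list[dict]) -> str:
    score = None
    for ds in datasets:
        if ds.get("accuracy"):
            score = ds["accuracy"]["score"]
    return f"last accuracy score: {score}"
```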

arekay-nv force-pushed the arekay/integrate_accuracy_datasets branch from 2d05e0e to 6e4bad7 on January 12, 2026 16:19
Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
arekay-nv force-pushed the arekay/integrate_accuracy_datasets branch from 6e4bad7 to d67ae1c on January 12, 2026 16:31
Copilot AI review requested due to automatic review settings January 12, 2026 16:31
Copilot AI left a comment

Pull request overview

Copilot reviewed 11 out of 11 changed files in this pull request and generated 4 comments.



Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
Signed-off-by: Rashid Kaleem <230885705+arekay-nv@users.noreply.github.com>
Copilot AI review requested due to automatic review settings January 12, 2026 19:28
Copilot AI left a comment

Pull request overview

Copilot reviewed 11 out of 11 changed files in this pull request and generated 2 comments.



arekay-nv requested a review from a team January 12, 2026 19:34
arekay-nv merged commit 0682950 into main Jan 12, 2026
4 checks passed
github-actions bot locked and limited conversation to collaborators Jan 12, 2026
@@ -196,8 +196,7 @@ def create_transforms(cls) -> list:
]
Collaborator commented:

In the AddStaticColumns transform, if streaming is not enabled, this key should be removed. Can you add a stream: bool = True kwarg to both get_dataloader and create_transforms?
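
A minimal sketch of what the requested change could look like. Only the stream kwarg and the conditional AddStaticColumns key come from the comment; the surrounding class and signatures are assumed for illustration:

```python
# Sketch of the requested change: thread a `stream` kwarg through both
# methods so AddStaticColumns only injects the key when streaming is on.
# AddStaticColumns and ExampleDataset are stand-ins for illustration.
class AddStaticColumns:
    def __init__(self, columns: dict):
        self.columns = columns


class ExampleDataset:
    @classmethod
    def create_transforms(cls, stream: bool = True) -> list:
        static = {"max_completion_tokens": 1024}
        if stream:
            static["stream"] = True  # omit the key entirely for non-streaming runs
        return [AddStaticColumns(static)]

    @classmethod
    def get_dataloader(cls, stream: bool = True):
        transforms = cls.create_transforms(stream=stream)
        # ... build and return the dataloader using `transforms`
        return transforms
```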

api_key: null
api_type: "sglang"

report_dir: "results/sglang_gptoss_120b_benchmark_mlperf_13_JAN_26/"
Collaborator commented:

nit - remove the date from the report_dir

metadata = {
"model": model_name,
"stream": enable_streaming,
"max_completion_tokens": max_tokens,
Collaborator commented:

This only works for the OpenAI adapter, btw.

nvzhihanj changed the title from "feature: Integrate accuracy into run benchmarks" to "feature: Integrate accuracy into run benchmarks (part of #4)" on Jan 13, 2026
arekay-nv deleted the arekay/integrate_accuracy_datasets branch January 21, 2026 03:35