
【Feature】add SWE-bench example configs and bilingual user guide#191

Merged
SJTUyh merged 4 commits into AISBench:master from GaoHuaZhang:swe_bench_support
Apr 10, 2026
Conversation

@GaoHuaZhang
Collaborator

@GaoHuaZhang GaoHuaZhang commented Mar 14, 2026

PR Type / PR类型

  • Feature(功能新增)
  • Bugfix(Bug 修复)
  • Docs(文档更新)
  • CI/CD(持续集成/持续部署)
  • Refactor(代码重构)
  • Perf(性能优化)
  • Dependency(依赖项更新)
  • Test-Cases(测试用例更新)
  • Other(其他)

Related Issue | 关联 Issue

N/A (if applicable, change to: Fixes # / Relates to #)

🔍 Motivation / 变更动机

Provide ready-to-use example configs for running SWE-bench end to end in ais_bench (mini-swe-agent inference + SWE-bench harness evaluation), plus Chinese and English documentation, to lower the cost of first-time setup and troubleshooting.

📝 Modification / 修改内容

  • New directory: ais_bench/configs/swe_bench_examples/
  • 4 example configs (identical structure; only the dataset name / abbr differ):
    • mini_swe_agent_swe_bench_lite.py → lite
    • mini_swe_agent_swe_bench_verified.py → verified
    • mini_swe_agent_swe_bench_full.py → full
    • mini_swe_agent_swe_bench_multilingual.py → multilingual
  • Each config includes: SWEBenchDataset, SWEBenchInferTask / SWEBenchEvalTask, SWEBenchSummarizer, NaivePartitioner, LocalRunner; defaults step_limit=200 and path="" so the dataset loads online from Hugging Face; users must fill in model-side fields such as model / url / api_key.
  • Docs: README_en.md and README_zh_cn.md, covering a capability overview, dependencies (mini-swe-agent, SWE-bench harness, Docker), minimal configuration, run commands (all / infer / eval, --reuse), the output directory and metrics, and common SWEB-* error codes with FAQ references.

📐 Associated Test Results / 关联测试结果

CI links or local smoke-test results to be added (for example, run ais_bench ais_bench/configs/swe_bench_examples/mini_swe_agent_swe_bench_lite.py from the repository root).

⚠️ BC-breaking (Optional) / 向后不兼容变更(可选)

No. This PR only adds new files; there are no breaking changes.

⚠️ Performance degradation (Optional) / 性能下降(可选)

None.

🌟 Use cases (Optional) / 使用案例(可选)

  • Use the lite config to quickly validate the inference and evaluation pipeline, then switch to verified / full / multilingual.
  • Generate patches only: ais_bench .../mini_swe_agent_swe_bench_lite.py -m infer; evaluate existing predictions: -m eval; add --reuse to resume after an interruption.
  • See README_zh_cn.md / README_en.md in the same directory for detailed steps and troubleshooting.

✅ Checklist / 检查列表

Before PR:

  • Pre-commit or other linting tools are used to fix the potential lint issues. / 使用预提交或其他 linting 工具来修复潜在的 lint 问题。
  • Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests. / 修复的 Bug 已完全由单元测试覆盖,导致 Bug 的情况应在单元测试中添加。
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness. / 此拉取请求中的修改已完全由单元测试覆盖。如果不是,请添加更多单元测试以确保正确性。
  • All relevant documentation (API docs, docstrings, example tutorials) has been updated to reflect these changes. / 所有相关文档(API 文档、文档字符串、示例教程)已更新以反映这些更改。

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects. / 如果此拉取请求对下游或其他相关项目有潜在影响,应在那些项目中测试此 PR。
  • CLA has been signed and all committers have signed the CLA in this PR. / CLA 已签署,且本 PR 中的所有提交者均已签署 CLA。

👥 Collaboration Info / 协作信息

  • Suggested Reviewers / 建议审核人: @xxx
  • Relevant Module Owners / 相关模块负责人: @xxx
  • Other Collaboration Notes / 其他协作说明:

🌟 Useful CI Command / 实用的CI命令

Command / Introduction
/gemini review — Performs a code review for the current pull request in its current state by Gemini.
/gemini summary — Provides a summary of the current pull request in its current state by Gemini.
/gemini help — Displays a list of available Gemini commands.
/readthedocs build — Triggers a documentation build for the current pull request in its current state by Read the Docs.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the benchmarking framework by integrating full support for SWEBench datasets. It introduces new components for loading, inferring, and evaluating models against SWEBench, alongside improvements to the configuration system for greater flexibility and automation. The changes streamline the process of setting up and running complex benchmarks, particularly for code generation and repair tasks, by abstracting away common configuration patterns and providing robust task execution mechanisms.

Highlights

  • SWEBench Integration: Added comprehensive support for SWEBench, including new dataset loaders, inference tasks, and evaluation tasks, enabling the framework to run and evaluate models on SWEBench datasets.
  • Dynamic Configuration Handling: Introduced a recursive configuration type conversion utility and updated core configuration management to automatically fill in default inference, reader, and evaluation configurations for datasets if they are not explicitly defined.
  • Worker Refactoring: Refactored the Infer and Eval workers to improve flexibility and maintainability, allowing them to dynamically handle different task types and merge configurations more robustly.
  • Example Configurations: Provided new example configuration files for SWEBench Lite and SWEBench Verified datasets, demonstrating how to set up and run benchmarks using the newly integrated features.


Changelog
  • ais_bench/benchmark/cli/config_manager.py
    • Imported recur_convert_config_type for recursive configuration processing.
    • Simplified DATASET_REQUIRED_FIELDS by removing reader_cfg, infer_cfg, and eval_cfg.
    • Invoked recur_convert_config_type on the configuration object before dumping it.
  • ais_bench/benchmark/cli/utils.py
    • Imported ConfigDict and Config from mmengine.config.
    • Added a new function recur_convert_config_type to recursively convert config types to string representations.
  • ais_bench/benchmark/cli/workers.py
    • Modified Infer worker to use direct class references for OpenICLApiInferTask and OpenICLInferTask instead of get_config_type.
    • Refactored Infer worker's update_cfg method to conditionally merge new inference configurations and update runner parameters.
    • Modified Eval worker to use direct class references for NaivePartitioner and LocalRunner.
    • Refactored Eval worker's update_cfg method to conditionally merge new evaluation configurations and update runner parameters, including debug, dump details, and extract rate flags.
  • ais_bench/benchmark/datasets/__init__.py
    • Added an import for the new swebench module to make SWEBenchDataset discoverable.
  • ais_bench/benchmark/datasets/swebench.py
    • Added a new file defining the SWEBenchDataset class.
    • Implemented filter_instances method to filter and shuffle dataset instances based on a filter specification.
    • Implemented load method to load SWEBench datasets from parquet files or directly from Hugging Face, supporting various SWEBench variants.
  • ais_bench/benchmark/tasks/__init__.py
    • Added imports for SWEBenchInferTask and SWEBenchEvalTask to register them in the task registry.
  • ais_bench/benchmark/tasks/swebench_eval.py
    • Added a new file defining the SWEBenchEvalTask class.
    • Implemented the run method to evaluate SWE-bench predictions using the official SWE-bench harness.
    • Included logic for handling prediction file paths, output directories, and error reporting for the evaluation process.
  • ais_bench/benchmark/tasks/swebench_infer.py
    • Added a new file defining the SWEBenchInferTask class.
    • Implemented the run method to execute mini-swe-agent on SWE-bench instances for inference.
    • Included helper functions _get_minisweagent_config to adapt ais_bench model configurations to mini-swe-agent's format.
    • Introduced _AISBenchProgressManager and _CompositeProgressManager for detailed progress reporting during inference.
    • Managed concurrent execution of instances using ThreadPoolExecutor and integrated with rich for live progress display.
  • ais_bench/benchmark/utils/config/run.py
    • Imported PromptTemplate, ZeroRetriever, GenInferencer, and AccEvaluator from ais_bench.benchmark.openicl.
    • Enhanced try_fill_in_custom_cfgs to automatically populate infer_cfg, reader_cfg, and eval_cfg with default OpenICL configurations for datasets if they are not present.
  • ais_bench/configs/swe_bench_examples/swe_bench_lite.py
    • Added a new example configuration file for benchmarking with the SWEBench Lite dataset.
    • Defined model, dataset, summarizer, infer, and eval configurations specific to SWEBench Lite.
  • ais_bench/configs/swe_bench_examples/swe_bench_verified.py
    • Added a new example configuration file for benchmarking with the SWEBench Verified dataset.
    • Defined model, dataset, summarizer, infer, and eval configurations specific to SWEBench Verified.
Activity
  • The pull request introduces new features and modifications across multiple files, indicating initial development work.
  • No human comments or reviews have been recorded yet.

@gemini-code-assist (Bot) left a comment


Code Review

This pull request adds support for the SWE-Bench benchmark, including new dataset loaders, inference tasks, and evaluation tasks. The changes are extensive and introduce new dependencies like mini-swe-agent. I've found a few critical issues that will prevent the new workflow from running correctly, related to file handling and incorrect assumptions about library functions. There are also some correctness and performance issues that should be addressed. Please see my detailed comments below.

self.model_cfg,
dataset_cfg,
osp.join(self.work_dir, self.output_subdir),
file_extension="json",
Contributor

critical

The SWE-Bench evaluation harness expects a .jsonl file for predictions. Using a .json extension here will cause a FileNotFoundError in the evaluation step.

Suggested change
file_extension="json",
file_extension="jsonl",

Collaborator (Author)

SWE-bench writes its results in JSON format.

Comment on lines +287 to +289
preds_path = out_dir / "preds.json"
if preds_path.exists():
shutil.move(preds_path, out_path)
Contributor

critical

The process_instance function from mini-swe-agent only creates per-instance trajectory files and does not generate the final aggregated prediction file. You need to add a step to collect the predictions from these trajectory files into a single preds.jsonl file after all instances have been processed. The mini-swe-agent library provides utilities for this. Also, the filename should be preds.jsonl, not preds.json.

Suggested change
preds_path = out_dir / "preds.json"
if preds_path.exists():
shutil.move(preds_path, out_path)
from minisweagent.run.benchmarks.utils.run_utils import get_predictions_from_trajectories
self.logger.info(f"Collecting predictions from trajectories in {out_dir}...")
get_predictions_from_trajectories(str(out_dir))
preds_path = out_dir / "preds.jsonl"
if preds_path.exists():
shutil.move(str(preds_path), out_path)
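For readers outside the mini-swe-agent codebase, the aggregation step the reviewer describes can be sketched in plain Python. The directory layout and record keys below are assumptions based on what the SWE-bench harness expects, not mini-swe-agent's actual output format:

```python
import json
from pathlib import Path


def collect_preds(out_dir: Path, out_path: Path) -> int:
    """Merge per-instance prediction JSON files into one preds.jsonl.

    Assumes (hypothetically) that each instance subdirectory holds a
    <instance_id>.json with the keys the SWE-bench harness expects:
    model_name_or_path, instance_id, model_patch.
    Returns the number of records written.
    """
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for pred_file in sorted(out_dir.glob("*/*.json")):
            record = json.loads(pred_file.read_text(encoding="utf-8"))
            out.write(json.dumps(record) + "\n")  # one JSON object per line
            count += 1
    return count
```

The mini-swe-agent utility suggested above would be the authoritative implementation; this sketch only illustrates the shape of the missing step.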

f"Invalid swebench dataset name, expected one of {list(DATASET_MAPPING.keys())} but got {name}",
)
try:
dataset = load_dataset("parquet", data_files={split: path})
Contributor

high

The function load_dataset("parquet", data_files=...) returns a DatasetDict object, not a Dataset. The subsequent code on line 68, list(dataset), will then operate on the keys of this dictionary (e.g., ['test']) instead of the dataset records, which will cause a TypeError inside filter_instances. You need to select the appropriate split from the DatasetDict before processing it.

Suggested change
dataset = load_dataset("parquet", data_files={split: path})
dataset = load_dataset("parquet", data_files={split: path})[split]
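The mapping pitfall the reviewer points out can be illustrated without the datasets library at all, since a DatasetDict behaves like a dict keyed by split name. A minimal stand-in sketch (a plain dict, not the real API):

```python
# Hypothetical stand-in for a DatasetDict: a mapping from split name to records.
dataset = {"test": [{"instance_id": "a"}, {"instance_id": "b"}]}

# Iterating the mapping yields split names, not records: the bug's root cause.
assert list(dataset) == ["test"]

# Selecting the split first yields the records filter_instances expects.
records = dataset["test"]
assert [r["instance_id"] for r in records] == ["a", "b"]
```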

Comment on lines +17 to +32
for dataset_cfg in config["datasets"]:
if "infer_cfg" not in dataset_cfg:
logger.debug(f"Filling in infer config for dataset {dataset_cfg['abbr']}")
dataset_cfg["infer_cfg"] = dict(
prompt_template=dict(type=get_config_type(PromptTemplate), template="{dummy}"),
retriever=dict(type=get_config_type(ZeroRetriever)),
inferencer=dict(type=get_config_type(GenInferencer)),
)
if "reader_cfg" not in dataset_cfg:
logger.debug(f"Filling in reader config for dataset {dataset_cfg['abbr']}")
dataset_cfg["reader_cfg"] = dict(input_columns=["dummy"], output_column="dummy")
if "eval_cfg" not in dataset_cfg:
logger.debug(f"Filling in eval config for dataset {dataset_cfg['abbr']}")
dataset_cfg["eval_cfg"] = dict(
evaluator=dict(type=get_config_type(AccEvaluator)),
)
Contributor

high

The function get_config_type is called here, but its definition appears later in the file. This will result in a NameError at runtime. Please ensure that get_config_type is defined before it is called by moving its definition to a location before the try_fill_in_custom_cfgs function.

Collaborator (Author)

fixed

Comment thread ais_bench/benchmark/cli/utils.py Outdated
Comment on lines +37 to +38
for i, item in enumerate(cfg):
cfg[i] = recur_convert_config_type(item) if isinstance(item, (dict, ConfigDict, Config, list)) else item
Contributor

medium

The recursive call for list items can be simplified. Instead of checking the type of the item before the recursive call, you can just call recur_convert_config_type on every item. The function already handles non-container types by returning them as is.

Suggested change
for i, item in enumerate(cfg):
cfg[i] = recur_convert_config_type(item) if isinstance(item, (dict, ConfigDict, Config, list)) else item
for i, item in enumerate(cfg):
cfg[i] = recur_convert_config_type(item)
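Taken together with that simplification, the whole helper can be sketched as follows. This is a sketch under assumptions: the real ais_bench version also handles mmengine's ConfigDict/Config wrappers (plain dict stands in for them here), and the name formatting is illustrative:

```python
def recur_convert_config_type(cfg):
    """Recursively replace class objects under 'type' keys with their names.

    Plain dict/list stand in for mmengine ConfigDict/Config in this sketch.
    """
    if isinstance(cfg, dict):
        return {
            key: value.__name__ if key == "type" and isinstance(value, type)
            else recur_convert_config_type(value)
            for key, value in cfg.items()
        }
    if isinstance(cfg, list):
        # Recurse on every item: non-container values are returned unchanged,
        # so no type check is needed before the recursive call.
        return [recur_convert_config_type(item) for item in cfg]
    return cfg  # leaf value, returned as is
```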

Collaborator (Author)

done

DSET_CODES.DATA_PREPROCESSING_ERROR,
f"Failed to load swebench dataset {name} from Hugging Face with error: {e}.",
)
dataset = self.filter_instances(list(dataset), filter_spec=filter_spec, shuffle=shuffle)
Contributor

medium

Converting the entire dataset to a list using list(dataset) can be very memory-intensive, especially for large datasets, and may lead to out-of-memory errors. It is more efficient to use the .filter() method provided by the datasets library, which processes the data in a streaming fashion without loading everything into memory at once. Consider refactoring filter_instances to work directly with Dataset objects.
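The memory pattern behind this suggestion can be shown in plain Python: a generator-style filter touches one record at a time, whereas list(dataset) materializes everything first. A hypothetical sketch (datasets.Dataset.filter does the equivalent with batching and caching):

```python
def iter_filtered(instances, wanted_ids=None):
    """Lazily yield instances whose instance_id is in wanted_ids.

    With no filter, every instance is yielded; either way only one
    record is held in memory at a time.
    """
    wanted = set(wanted_ids) if wanted_ids else None
    for instance in instances:
        if wanted is None or instance["instance_id"] in wanted:
            yield instance
```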

@GaoHuaZhang's smoke-test-approval deployments via GitHub Actions failed three times on April 7 and once on April 9, 2026; deployments on April 8 and April 9 succeeded and were later marked inactive.

class CustomConfigChecker:
MODEL_REQUIRED_FIELDS = ['type', 'abbr', 'attr']
DATASET_REQUIRED_FIELDS = ['type', 'abbr', 'reader_cfg', 'infer_cfg', 'eval_cfg']
Collaborator

[review] 'type' is also not required; this restriction can be lifted as well, so there is no need to validate it.

Collaborator (Author)

This does not affect the current execution logic; it can be removed later if needed.

@GaoHuaZhang temporarily deployed to smoke-test-approval twice on April 10, 2026 via GitHub Actions (now inactive).
@GaoHuaZhang changed the title from 【Feature】SWEBench Support to 【Feature】add SWE-bench example configs and bilingual user guide on Apr 10, 2026.
class CustomConfigChecker:
MODEL_REQUIRED_FIELDS = ['type', 'abbr', 'attr']
DATASET_REQUIRED_FIELDS = ['type', 'abbr', 'reader_cfg', 'infer_cfg', 'eval_cfg']
DATASET_REQUIRED_FIELDS = ['type', 'abbr']
Collaborator

【review】'type'也是非必须的,限制也可以解除,不必校验

Comment thread ais_bench/benchmark/cli/utils.py Outdated
error_msg += f" {error_message_suffix}"
raise AISBenchConfigError(UTILS_CODES.INVALID_INTEGER_TYPE, error_msg)
raise AISBenchConfigError(
UTILS_CODES.INVALID_INTEGER_TYPE, error_msg)
Collaborator

[review] This formatting change is unnecessary; the line is not over the length limit. Just write raise AISBenchConfigError(UTILS_CODES.INVALID_INTEGER_TYPE, error_msg) directly.

Comment thread ais_bench/benchmark/cli/utils.py Outdated
if error_message_suffix:
error_msg += f" {error_message_suffix}"
raise AISBenchConfigError(UTILS_CODES.ARGUMENT_TOO_SMALL, error_msg)
raise AISBenchConfigError(
Collaborator

[review] This formatting change is unnecessary; the line is not over the length limit. The same applies below.

if "infer_cfg" not in dataset_cfg:
logger.debug(f"Filling in infer config for dataset {dataset_cfg['abbr']}")
dataset_cfg["infer_cfg"] = dict(
prompt_template=dict(type=get_config_type(PromptTemplate), template="{dummy}"),
Collaborator

[review] Custom-configuration scenarios that lack infer_cfg, reader_cfg, and eval_cfg rarely need them anyway, so there is no need to supply default values for them.

f"list_decorator({func.__name__}): processing single item"
)
return func(text_or_list, *args, **kwargs)
return wrapper
Collaborator

The decorator approach is more elegant and does not need to change; just add the functools.wraps decorator to preserve the metadata.
To preserve the decorated function's metadata (function name, docstring, parameter signature, etc.), the key is to bind the original function's metadata onto the decorator's wrapper function; the standard-library functools.wraps exists precisely for this.

Complete code after the change

from functools import wraps  # import the core tool
import logging

# Configure logging (optional, for testing)
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def list_decorator(func):
    """Decorator: make the function able to handle list input"""
    @wraps(func)  # key step: preserve the original function's metadata
    def wrapper(text_or_list, *args, **kwargs):
        if isinstance(text_or_list, list):
            logger.debug(
                f"list_decorator({func.__name__}): processing list of {len(text_or_list)} item(s)"
            )
            return [func(text, *args, **kwargs) for text in text_or_list]
        logger.debug(
            f"list_decorator({func.__name__}): processing single item"
        )
        return func(text_or_list, *args, **kwargs)
    return wrapper  # fixes the typo in the original code: wrappe → wrapper

Key changes

  1. Import functools.wraps
    This is Python's built-in standard tool, designed specifically for preserving the original function's metadata in decorators.
  2. Apply the @wraps(func) decorator to wrapper
    This line copies all metadata of the decorated function func (__name__, __doc__, __module__, parameter signature, etc.) onto wrapper.

type="LiteLLMChat",
model="",
api_key="EMPTY",
url="http://127.0.0.1:8000/v1", #API base, e.g. http://127.0.0.1:8000/v1
Collaborator

[review] Add a space after the comment marker '#'. Also, the meaning of url here differs from the url in a regular API model config, so the comment needs a more detailed explanation: http://127.0.0.1:8000/v1 actually resolves to http://127.0.0.1:8000/v1/chat/completions.

dict(
attr="local",
abbr="swebench",
type="LiteLLMChat",
Collaborator

[review] If type is present, could this follow the usual convention of using the imported concrete class rather than the registered signature string? Mainly so readers can jump straight to the class definition; besides, the type in the dataset config already uses the class directly.

Comment on lines +1 to +56
from ais_bench.benchmark.datasets import SWEBenchDataset
from ais_bench.benchmark.partitioners import NaivePartitioner
from ais_bench.benchmark.runners import LocalRunner
from ais_bench.benchmark.tasks import SWEBenchInferTask, SWEBenchEvalTask
from ais_bench.benchmark.summarizers import SWEBenchSummarizer

STEP_LIMIT = 200

models = [
dict(
attr="local",
abbr="swebench",
type="LiteLLMChat",
model="",
api_key="EMPTY",
url="http://127.0.0.1:8000/v1", # API base, e.g. http://127.0.0.1:8000/v1
batch_size=1,
generation_kwargs=dict(),
)
]

datasets = [
dict(
type=SWEBenchDataset,
abbr="swebench_lite",
# Relative to AIS_BENCH_DATASETS_CACHE (default: project root); missing -> HF download
path="",
name="lite",
split="test",
filter_spec="",
shuffle=False,
step_limit=STEP_LIMIT,
),
]

summarizer = dict(
attr="accuracy",
type=SWEBenchSummarizer,
)


infer = dict(
partitioner=dict(type=NaivePartitioner),
runner=dict(
type=LocalRunner,
task=dict(type=SWEBenchInferTask),
),
)

eval = dict(
partitioner=dict(type=NaivePartitioner),
runner=dict(
type=LocalRunner,
task=dict(type=SWEBenchEvalTask),
),
)
Collaborator

[review] All four SWE-Bench subsets use the same agent and dataset class; apart from the dataset's abbr and name, the four config files are entirely duplicated, and abbr is derived from name. Suggest merging the four config files into one and letting users select the subset. With four separate files, the model also has to be configured four times, which is more cumbersome.

dataset_type = "lite" # choose from ["verified", "lite", "full", "multilingual"]
datasets = [
    dict(
        type=SWEBenchDataset,
        abbr=f"swebench_{dataset_type}",
        # Relative to AIS_BENCH_DATASETS_CACHE (default: project root); missing -> HF download
        path="",
        name=dataset_type,
        split="test",
        filter_spec="",
        shuffle=False,
        step_limit=STEP_LIMIT,
    ),
]

@SJTUyh SJTUyh merged commit 8ae4fd0 into AISBench:master Apr 10, 2026
9 checks passed