From ea4e0cf56e20e5df68443775bb4d5765eb214687 Mon Sep 17 00:00:00 2001
From: Julien Chaumond
Date: Fri, 28 Nov 2025 15:59:57 +0100
Subject: [PATCH] Fix the quickstart description?

---
 README.md                     | 2 +-
 docs/source/index.mdx         | 2 +-
 src/lighteval/main_inspect.py | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index e0bb47dd1..cdead9b8a 100644
--- a/README.md
+++ b/README.md
@@ -124,7 +124,7 @@ Lighteval offers the following entry points for model evaluation:
 Did not find what you need ? You can always make your custom model API by following [this guide](https://huggingface.co/docs/lighteval/main/en/evaluating-a-custom-model)
 - `lighteval custom`: Evaluate custom models (can be anything)
 
-Here's a **quick command** to evaluate using the *Accelerate backend*:
+Here's a **quick command** to evaluate using a remote inference service:
 
 ```shell
 lighteval eval "hf-inference-providers/openai/gpt-oss-20b" gpqa:diamond
diff --git a/docs/source/index.mdx b/docs/source/index.mdx
index 0c265cdb6..00721c256 100644
--- a/docs/source/index.mdx
+++ b/docs/source/index.mdx
@@ -9,7 +9,7 @@ and see how your models stack up.
 ### 🚀 **Multi-Backend Support**
 
 Evaluate your models using the most popular and efficient inference backends:
-- `eval`: Use [inspect-ai](https://inspect.aisi.org.uk/) as backend to evaluate and inspect your models ! (prefered way)
+- `eval`: Use [inspect-ai](https://inspect.aisi.org.uk/) as backend to evaluate and inspect your models! (preferred way)
 - `transformers`: Evaluate models on CPU or one or more GPUs using [🤗 Accelerate](https://github.com/huggingface/transformers)
 - `nanotron`: Evaluate models in distributed settings using [⚡️
diff --git a/src/lighteval/main_inspect.py b/src/lighteval/main_inspect.py
index 206a09355..4aa5cbf87 100644
--- a/src/lighteval/main_inspect.py
+++ b/src/lighteval/main_inspect.py
@@ -565,4 +565,4 @@ def bundle(log_dir: str, output_dir: str, overwrite: bool = True, repo_id: str |
         "tiny_benchmarks",
     ]
     model = "hf-inference-providers/meta-llama/Llama-3.1-8B-Instruct:nebius"
-    eval(models=[model], tasks=task)
+    eval(models=[model], tasks=tasks[0])