Fix MTEBEvaluator: device mapping, padding-free inference, last-token pooling, L2 normalization#2415
Merged
Conversation
…nsformer Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Qwen3-Embedding uses last-token pooling (not mean pooling) and L2 normalization, matching its SentenceTransformer pipeline config:
- `pooling_mode_lasttoken: true`
- `2_Normalize` module

This fixes the ~17% score drop between HF and exported model evaluation.
Last-token pooling made scores worse (0.378 vs 0.651 with mean pooling), likely because GenAI hidden_states do not align with the HF tokenizer's attention_mask positions. Reverting pooling to mean while keeping L2 normalization, which should still improve scores.
Temporary debug logging; remove before merge.
GenAI hidden_states shape matches input_ids shape exactly (including padding positions), so last-token pooling via attention_mask is correct. Debug logging kept temporarily for verification.
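Since the hidden states include padding positions, the last real token of each sequence must be selected via the attention mask. A minimal sketch, using a hypothetical `last_token_pool` helper (not the actual Olive code) and assuming right-padded batches:

```python
import numpy as np

def last_token_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Pool by taking the hidden state of each sequence's last real token.

    hidden_states: (batch, seq_len, hidden_dim), padding positions included.
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding.
    """
    # Index of the last real token in each sequence (right-padding assumed).
    last_idx = attention_mask.sum(axis=1) - 1           # (batch,)
    batch_idx = np.arange(hidden_states.shape[0])
    return hidden_states[batch_idx, last_idx]           # (batch, hidden_dim)

# Example: batch of 2, seq_len 4, hidden_dim 3
hs = np.arange(24, dtype=np.float32).reshape(2, 4, 3)
mask = np.array([[1, 1, 1, 0],   # 3 real tokens -> take position 2
                 [1, 1, 1, 1]])  # 4 real tokens -> take position 3
pooled = last_token_pool(hs, mask)
print(pooled.shape)  # (2, 3)
```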
GenAI Generator doesn't accept attention_mask, so padded batches produce contaminated hidden states. Fix: process each sentence individually with only its real tokens, then take last-token pooling. This should close the gap between HF (0.785) and GenAI (0.651) scores.
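The per-sentence approach can be sketched as follows; `run_model` is a hypothetical stub standing in for the real GenAI forward pass, which would run the exported model on exactly the tokens given:

```python
import numpy as np

def run_model(token_ids):
    """Stand-in for a GenAI forward pass: returns one hidden state per token.

    Hypothetical stub -- the real call would run the exported model on
    exactly these token ids, with no padding and hence no attention_mask.
    """
    rng = np.random.default_rng(len(token_ids))
    return rng.standard_normal((len(token_ids), 8)).astype(np.float32)

def encode_unpadded(tokenized_batch):
    """Encode each sentence individually with only its real tokens.

    Because no padding token ever enters the model, self-attention cannot
    mix padding into the hidden states, and the final hidden state is the
    true last-token embedding.
    """
    embeddings = []
    for token_ids in tokenized_batch:
        hidden = run_model(token_ids)   # (num_real_tokens, hidden_dim)
        embeddings.append(hidden[-1])   # last-token pooling is trivial here
    return np.stack(embeddings)

batch = [[101, 7592, 102], [101, 2088, 999, 102]]
print(encode_unpadded(batch).shape)  # (2, 8)
```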
Contributor
Pull request overview
Note
Copilot was unable to run its full agentic suite in this review.
Improves correctness and consistency of MTEB embedding evaluation across HF / ORT / GenAI backends by aligning device strings, pooling strategy, padding behavior, and embedding normalization.
Changes:
- Map Olive `gpu`/`gpu:<idx>` device strings to PyTorch `cuda`/`cuda:<idx>` for SentenceTransformer initialization.
- Switch the ORT and GenAI wrappers from mean pooling to last-token pooling; avoid padding in GenAI by encoding each sequence using only its real tokens.
- Add L2 normalization to ORT embeddings to match SentenceTransformer's Normalize module.
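The L2 normalization step amounts to scaling each embedding row to unit norm, as sentence-transformers' Normalize module does. A sketch with a hypothetical `l2_normalize` helper (not the actual Olive code):

```python
import numpy as np

def l2_normalize(embeddings: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Scale each embedding row to unit L2 norm.

    The eps floor avoids division by zero for all-zero rows.
    """
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.maximum(norms, eps)

emb = np.array([[3.0, 4.0], [0.0, 2.0]])
unit = l2_normalize(emb)
# row values become [0.6, 0.8] and [0.0, 1.0]; each row now has norm 1
```

With unit-norm embeddings, dot product and cosine similarity coincide, which is what similarity-based MTEB tasks expect.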
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| olive/evaluator/olive_evaluator.py | Normalizes Olive device strings to PyTorch-compatible cuda strings in HF evaluation path. |
| olive/evaluator/mteb_ort.py | Adds L2 normalization, switches pooling to last-token, and removes padding from GenAI inference by per-sample processing. |
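The device-string normalization described above might look like this; `olive_to_torch_device` is a hypothetical helper for illustration, while the actual logic lives in `olive/evaluator/olive_evaluator.py`:

```python
def olive_to_torch_device(device: str) -> str:
    """Map an Olive device string like 'gpu' or 'gpu:1' to the PyTorch
    equivalent 'cuda' / 'cuda:1'; other devices (e.g. 'cpu') pass through.
    """
    name, _, index = device.partition(":")
    if name == "gpu":
        return f"cuda:{index}" if index else "cuda"
    return device

print(olive_to_torch_device("gpu"))    # cuda
print(olive_to_torch_device("gpu:0"))  # cuda:0
print(olive_to_torch_device("cpu"))    # cpu
```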
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…-a-time Group sequences with equal real token counts into a single Generator call, reducing per-sample overhead while still avoiding padding contamination.
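The grouping idea can be sketched as follows; `group_by_length` is a hypothetical helper, not the actual implementation:

```python
from collections import defaultdict

def group_by_length(tokenized):
    """Bucket sequences by real token count so equal-length sequences can
    share one padding-free Generator call.

    `tokenized` is a list of token-id lists; returns a mapping from
    length -> list of (original_index, token_ids), so per-bucket results
    can be scattered back into the input order afterwards.
    """
    buckets = defaultdict(list)
    for i, ids in enumerate(tokenized):
        buckets[len(ids)].append((i, ids))
    return buckets

batch = [[1, 2, 3], [4, 5], [6, 7, 8], [9]]
buckets = group_by_length(batch)
# lengths 3, 2, 3, 1 -> three buckets; the length-3 bucket holds two sequences
print(sorted((length, len(items)) for length, items in buckets.items()))
# [(1, 1), (2, 1), (3, 2)]
```

Each bucket can then be stacked into a rectangular tensor with no padding at all, preserving correctness while amortizing the per-call overhead.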
jambayk
approved these changes
Apr 16, 2026
xiaoyu-work
pushed a commit
that referenced
this pull request
Apr 17, 2026
… pooling, L2 normalization (#2415)
Fixes several issues in the MTEBEvaluator for embedding model evaluation:
**Device mapping**

Maps Olive's `Device.GPU` (`"gpu"`) to PyTorch's `"cuda"` when initializing `SentenceTransformer` in the HF evaluation path. Also handles indexed devices (e.g. `gpu:0` → `cuda:0`).

**Padding-free inference for GenAI**

GenAI's `Generator` does not accept an `attention_mask`, so padded batches produce contaminated hidden states via self-attention to padding tokens. Fix: process each sentence individually with only its real tokens, eliminating padding entirely.

**Last-token pooling**

Replaced mean pooling with last-token pooling in the GenAI and ORT wrappers to match models like Qwen3-Embedding that use `pooling_mode_lasttoken=True`.

**L2 normalization**

Added L2 normalization after pooling in the base `encode()` method, matching the `2_Normalize` module in the SentenceTransformer pipeline.

**Results**
These fixes close the score gap between HF and GenAI evaluation: