Extract RAI scorer token metrics into Score metadata and save to memory (#45865)
Merged

slister1001 merged 2 commits into Azure:main on Mar 24, 2026
Conversation
- Extract token usage (`prompt_tokens`, `completion_tokens`, `total_tokens`) from the RAI service `eval_result` via `sample.usage` or result-level `properties.metrics`
- Add `token_usage` to the `score_metadata` dict in `RAIServiceScorer`
- Save scores to PyRIT `CentralMemory` after creation (fail-safe)
- Propagate scorer `token_usage` through `ResultProcessor` to output item `properties.metrics` for downstream aggregation
- Add 5 unit tests covering token extraction, memory save, and error handling

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
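The two-level extraction described above can be sketched roughly as follows. This is an illustrative sketch, not the SDK's actual implementation: the function name and the exact dict layout of `eval_result` are assumptions.

```python
# Illustrative sketch of the two-level token-usage fallback; the function
# name and dict layout are assumptions, not the SDK's real implementation.

def extract_token_usage(eval_result: dict) -> dict:
    """Prefer sample-level usage; fall back to result-level properties.metrics."""
    keys = ("prompt_tokens", "completion_tokens", "total_tokens")

    # Level 1: sample.usage carries per-call token counts when present.
    usage = (eval_result.get("sample") or {}).get("usage") or {}
    if any(k in usage for k in keys):
        return {k: usage.get(k, 0) for k in keys}

    # Level 2: some results surface counts under properties.metrics instead.
    metrics = (eval_result.get("properties") or {}).get("metrics") or {}
    if any(k in metrics for k in keys):
        return {k: metrics.get(k, 0) for k in keys}

    return {}  # absent data: leave score_metadata without token_usage
```

When both levels are absent, returning an empty dict lets the caller skip attaching `token_usage` entirely rather than recording zeros.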
Contributor
Pull request overview
This PR enhances red teaming result fidelity by extracting RAI scorer token usage into score metadata, persisting scores into PyRIT CentralMemory, and propagating token metrics into output item properties.metrics for downstream aggregation.
Changes:
- Extract token usage (`prompt_tokens`, `completion_tokens`, `total_tokens`, `cached_tokens`) from RAI evaluation results and attach it to `Score.score_metadata`.
- Save created scores into PyRIT `CentralMemory` (best-effort) to support later retrieval (e.g., `attack_result.last_score`).
- Propagate scorer token usage through `ResultProcessor` into output item `properties.metrics`, with unit tests covering extraction + memory behavior.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/red_team/_foundry/_rai_scorer.py | Adds token usage extraction + score metadata builder; persists scores to CentralMemory. |
| sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/red_team/_result_processor.py | Reads token usage from serialized score metadata and propagates it into output properties.metrics when eval metrics aren't present. |
| sdk/evaluation/azure-ai-evaluation/tests/unittests/test_redteam/test_foundry.py | Adds unit tests for token usage extraction paths and fail-safe memory persistence. |
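The `ResultProcessor` side of the change could look roughly like the sketch below. The function name is hypothetical; the one assumption taken from the PR is that PyRIT serializes score metadata to a string, so the reader must tolerate both a JSON string and a dict, and that scorer usage is only a fallback when eval metrics are absent.

```python
import json

def propagate_token_usage(output_item: dict, score_metadata) -> None:
    """Fill output properties.metrics from scorer token_usage, but only
    when the eval row did not already provide metrics."""
    props = output_item.setdefault("properties", {})
    metrics = props.setdefault("metrics", {})
    if metrics:
        return  # eval-provided metrics take precedence over scorer usage

    # Score metadata may arrive as a JSON string after PyRIT serialization.
    if isinstance(score_metadata, str):
        try:
            score_metadata = json.loads(score_metadata)
        except json.JSONDecodeError:
            return  # unparseable metadata: leave metrics untouched
    usage = (score_metadata or {}).get("token_usage")
    if usage:
        metrics.update(usage)
```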
Match against canonical and legacy metric name aliases when extracting token usage from result-level `properties.metrics`, consistent with how score extraction already handles aliases via `_SYNC_TO_LEGACY_METRIC_NAMES` and `_LEGACY_TO_SYNC_METRIC_NAMES`.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
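The alias matching this commit describes could be sketched like this. The mapping names follow the commit message, but their contents here are made-up examples; the real tables live in the SDK.

```python
# Hypothetical stand-ins for the SDK's alias tables; the entry shown is an
# example, not the real mapping.
_SYNC_TO_LEGACY_METRIC_NAMES = {"hate_unfairness": "hate_fairness"}
_LEGACY_TO_SYNC_METRIC_NAMES = {v: k for k, v in _SYNC_TO_LEGACY_METRIC_NAMES.items()}

def metric_name_candidates(name: str) -> list:
    """Return the given name plus any known aliases, in lookup order."""
    candidates = [name]
    if name in _SYNC_TO_LEGACY_METRIC_NAMES:
        candidates.append(_SYNC_TO_LEGACY_METRIC_NAMES[name])
    if name in _LEGACY_TO_SYNC_METRIC_NAMES:
        candidates.append(_LEGACY_TO_SYNC_METRIC_NAMES[name])
    return candidates

def lookup_metric(metrics: dict, name: str):
    """First match wins across canonical and legacy spellings."""
    for candidate in metric_name_candidates(name):
        if candidate in metrics:
            return metrics[candidate]
    return None
```

Checking both directions means the fallback works whether the service reported the canonical or the legacy spelling.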
nagkumar91
approved these changes
Mar 24, 2026
Member
nagkumar91
left a comment
Clean, well-scoped PR. No issues found.
- Token extraction with two-level fallback (`sample.usage` → `result.properties.metrics`) is robust ✅
- Memory save with graceful degradation on failure ✅
- String metadata handling for PyRIT serialization ✅
- Scorer token usage propagation as fallback when `eval_row` lacks metrics ✅
- 5 tests covering extraction, fallback, absent data, memory save, and memory failure ✅
LGTM.
slister1001 added a commit that referenced this pull request on Mar 24, 2026
…ry (#45865)

* Extract RAI scorer token metrics into Score metadata and save to memory

  - Extract token usage (prompt_tokens, completion_tokens, total_tokens) from RAI service eval_result via sample.usage or result properties.metrics
  - Add token_usage to score_metadata dict in RAIServiceScorer
  - Save scores to PyRIT CentralMemory after creation (fail-safe)
  - Propagate scorer token_usage through ResultProcessor to output item properties.metrics for downstream aggregation
  - Add 5 unit tests covering token extraction, memory save, and error handling

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Use metric aliases in _extract_token_usage fallback

  Match against canonical and legacy metric name aliases when extracting token usage from result-level properties.metrics, consistent with how score extraction already handles aliases via _SYNC_TO_LEGACY_METRIC_NAMES and _LEGACY_TO_SYNC_METRIC_NAMES.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
slister1001 added a commit that referenced this pull request on Mar 24, 2026
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
slister1001 added a commit to slister1001/azure-sdk-for-python that referenced this pull request on Mar 30, 2026
- Backport 1.16.2 hotfix CHANGELOG with release date (2026-03-24)
- Add missing token metrics entry (PR Azure#45865) to 1.16.2 section
- Add 1.16.3 (Unreleased) section with existing extra_headers feature
- Bump _version.py to 1.16.3

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
slister1001 added a commit that referenced this pull request on Mar 31, 2026
- Backport 1.16.2 hotfix CHANGELOG with release date (2026-03-24)
- Add missing token metrics entry (PR #45865) to 1.16.2 section
- Add 1.16.3 (Unreleased) section with existing extra_headers feature
- Bump _version.py to 1.16.3

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
slister1001 added a commit that referenced this pull request on Apr 1, 2026
- Backport 1.16.2 hotfix CHANGELOG with release date (2026-03-24)
- Add missing token metrics entry (PR #45865) to 1.16.2 section
- Add 1.16.3 (Unreleased) section with existing extra_headers feature
- Bump _version.py to 1.16.3

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>