
Tool Call Accuracy V2 #41740


Open · salma-elshafey wants to merge 18 commits into main

Conversation

@salma-elshafey commented Jun 24, 2025

Description

This PR introduces a new version of the Tool Call Accuracy Evaluator with lower intra- and inter-model variance compared to V1.
It introduces:

  1. A 1-5 scoring rubric, replacing the binary (0/1) rubric.
  2. Evaluation of all tool calls made in a single turn together, instead of evaluating each tool call separately.
  3. A new output format with more details about the tool calls made in a single turn, including excess or missing tool calls and any errors returned.

With V2, we achieved an 11% improvement in human-alignment scores over V1, as shown in the table below:
[Table: human-alignment scores, V1 vs. V2]
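
For context, a hedged sketch of how the evaluator might be invoked (model configuration values and tool data below are placeholders, and input shapes are simplified; this is illustrative, not the exact sample code):

    from azure.ai.evaluation import ToolCallAccuracyEvaluator

    # Placeholder model configuration; fill in real endpoint/deployment values.
    model_config = {
        "azure_endpoint": "https://<your-endpoint>.openai.azure.com",
        "azure_deployment": "<your-deployment>",
        "api_key": "<your-api-key>",
    }

    evaluator = ToolCallAccuracyEvaluator(model_config=model_config)
    result = evaluator(
        query="What is the weather in Seattle?",
        tool_calls=[{
            "type": "tool_call",
            "tool_call_id": "call_1",
            "name": "fetch_weather",
            "arguments": {"location": "Seattle"},
        }],
        tool_definitions=[{
            "name": "fetch_weather",
            "description": "Fetches the weather for a location.",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
            },
        }],
    )
    # V2 returns a 1-5 level plus per-turn details such as excess/missing tool calls.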

All SDK Contribution checklist:

  • The pull request does not introduce breaking changes.
  • CHANGELOG is updated for new features, bug fixes or other significant changes.
  • I have read the contribution guidelines.

General Guidelines and Best Practices

  • Title of the pull request is clear and informative.
  • There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.

Testing Guidelines

  • Pull request includes test coverage for the included changes.

@Copilot Copilot AI review requested due to automatic review settings June 24, 2025 09:58
@salma-elshafey salma-elshafey requested a review from a team as a code owner June 24, 2025 09:58
@github-actions github-actions bot added the Community Contribution, customer-reported, and Evaluation labels Jun 24, 2025

Thank you for your contribution @salma-elshafey! We will review the pull request and get back to you soon.

@Copilot Copilot AI (Contributor) left a comment


Pull Request Overview

This PR updates the Tool Call Accuracy Evaluator to use a scoring rubric ranging from 1 to 5 instead of a binary score and evaluates all tool calls in a single turn collectively. Key changes include:

  • Transition from a binary scoring system (0/1) to a detailed 1–5 rubric.
  • Consolidation of tool call evaluations per turn with enhanced output details.
  • Updates to test cases, sample notebooks, and documentation to align with the new evaluation logic.

Reviewed Changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 1 comment.

Summary per file:

  • sdk/evaluation/azure-ai-evaluation/tests/unittests/test_tool_call_accuracy_evaluator.py: Updated unit tests to verify new scoring and output details.
  • sdk/evaluation/azure-ai-evaluation/tests/unittests/test_agent_evaluators.py: Modified tests for missing input cases and tool definition validations.
  • sdk/evaluation/azure-ai-evaluation/samples/agent_evaluators/tool_call_accuracy.ipynb: Revised sample to demonstrate updated evaluator usage and scoring.
  • sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluators/_tool_call_accuracy/_tool_call_accuracy.py: Core evaluator logic modified to support the new scoring rubric and input handling.
  • sdk/evaluation/azure-ai-evaluation/CHANGELOG.md: Changelog updated to reflect improvements to the evaluator.
Comments suppressed due to low confidence (1)

sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_evaluators/_tool_call_accuracy/_tool_call_accuracy.py:150

  • The current logic overrides a provided 'tool_calls' parameter with those parsed from 'response' when present, which may not align with the documented behavior; consider preserving the explicitly provided 'tool_calls' when both are supplied.
                tool_calls = parsed_tool_calls
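
A minimal sketch of the suggested precedence fix (the helper name is hypothetical; only the fallback check is the point):

    def resolve_tool_calls(tool_calls, parsed_tool_calls):
        # Prefer explicitly provided tool_calls; fall back to the ones parsed
        # from the response only when the caller supplied none.
        return tool_calls if tool_calls is not None else parsed_tool_calls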

The evaluator uses a scoring rubric of 1 to 5:
- Score 1: The tool calls are irrelevant
- Score 2: The tool calls are partially relevant, but not enough tools were called or the parameters were not correctly passed
- Score 3: The tool calls are relevant, but there were unncessary, excessive tool calls made
Copilot AI commented Jun 24, 2025


There is a spelling error in the description for score 3; 'unncessary' should be corrected to 'unnecessary'.

Suggested change
- Score 3: The tool calls are relevant, but there were unncessary, excessive tool calls made
- Score 3: The tool calls are relevant, but there were unnecessary, excessive tool calls made



@salma-elshafey please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"

Contributor License Agreement

Contribution License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”), and conveys certain license rights to Microsoft Corporation and its affiliates (“Microsoft”) for Your contributions to Microsoft open source projects. This Agreement is effective as of the latest signature date below.

  1. Definitions.
    “Code” means the computer software code, whether in human-readable or machine-executable form,
    that is delivered by You to Microsoft under this Agreement.
    “Project” means any of the projects owned or managed by Microsoft and offered under a license
    approved by the Open Source Initiative (www.opensource.org).
    “Submit” is the act of uploading, submitting, transmitting, or distributing code or other content to any
    Project, including but not limited to communication on electronic mailing lists, source code control
    systems, and issue tracking systems that are managed by, or on behalf of, the Project for the purpose of
    discussing and improving that Project, but excluding communication that is conspicuously marked or
    otherwise designated in writing by You as “Not a Submission.”
    “Submission” means the Code and any other copyrightable material Submitted by You, including any
    associated comments and documentation.
  2. Your Submission. You must agree to the terms of this Agreement before making a Submission to any
    Project. This Agreement covers any and all Submissions that You, now or in the future (except as
    described in Section 4 below), Submit to any Project.
  3. Originality of Work. You represent that each of Your Submissions is entirely Your original work.
    Should You wish to Submit materials that are not Your original work, You may Submit them separately
    to the Project if You (a) retain all copyright and license information that was in the materials as You
    received them, (b) in the description accompanying Your Submission, include the phrase “Submission
    containing materials of a third party:” followed by the names of the third party and any licenses or other
    restrictions of which You are aware, and (c) follow any other instructions in the Project’s written
    guidelines concerning Submissions.
  4. Your Employer. References to “employer” in this Agreement include Your employer or anyone else
    for whom You are acting in making Your Submission, e.g. as a contractor, vendor, or agent. If Your
    Submission is made in the course of Your work for an employer or Your employer has intellectual
    property rights in Your Submission by contract or applicable law, You must secure permission from Your
    employer to make the Submission before signing this Agreement. In that case, the term “You” in this
    Agreement will refer to You and the employer collectively. If You change employers in the future and
    desire to Submit additional Submissions for the new employer, then You agree to sign a new Agreement
    and secure permission from the new employer before Submitting those Submissions.
  5. Licenses.
  • Copyright License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license in the
    Submission to reproduce, prepare derivative works of, publicly display, publicly perform, and distribute
    the Submission and such derivative works, and to sublicense any or all of the foregoing rights to third
    parties.
  • Patent License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under
    Your patent claims that are necessarily infringed by the Submission or the combination of the
    Submission with the Project to which it was Submitted to make, have made, use, offer to sell, sell and
    import or otherwise dispose of the Submission alone or with the Project.
  • Other Rights Reserved. Each party reserves all rights not expressly granted in this Agreement.
    No additional licenses or rights whatsoever (including, without limitation, any implied licenses) are
    granted by implication, exhaustion, estoppel or otherwise.
  6. Representations and Warranties. You represent that You are legally entitled to grant the above
    licenses. You represent that each of Your Submissions is entirely Your original work (except as You may
    have disclosed under Section 3). You represent that You have secured permission from Your employer to
    make the Submission in cases where Your Submission is made in the course of Your work for Your
    employer or Your employer has intellectual property rights in Your Submission by contract or applicable
    law. If You are signing this Agreement on behalf of Your employer, You represent and warrant that You
    have the necessary authority to bind the listed employer to the obligations contained in this Agreement.
    You are not expected to provide support for Your Submission, unless You choose to do so. UNLESS
    REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, AND EXCEPT FOR THE WARRANTIES
    EXPRESSLY STATED IN SECTIONS 3, 4, AND 6, THE SUBMISSION PROVIDED UNDER THIS AGREEMENT IS
    PROVIDED WITHOUT WARRANTY OF ANY KIND, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY OF
    NONINFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
  7. Notice to Microsoft. You agree to notify Microsoft in writing of any facts or circumstances of which
    You later become aware that would make Your representations in this Agreement inaccurate in any
    respect.
  8. Information about Submissions. You agree that contributions to Projects and information about
    contributions may be maintained indefinitely and disclosed publicly, including Your name and other
    information that You submit with Your Submission.
  9. Governing Law/Jurisdiction. This Agreement is governed by the laws of the State of Washington, and
    the parties consent to exclusive jurisdiction and venue in the federal courts sitting in King County,
    Washington, unless no federal subject matter jurisdiction exists, in which case the parties consent to
    exclusive jurisdiction and venue in the Superior Court of King County, Washington. The parties waive all
    defenses of lack of personal jurisdiction and forum non-conveniens.
  10. Entire Agreement/Assignment. This Agreement is the entire agreement between the parties, and
    supersedes any and all prior agreements, understandings or communications, written or oral, between
    the parties relating to the subject matter hereof. This Agreement may be assigned by Microsoft.

@microsoft-github-policy-service agree [company="Microsoft"]


@salma-elshafey the command you issued was incorrect. Please try again.

Examples are:

@microsoft-github-policy-service agree

and

@microsoft-github-policy-service agree company="your company"

@microsoft-github-policy-service agree company="Microsoft"

if isinstance(llm_output, dict):
    score = llm_output.get("tool_calls_success_level", None)
    ...
    raise EvaluationException(
        message="Tool call accuracy evaluator: Invalid score returned from LLM.",
Contributor

Please add a constant for it.
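
A hedged sketch of the requested change, assuming "it" refers to the "tool_calls_success_level" magic string (the constant name is hypothetical):

    _TOOL_CALLS_SUCCESS_LEVEL_KEY = "tool_calls_success_level"

    if isinstance(llm_output, dict):
        score = llm_output.get(_TOOL_CALLS_SUCCESS_LEVEL_KEY, None)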

f"{self._result_key}_result": score_result,
f"{self._result_key}_threshold": self.threshold,
f"{self._result_key}_reason": reason,
'applicable': True,
Contributor

What does this field signify?

Author

Whether we ran evaluation on this turn or not. We don't run evaluations in the following cases:

  1. No tool calls happened in the turn.
  2. No tool definitions were provided in the turn.
  3. All or some of the tool calls were of built-in tools.

However, this can be deduced from the score field (whether it's an int value or "not applicable"). Do you suggest we remove this one?

Contributor

If we can deduce this from other fields, please remove it.
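
A minimal sketch of the deduction under discussion (the helper name and result-dict shape are illustrative):

    def is_applicable(result: dict, result_key: str) -> bool:
        # The turn was evaluated iff the score field holds an integer level
        # rather than the string "not applicable".
        return isinstance(result.get(result_key), int)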

Comment on lines 220 to 221
self._EXCESS_TOOL_CALLS_KEY: llm_output.get(self._EXCESS_TOOL_CALLS_KEY, {}),
self._MISSING_TOOL_CALLS_KEY: llm_output.get(self._MISSING_TOOL_CALLS_KEY, {}),
Contributor

Is there a spec that defines which fields should be added?

Author

No, but these fields have been approved by the PM.

f"{self._result_key}_result": 'pass',
f"{self._result_key}_threshold": self.threshold,
f"{self._result_key}_reason": error_message,
"applicable": False,
Contributor

Please remove this field

tool_results_map = {}
if isinstance(response, list):
    for message in response:
        print(message)
Contributor

Please remove the print statements. If you would like to log, please use a logger with the correct log level.
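
For example, a hedged sketch of the suggested replacement (a function wrapper is added for self-containment):

    import logging

    logger = logging.getLogger(__name__)

    def build_tool_results_map(response):
        tool_results_map = {}
        if isinstance(response, list):
            for message in response:
                # Log at debug level instead of printing diagnostic output.
                logger.debug("Processing message: %s", message)
        return tool_results_map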

Your output should consist only of a JSON object, as provided in the examples, that has the following keys:
- chain_of_thought: a string that explains your thought process to decide on the tool call accuracy level. Start this string with 'Let's think step by step:', and think deeply and precisely about which level should be chosen based on the agent's tool calls and how they were able to address the user's query.
- tool_calls_success_level: an integer value between 1 and 5 that represents the level of tool call success, based on the level definitions mentioned before. You need to be very precise when deciding on this level. Ensure you are correctly following the rating system based on the description of each level.
- tool_calls_success_result: 'pass' or 'fail' based on the evaluation level of the tool call accuracy. Levels 1 and 2 are a 'fail'; levels 3, 4, and 5 are a 'pass'.
Contributor

Should this be based on the threshold the customer passes, or does the spec define this?

Author

In the evaluator code, we parse the score/level and generate 'pass' or 'fail' based on the threshold defined by the user, which defaults to 3.
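
A minimal sketch of that logic (names are illustrative, not the exact SDK code):

    DEFAULT_THRESHOLD = 3

    def score_to_result(score: int, threshold: int = DEFAULT_THRESHOLD) -> str:
        # A 1-5 level at or above the user-defined threshold is a pass.
        return "pass" if score >= threshold else "fail"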

Contributor

What is tool_calls_success_result used for? How do we expose it to customers?

Author

I used it previously in the quality analysis, but it's now unused in the SDK code; removed.

Comment on lines 139 to 147
- excess_tool_calls: a dictionary with the following keys:
  - total: total number of excess, unnecessary tool calls made by the agent
  - details: a list of dictionaries, each containing:
    - tool_name: name of the tool
    - excess_count: number of excess calls made for this query
- missing_tool_calls: a dictionary with the following keys:
  - total: total number of missing tool calls that should have been made by the agent to be able to answer the query
  - details: a list of dictionaries, each containing:
    - tool_name: name of the tool
Contributor

Do we provide any instructions on how to come up with excess_tool_calls or missing_tool_calls?

Author

No, but I believe the definitions in their subfields explain them.
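
For illustration, an LLM output following this schema might look like the following (all values and tool names are hypothetical):

    example_output = {
        "excess_tool_calls": {
            "total": 1,
            "details": [{"tool_name": "fetch_weather", "excess_count": 1}],
        },
        "missing_tool_calls": {
            "total": 1,
            "details": [{"tool_name": "fetch_forecast"}],
        },
    }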

"""Return a result indicating that the tool call is not applicable for evaluation.

pr
Contributor

Is this a typo?

Author

Yes, removed.

assert result[key] == "not applicable"
assert result[f"{key}_result"] == "not applicable"
assert result[key] == ToolCallAccuracyEvaluator._NOT_APPLICABLE_RESULT
assert result[f"{key}_result"] == "pass"
Contributor

Should we have the result as 'pass' if it was not applicable?

Author

That was agreed on with the PM.


@changliu2
Please confirm your approval of the above behavior.

- Score 1: The tool call is relevant with properly extracted parameters from the conversation
The evaluator uses a scoring rubric of 1 to 5:
- Score 1: The tool calls are irrelevant
- Score 2: The tool calls are partially relevant, but not enough tools were called or the parameters were not correctly passed
Contributor

Please update the description to include the JTBD that were discussed in the tool accuracy doc.
