Client code update #41782

Merged: 25 commits merged into Azure:main on Jun 30, 2025

Conversation

w-javed (Contributor) commented Jun 25, 2025

No description provided.

Copilot AI review requested due to automatic review settings on June 25, 2025 at 22:24
w-javed requested a review from a team as a code owner on June 25, 2025 at 22:24
The github-actions bot added the Evaluation label (Issues related to the client library for Azure AI Evaluation) on Jun 25, 2025
Copilot AI (Contributor) left a comment

Pull Request Overview

This PR updates the client code to support new “agent evaluation” and “with credentials” operations, refactors import paths, and standardizes return types and request headers.

  • Refactored import locations for JSON encoder and (de)serializer utilities under _utils
  • Added new builders and operations for agent evaluation, connection credentials, and red team polling
  • Updated method signatures to include client_request_id, adjusted HTTP status checks to expect 201 for create calls, and switched paged return types to ItemPaged/AsyncItemPaged (see the caller-side sketch below)
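
For context on the paging change, here is a minimal caller-side sketch of what the switch to ItemPaged means; the client, operation, and parameter names are placeholders rather than the actual generated surface:

    from azure.core.paging import ItemPaged

    # Illustrative only: list-style operations now return ItemPaged, so results
    # are fetched lazily, page by page, as the caller iterates.
    # `project_client`, `evaluations.list`, and the ID value are placeholders.
    pager: ItemPaged = project_client.evaluations.list(
        client_request_id="00000000-0000-0000-0000-000000000000",  # optional correlation header
    )
    for item in pager:
        print(item)  # each iteration may transparently trigger the next page request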

Reviewed Changes

Copilot reviewed 15 out of 24 changed files in this pull request and generated no comments.

File                                               Description
sdk/evaluation/.../operations/_operations.py       Core sync operation builders updated and new methods added
sdk/evaluation/.../operations/__init__.py          Removed ServicePatternsOperations export
sdk/evaluation/.../models/_models.py               Switched to _utils.model_base.Model, added agent models
sdk/evaluation/.../models/_enums.py                Extended enums (chat roles, finish reasons, attack types)
sdk/evaluation/.../aio/operations/_operations.py   Async operations updated similarly to sync
sdk/evaluation/.../aio/_configuration.py           Simplified credential initialization logic
sdk/evaluation/.../aio/_client.py                  Cleaned up client constructor and removed deprecated fields
Comments suppressed due to low confidence (4)

sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_common/onedp/operations/_operations.py:133

  • Consider adding unit tests for build_connections_list_with_credentials_request to verify that the request method, URL, and query parameters serialize correctly.
def build_connections_list_with_credentials_request(  # pylint: disable=name-too-long
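
A possible shape for such a test, as a sketch; the api-version value and the URL/header expectations are assumptions to be aligned with the real service contract:

    from azure.ai.evaluation._common.onedp.operations._operations import (
        build_connections_list_with_credentials_request,
    )


    def test_list_with_credentials_request_shape():
        # Assumed api-version; adjust to the actual TypeSpec contract.
        request = build_connections_list_with_credentials_request(api_version="2025-05-15-preview")
        assert request.method == "POST"                          # pin whichever verb the contract specifies
        assert "api-version=2025-05-15-preview" in request.url   # query parameter serialized into the URL
        assert request.headers["Accept"] == "application/json"   # assumed standard Accept header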

sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_common/onedp/operations/_operations.py:93

  • The HTTP method for build_connections_get_with_credentials_request was changed from GET to POST, which will likely break the expected GET endpoint. It should remain GET unless the service contract was intentionally updated.
    return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs)

sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_common/onedp/operations/_operations.py:142

  • case_insensitive_dict is used here but not imported in this module; add from azure.core.utils import case_insensitive_dict to avoid a NameError.
    _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
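
The corresponding fix is a one-line import alongside the other azure-core imports at the top of the module:

    # Required so the case_insensitive_dict call above resolves at runtime.
    from azure.core.utils import case_insensitive_dict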

sdk/evaluation/azure-ai-evaluation/azure/ai/evaluation/_common/onedp/models/_enums.py:46

  • [nitpick] The enum member name ANSII_ATTACK is misspelled; the abbreviation should be ANSI, so update the name and value to ANSI_ATTACK.
    ANSII_ATTACK = "ansii_attack"

nagkumar91 merged commit 99918d3 into Azure:main on Jun 30, 2025
19 checks passed