Howie/sample test report#45208
Merged
howieleung merged 6 commits into feature/azure-ai-projects/2.0.0b4 on Feb 18, 2026
Conversation
Contributor
Pull request overview
This pull request adds optional logging functionality to the sample test executor framework for the azure-ai-projects SDK. The primary purpose is to provide better debugging capabilities by capturing sample execution output (print statements) to log files when tests fail, pass, or encounter validation errors.
Changes:
- Added two new logging methods (`_write_error_log` and `_write_success_log`) that conditionally write captured output to temp-directory files based on environment variables
- Enhanced exception handling in the `execute` and `execute_async` methods to log output before re-raising exceptions
- Updated the validation result assertion to log both successful and failed validation results
- Added environment variable templates for configuring log file naming patterns
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| sdk/ai/azure-ai-projects/tests/samples/sample_executor.py | Implements core logging infrastructure with two new methods for writing error/success logs, integrates logging into execution and validation workflows |
| sdk/ai/azure-ai-projects/samples/agents/tools/sample_agent_azure_function.py | Minor formatting fix removing an extraneous blank line |
| sdk/ai/azure-ai-projects/.env.template | Documents the new environment variables for configuring log file names |
API Change Check: APIView identified API-level changes in this PR and created the following API reviews.
…ndling and clarity
dargilco
approved these changes
Feb 17, 2026
…dation failure handling
Member
Author
/azp run python - pullrequest

Azure Pipelines successfully started running 1 pipeline(s).
dargilco
approved these changes
Feb 18, 2026
Merged commit 102f164 into feature/azure-ai-projects/2.0.0b4
20 checks passed
Always create reports for sample tests during live mode. In .env.template, I recommend that the report names include the word passed, failed, or errors.
The OpenAPI client writes the URL, request, and response via print statements, which were already captured into the report. Our emitted code, on the other hand, writes the URL, request, and response content to a logger, which was not captured into the report. Now I capture that logger output as well.
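One way to capture both kinds of output, sketched here as an assumption rather than the PR's actual implementation: redirect stdout for the print statements and, at the same time, attach a `logging.StreamHandler` bound to the same buffer so logger output lands in it too. The use of the root logger below is illustrative; the real code would target the SDK's own logger.

```python
import io
import logging
from contextlib import redirect_stdout


def capture_all_output(fn) -> str:
    """Run fn() and return everything it printed or logged.

    Redirecting stdout alone misses records written to a logger, so a
    handler writing into the same StringIO buffer is attached for the
    duration of the call and removed afterwards.
    """
    buf = io.StringIO()
    handler = logging.StreamHandler(buf)
    root = logging.getLogger()  # illustrative; a named logger works too
    old_level = root.level
    root.addHandler(handler)
    root.setLevel(logging.DEBUG)
    try:
        with redirect_stdout(buf):
            fn()
    finally:
        root.removeHandler(handler)
        root.setLevel(old_level)
    return buf.getvalue()
```

The `try`/`finally` ensures the handler and level are restored even when the sample raises, which matters because the executor re-raises exceptions after logging.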
No matter how hard I fine-tune the LLM instructions, the LLM still thinks a couple of samples have improper outputs even though the outputs are valid. So I added a feature to list them as exceptions and let the test pass.
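The exception mechanism described above could look like the following allow-list sketch. Both the set contents and the function name are hypothetical; the real PR's data structure and sample names may differ.

```python
# Hypothetical allow-list of sample names whose outputs the LLM
# validator consistently mis-flags even though they are valid.
VALIDATION_EXCEPTIONS = {"sample_agent_azure_function.py"}


def validation_passed(sample_name: str, llm_verdict: bool) -> bool:
    """Return the final pass/fail decision for a sample's output.

    Allow-listed samples pass regardless of the LLM verdict; all
    others keep whatever the validator decided.
    """
    if sample_name in VALIDATION_EXCEPTIONS:
        return True
    return llm_verdict
```

Keeping the override at the assertion layer, rather than weakening the LLM prompt further, leaves the validator strict for every sample not explicitly excepted.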