Support context manager protocol for traced iterator APIs #10576

GitHub Actions / Executor E2E Test Result [migu/context-manager](https://github.com/microsoft/promptflow/actions/workflows/promptflow-executor-e2e-test.yml?query=branch:migu/context-manager++) failed May 15, 2024 in 0s

3 fail, 6 skipped, 237 pass in 4m 51s

246 tests  ±0   237 ✅ -1   4m 51s ⏱️ -13s
  1 suites ±0     6 💤 ±0
  1 files  ±0     3 ❌ +1

Results for commit ed54c05. ± Comparison against earlier commit 7789b89.

Annotations

Check warning on line 0 in tests.executor.e2etests.test_executor_happypath.TestExecutor

test_long_running_log (tests.executor.e2etests.test_executor_happypath.TestExecutor) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 6s]
Raw output
AssertionError: flow_logger should contain long running async tool log
assert None
self = <executor.e2etests.test_executor_happypath.TestExecutor object at 0x7f8164870a90>
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'c2881c848bf048e9b3198a2a64464ef3', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}
capsys = <_pytest.capture.CaptureFixture object at 0x7f8164562d00>

    def test_long_running_log(self, dev_connections, capsys):
        # TODO: investigate why flow_logger does not output to stdout in test case
        from promptflow._utils.logger_utils import flow_logger
    
        flow_logger.addHandler(logging.StreamHandler(sys.stdout))
    
        # Test long running tasks with log
        os.environ["PF_LONG_RUNNING_LOGGING_INTERVAL"] = "1"
        executor = FlowExecutor.create(get_yaml_file("async_tools"), dev_connections)
        executor.exec_line(self.get_line_inputs())
        captured = capsys.readouterr()
        expected_long_running_str_1 = r".*.*Task async_passthrough has been running for 1 seconds, stacktrace:\n.*async_passthrough\.py.*in passthrough_str_and_wait\n.*await asyncio.sleep\(1\).*tasks\.py.*"  # noqa E501
>       assert re.match(
            expected_long_running_str_1, captured.out, re.DOTALL
        ), "flow_logger should contain long running async tool log"
E       AssertionError: flow_logger should contain long running async tool log
E       assert None
E        +  where None = <function match at 0x7f8171977e50>('.*.*Task async_passthrough has been running for 1 seconds, stacktrace:\\n.*async_passthrough\\.py.*in passthrough_str_and_wait\\n.*await asyncio.sleep\\(1\\).*tasks\\.py.*', 'Start executing nodes in async mode.\nUsing value of PF_LONG_RUNNING_LOGGING_INTERVAL in environment variable as logging interval: 1\nmonitor_long_running_coroutine started\ntask Task-18 has no start time, which should not happen\nStart to run 3 nodes with the current event loop.\nExecuting node async_passthrough. node run id: 468c7a26-3339-4316-b817-acc092e403c7_async_passthrough_93ba4819-ecc5-49c7-bf85-48308da33d10\n[async_passthrough in line None (index starts from 0)] stdout> Wait for 3 seconds in async function\n[async_passthrough in line None (index starts from 0)] stdout> 0\ntask Task-20 has no start time, which should not happen\n[async_passthrough in line None (index starts from 0)] stdout> 1\n[async_passthrough in line None (index starts from 0)] stdout> 2\nTask async_passthrough has been running for 2 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\ntask Task-20 has no start time, which should not happen\nNode async_passthrough completes.\nExecuting node async_passthrough1. node run id: 468c7a26-3339-4316-b817-acc092e403c7_async_passthrough1_a8712550-83be-4819-af1c-569667ab0182\n[async_passthrough1 in line None (index starts from 0)] stdout> Wait for 3 seconds in async function\n[async_passthrough1 in line None (index starts from 0)] stdout> 0\nExecuting node async_passthrough2. 
node run id: 468c7a26-3339-4316-b817-acc092e403c7_async_passthrough2_ef114918-729b-46fb-88ec-a7b3d75f07f7\n[async_passthrough2 in line None (index starts from 0)] stdout> Wait for 3 seconds in async function\n[async_passthrough2 in line None (index starts from 0)] stdout> 0\ntask Task-23 has no start time, which should not happen\n[async_passthrough1 in line None (index starts from 0)] stdout> 1\n[async_passthrough2 in line None (index starts from 0)] stdout> 1\ntask Task-23 has no start time, which should not happen\nTask async_passthrough2 has been running for 1 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\nTask async_passthrough1 has been running for 1 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\n[async_passthrough1 in line None (index starts from 0)] stdout> 2\n[async_passthrough2 in line None (index starts from 0)] stdout> 2\ntask Task-23 has no start time, which should not happen\nTask async_passthrough2 has been running for 2 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\nTask async_passthrough1 has been running for 2 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\nNode async_passthrough1 completes.\nNode async_passthrough2 completes.\n', re.DOTALL)
E        +    where <function match at 0x7f8171977e50> = re.match
E        +    and   'Start executing nodes in async mode.\nUsing value of PF_LONG_RUNNING_LOGGING_INTERVAL in environment variable as logging interval: 1\nmonitor_long_running_coroutine started\ntask Task-18 has no start time, which should not happen\nStart to run 3 nodes with the current event loop.\nExecuting node async_passthrough. node run id: 468c7a26-3339-4316-b817-acc092e403c7_async_passthrough_93ba4819-ecc5-49c7-bf85-48308da33d10\n[async_passthrough in line None (index starts from 0)] stdout> Wait for 3 seconds in async function\n[async_passthrough in line None (index starts from 0)] stdout> 0\ntask Task-20 has no start time, which should not happen\n[async_passthrough in line None (index starts from 0)] stdout> 1\n[async_passthrough in line None (index starts from 0)] stdout> 2\nTask async_passthrough has been running for 2 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\ntask Task-20 has no start time, which should not happen\nNode async_passthrough completes.\nExecuting node async_passthrough1. node run id: 468c7a26-3339-4316-b817-acc092e403c7_async_passthrough1_a8712550-83be-4819-af1c-569667ab0182\n[async_passthrough1 in line None (index starts from 0)] stdout> Wait for 3 seconds in async function\n[async_passthrough1 in line None (index starts from 0)] stdout> 0\nExecuting node async_passthrough2. 
node run id: 468c7a26-3339-4316-b817-acc092e403c7_async_passthrough2_ef114918-729b-46fb-88ec-a7b3d75f07f7\n[async_passthrough2 in line None (index starts from 0)] stdout> Wait for 3 seconds in async function\n[async_passthrough2 in line None (index starts from 0)] stdout> 0\ntask Task-23 has no start time, which should not happen\n[async_passthrough1 in line None (index starts from 0)] stdout> 1\n[async_passthrough2 in line None (index starts from 0)] stdout> 1\ntask Task-23 has no start time, which should not happen\nTask async_passthrough2 has been running for 1 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\nTask async_passthrough1 has been running for 1 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\n[async_passthrough1 in line None (index starts from 0)] stdout> 2\n[async_passthrough2 in line None (index starts from 0)] stdout> 2\ntask Task-23 has no start time, which should not happen\nTask async_passthrough2 has been running for 2 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\nTask async_passthrough1 has been running for 2 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\nNode async_passthrough1 completes.\nNode async_passthrough2 completes.\n' = CaptureResult(out='Start executing nodes in async mode.\nUsing value of PF_LONG_RUNNING_LOGGING_INTERVAL in environment variable as logging interval: 1\nmonitor_long_running_coroutine started\ntask Task-18 has no start time, which should not happen\nStart to run 3 nodes with the current event loop.\nExecuting node async_passthrough. 
node run id: 468c7a26-3339-4316-b817-acc092e403c7_async_passthrough_93ba4819-ecc5-49c7-bf85-48308da33d10\n[async_passthrough in line None (index starts from 0)] stdout> Wait for 3 seconds in async function\n[async_passthrough in line None (index starts from 0)] stdout> 0\ntask Task-20 has no start time, which should not happen\n[async_passthrough in line None (index starts from 0)] stdout> 1\n[async_passthrough in line None (index starts from 0)] stdout> 2\nTask async_passthrough has been running for 2 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\ntask Task-20 has no start time, which should not happen\nNode async_passthrough completes.\nExecuting node async_passthrough1. node run id: 468c7a26-3339-4316-b817-acc092e403c7_async_passthrough1_a8712550-83be-4819-af1c-569667ab0182\n[async_passthrough1 in line None (index starts from 0)] stdout> Wait for 3 seconds in async function\n[async_passthrough1 in line None (index starts from 0)] stdout> 0\nExecuting node async_passthrough2. node run id: 468c7a26-3339-4316-b817-acc092e403c7_async_passthrough2_ef114918-729b-46fb-88ec-a7b3d75f07f7\n[async_passthrough2 in line None (index starts from 0)] stdout> Wait for 3 seconds in async function\n[async_passthrough2 in line None (index starts from 0)] stdout> 0\ntask Task-23 has no start time, which should not happen\n[async_passthrough1 in line None (index starts from 0)] stdout> 1\n[async_passthrough2 in line None (index starts from 0)] stdout> 1\ntask Task-23 has no start time, which should not happen\nTask async_passthrough2 has been running for 1 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\nTask async_passthrough1 has been running for 1 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\n[async_passthrough1 in line None (index starts from 0)] stdout> 2\n[async_passthrough2 in line None (index starts from 0)] stdout> 2\ntask Task-23 has no start time, which should not happen\nTask async_passthrough2 has been running for 2 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File 
"/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\nTask async_passthrough1 has been running for 2 seconds, stacktrace:\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 482, in wrapped\n    output = await func(*args, **kwargs)\n  File "/home/runner/work/promptflow/promptflow/src/promptflow/tests/test_configs/flows/async_tools/async_passthrough.py", line 11, in passthrough_str_and_wait\n    await asyncio.sleep(1)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/tasks.py", line 652, in sleep\n    return await future\n\nNode async_passthrough1 completes.\nNode async_passthrough2 completes.\n', err='').out
E        +    and   re.DOTALL = re.DOTALL

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_executor_happypath.py:107: AssertionError
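
Why the match fails: with re.DOTALL the leading ".*" consumes any prefix, but the pattern requires the literal text "running for 1 seconds" for node async_passthrough, while the captured output only contains a "running for 2 seconds" entry for that node (the "1 seconds" entries belong to async_passthrough1 and async_passthrough2, whose longer names do not satisfy the pattern). A minimal repro, with the strings abridged from the capture above:

import re

pattern = r".*Task async_passthrough has been running for 1 seconds, stacktrace:\n.*"
captured_out = (
    "Start executing nodes in async mode.\n"
    "Task async_passthrough has been running for 2 seconds, stacktrace:\n"
    "Task async_passthrough2 has been running for 1 seconds, stacktrace:\n"
)
# "async_passthrough2" does not match "async_passthrough has been", and the only
# entry for async_passthrough itself says "2 seconds", so re.match returns None.
assert re.match(pattern, captured_out, re.DOTALL) is None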

Check warning on line 0 in tests.executor.e2etests.test_logs.TestExecutorLogs

test_executor_logs[print_input_flow] (tests.executor.e2etests.test_logs.TestExecutorLogs) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 0s]
Raw output
AssertionError: assert 6 == 53
 +  where 53 = count_lines('/tmp/tmpypd5jx78/test_flow_run.log')
self = <executor.e2etests.test_logs.TestExecutorLogs object at 0x7f816473f760>
folder_name = 'print_input_flow'

    @pytest.mark.parametrize(
        "folder_name",
        TEST_LOGS_FLOW,
    )
    def test_executor_logs(self, folder_name):
        logs_directory = Path(mkdtemp())
        flow_run_log_path = str(logs_directory / "test_flow_run.log")
        bulk_run_log_path = str(logs_directory / "test_bulk_run.log")
    
        # flow run: test exec_line
        with LogContext(flow_run_log_path):
            executor = FlowExecutor.create(get_yaml_file(folder_name), {})
            executor.exec_line({"text": "line_text"})
            log_content = load_content(flow_run_log_path)
            loggers_name_list = ["execution", "execution.flow"]
            assert all(logger in log_content for logger in loggers_name_list)
>           assert 6 == count_lines(flow_run_log_path)
E           AssertionError: assert 6 == 53
E            +  where 53 = count_lines('/tmp/tmpypd5jx78/test_flow_run.log')

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_logs.py:99: AssertionError
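
count_lines is a helper from the executor test utilities; a plausible sketch matching its usage here (hypothetical, not the actual implementation):

def count_lines(file_path):
    # Hypothetical sketch: count the lines written to a log file. The failure
    # above means the flow run log received 53 lines where the test expects
    # exactly 6, i.e. far more records than a print_input_flow run should emit.
    with open(file_path) as f:
        return len(f.readlines())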

Check warning on line 0 in tests.executor.e2etests.test_logs.TestExecutorLogs

test_long_run_log (tests.executor.e2etests.test_logs.TestExecutorLogs) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 1m 1s]
Raw output
AssertionError: Got 52 lines in /tmp/tmpf40ghcne/flow.log, expected 14.
assert 52 == 14
self = <executor.e2etests.test_logs.TestExecutorLogs object at 0x7f8164756580>

    def test_long_run_log(self):
        # Test long running tasks with log
        os.environ["PF_LONG_RUNNING_LOGGING_INTERVAL"] = "60"
        target_texts = [
            "INFO     Start executing nodes in thread pool mode.",
            "INFO     Start to run 1 nodes with concurrency level 16.",
            "INFO     Executing node long_run_node.",
            "INFO     Using value of PF_LONG_RUNNING_LOGGING_INTERVAL in environment variable",
            "WARNING  long_run_node in line 0 has been running for 60 seconds, stacktrace of thread",
            "in wrapped",
            "output = func(*args, **kwargs)",
            ", line 16, in long_run_func",
            "return f2()",
            ", line 11, in f2",
            "return f1()",
            ", line 6, in f1",
            "time.sleep(61)",
            "INFO     Node long_run_node completes.",
        ]
>       self.assert_long_run_log(target_texts)

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_logs.py:187: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <executor.e2etests.test_logs.TestExecutorLogs object at 0x7f8164756580>
target_texts = ['INFO     Start executing nodes in thread pool mode.', 'INFO     Start to run 1 nodes with concurrency level 16.', 'I...variable', 'WARNING  long_run_node in line 0 has been running for 60 seconds, stacktrace of thread', 'in wrapped', ...]

    def assert_long_run_log(self, target_texts):
        executor = FlowExecutor.create(get_yaml_file("long_run"), {})
        file_path = Path(mkdtemp()) / "flow.log"
        with LogContext(file_path):
            flow_result = executor.exec_line({}, index=0)
        node_run = flow_result.node_run_infos["long_run_node"]
        assert node_run.status == Status.Completed
        with open(file_path) as fin:
            lines = fin.readlines()
        lines = [line for line in lines if line.strip()]
        msg = f"Got {len(lines)} lines in {file_path}, expected {len(target_texts)}."
>       assert len(lines) == len(target_texts), msg
E       AssertionError: Got 52 lines in /tmp/tmpf40ghcne/flow.log, expected 14.
E       assert 52 == 14
E        +  where 52 = len(['2024-05-15 12:58:18 +0000    2535 execution.flow     INFO     Start executing nodes in thread pool mode.\n', '2024-05-15 12:58:18 +0000    2535 execution.flow     INFO     Start to run 1 nodes with concurrency level 16.\n', '2024-05-15 12:58:18 +0000    2535 execution.flow     INFO     Executing node long_run_node. node run id: fdb1b3aa-cbda-4baa-94fb-68ff2d3dbafd_long_run_node_0\n', '2024-05-15 12:58:18 +0000    2535 execution.flow     INFO     Using value of PF_LONG_RUNNING_LOGGING_INTERVAL in environment variable as logging interval: 60\n', '2024-05-15 12:58:18 +0000    2535 execution          WARNING  [long_run_node in line 0 (index starts from 0)] stderr> --- Logging error ---\n', '2024-05-15 12:58:18 +0000    2535 execution          WARNING  [long_run_node in line 0 (index starts from 0)] stderr> Traceback (most recent call last):\n', ...])
E        +  and   14 = len(['INFO     Start executing nodes in thread pool mode.', 'INFO     Start to run 1 nodes with concurrency level 16.', 'INFO     Executing node long_run_node.', 'INFO     Using value of PF_LONG_RUNNING_LOGGING_INTERVAL in environment variable', 'WARNING  long_run_node in line 0 has been running for 60 seconds, stacktrace of thread', 'in wrapped', ...])

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_logs.py:210: AssertionError
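
The source shown above stops at the line-count assert; presumably the method then matches each expected fragment against the corresponding log line. A sketch of that follow-up check (an assumption about the test's continuation, not the actual source):

# Hypothetical continuation of assert_long_run_log: once the counts agree,
# pair each non-empty log line with its expected fragment in order.
for line, target in zip(lines, target_texts):
    assert target in line, f"Expected {target!r} in log line: {line!r}"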

Check notice on line 0 in .github

6 skipped tests found

There are 6 skipped tests; see "Raw output" for the full list.
Raw output
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_package_tool_with_conn[assistant-with-package-tool]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_tool_with_connection[assistant-tool-with-connection-line_input0]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_with_image[food-calorie-assistant-line_input0]
tests.executor.e2etests.test_execution_server.TestExecutionServer ‑ test_execution_flow_with_nan_inf
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[package_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[package_tools-search_by_text-flow_inputs5-None]

Check notice on line 0 in .github

246 tests found

There are 246 tests; see "Raw output" for the full list.
Raw output
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_aggregate_bypassed_nodes
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_all_nodes_bypassed
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_batch_run_activate
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[activate_condition_always_met]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[activate_with_no_inputs]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[all_depedencies_bypassed_with_activate_met]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[conditional_flow_with_activate]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_invalid_activate_config
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_package_tool_with_conn[assistant-with-package-tool]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_tool_with_connection[assistant-tool-with-connection-line_input0]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_with_image[food-calorie-assistant-line_input0]
tests.executor.e2etests.test_async.TestAsync ‑ test_exec_line_async[async_tools-expected_result0]
tests.executor.e2etests.test_async.TestAsync ‑ test_exec_line_async[async_tools_with_sync_tools-expected_result1]
tests.executor.e2etests.test_async.TestAsync ‑ test_executor_node_concurrency[async_tools-concurrency_levels0-expected_concurrency0]
tests.executor.e2etests.test_async.TestAsync ‑ test_executor_node_concurrency[async_tools_with_sync_tools-concurrency_levels1-expected_concurrency1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_resume[web_classification-web_classification_default_20240207_165606_643000]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_resume_aggregation[classification_accuracy_evaluation-classification_accuracy_evaluation_default_20240208_152402_694000]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_resume_aggregation_with_image[eval_flow_with_image_resume-eval_flow_with_image_resume_default_20240305_111258_103000]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[prompt_tools-inputs_mapping1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[sample_flow_with_functions-inputs_mapping3]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[script_with___file__-inputs_mapping2]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[web_classification_no_variants-inputs_mapping0]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_failure[connection_as_input-input_mapping0-InputNotFound-The input for flow cannot be empty in batch mode. Please review your flow and provide valid inputs.]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_failure[script_with___file__-input_mapping1-EmptyInputsData-Couldn't find any inputs data at the given input paths. Please review the provided path and consider resubmitting.]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_in_existing_loop
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_line_result[simple_aggregation-batch_input0-str]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_line_result[simple_aggregation-batch_input1-str]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_line_result[simple_aggregation-batch_input2-str]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_then_eval
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_with_aggregation_failure
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_storage
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_default_input
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_line_number
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_metrics
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_openai_metrics
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_partial_failure
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs_using_default_value.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_early_stop[chat_group/cloud_batch_runs/chat_group_copilot-chat_group/cloud_batch_runs/chat_group_simulation_error-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_early_stop[chat_group/cloud_batch_runs/chat_group_simulation_error-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_multi_inputs[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-simulation_input.json-copilot_input.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_stop_signal[chat_group/cloud_batch_runs/chat_group_simulation_stop_signal-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[prompt_tools-inputs_mapping1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[sample_flow_with_functions-inputs_mapping3]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[script_with___file__-inputs_mapping2]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[web_classification_no_variants-inputs_mapping0]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[prompt_tools-inputs_mapping1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[sample_flow_with_functions-inputs_mapping3]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[script_with___file__-inputs_mapping2]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[web_classification_no_variants-inputs_mapping0]
tests.executor.e2etests.test_batch_server.TestBatchServer ‑ test_batch_run_with_basic_flow
tests.executor.e2etests.test_batch_server.TestBatchServer ‑ test_batch_run_with_image_flow
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_timeout[one_line_of_bulktest_timeout-3-600-Line 2 execution timeout for exceeding 3 seconds-Status.Completed]
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_timeout[one_line_of_bulktest_timeout-600-5-Line 2 execution timeout for exceeding-Status.Failed]
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_with_line_timeout[one_line_of_bulktest_timeout]
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_with_one_line_timeout[one_line_of_bulktest_timeout]
tests.executor.e2etests.test_concurent_execution.TestConcurrentExecution ‑ test_concurrent_run
tests.executor.e2etests.test_concurent_execution.TestConcurrentExecution ‑ test_concurrent_run_with_exception
tests.executor.e2etests.test_concurent_execution.TestConcurrentExecution ‑ test_linear_run
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch_cancel
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch_execution_error
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch_validation_error
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[basic_callable_class-inputs_mapping2-<lambda>-init_kwargs2]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[callable_class_with_primitive-inputs_mapping3-<lambda>-init_kwargs3]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[dummy_flow_with_trace-inputs_mapping0-<lambda>-None]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[flow_with_dataclass_output-inputs_mapping1-<lambda>-None]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_callable_entry
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_init_multiple_workers[1-<lambda>]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_init_multiple_workers[2-<lambda>]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_invalid_case
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_openai
tests.executor.e2etests.test_execution_server.TestExecutionServer ‑ test_execution_flow_with_nan_inf
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail[async_tools_failures-async_fail-In tool raise_an_exception_async: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail[sync_tools_failures-sync_fail-In tool raise_an_exception: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail_with_exception[async_tools_failures-async_fail-In tool raise_an_exception_async: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail_with_exception[sync_tools_failures-sync_fail-In tool raise_an_exception: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_node_fail[async_tools_failures-async_fail-In tool raise_an_exception_async: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_node_fail[sync_tools_failures-sync_fail-In tool raise_an_exception: dummy_input]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_chat_flow_stream_mode
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_convert_flow_input_types[simple_flow_with_python_tool]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_execute_flow[output-intermediate-True-2]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_execute_flow[output_1-intermediate_1-False-1]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_creation_with_default_input
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_creation_with_default_variants[web_classification]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[async_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[async_tools_with_sync_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[connection_as_input]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[package_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[prompt_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[script_with___file__]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[script_with_import]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[tool_with_assistant_definition]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[web_classification_no_variants]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[connection_as_input-conn_node-None-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[package_tools-search_by_text-flow_inputs5-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[prompt_tools-summarize_text_content_prompt-flow_inputs1-dependency_nodes_outputs1]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with___file__-node1-flow_inputs2-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with___file__-node2-None-dependency_nodes_outputs3]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with___file__-node3-None-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with_import-node1-flow_inputs8-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[simple_aggregation-accuracy-flow_inputs7-dependency_nodes_outputs7]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[web_classification_no_variants-summarize_text_content-flow_inputs0-dependency_nodes_outputs0]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node_with_llm_node
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_for_script_tool_with_init
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_node_overrides
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_flow_with_no_inputs_and_output[no_inputs_outputs]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_long_running_log
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_input_type_invalid[simple_flow_with_python_tool-inputs_mapping0-The input for flow is incorrect. The value for flow input 'num' in line 0 of input data does not match the expected type 'int'. Please change flow input type or adjust the input value in your input data.-InputTypeError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input0-True-Exception]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input1-False-InputTypeError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input2-True-None]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input3-False-None]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type[source_file_missing-flow.dag.python.yaml-ResolveToolError-InvalidSource]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[flow_input_reference_invalid-flow.dag.yaml-InputReferenceNotFound-None-Invalid node definitions found in the flow graph. Node 'divide_num' references flow input 'num_1' which is not defined in your flow. To resolve this issue, please review your flow, ensuring that you either add the missing flow inputs or adjust node reference to the correct flow input.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[flow_llm_with_wrong_conn-flow.dag.yaml-ResolveToolError-InvalidConnectionType-Tool load failed in 'wrong_llm': (InvalidConnectionType) Connection type CustomConnection is not supported for LLM.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[flow_output_reference_invalid-flow.dag.yaml-EmptyOutputReference-None-The output 'content' for flow is incorrect. The reference is not specified for the output 'content' in the flow. To rectify this, ensure that you accurately specify the reference in the flow.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[node_circular_dependency-flow.dag.yaml-NodeCircularDependency-None-Invalid node definitions found in the flow graph. Node circular dependency has been detected among the nodes in your flow. Kindly review the reference relationships for the nodes ['divide_num', 'divide_num_1', 'divide_num_2'] and resolve the circular reference issue in the flow.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[node_reference_not_found-flow.dag.yaml-NodeReferenceNotFound-None-Invalid node definitions found in the flow graph. Node 'divide_num_2' references a non-existent node 'divide_num_3' in your flow. Please review your flow to ensure that the node name is accurately specified.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[nodes_names_duplicated-flow.dag.yaml-DuplicateNodeName-None-Invalid node definitions found in the flow graph. Node with name 'stringify_num' appears more than once in the node definitions in your flow, which is not allowed. To address this issue, please review your flow and either rename or remove nodes with identical names.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[outputs_reference_not_valid-flow.dag.yaml-OutputReferenceNotFound-None-The output 'content' for flow is incorrect. The output 'content' references non-existent node 'another_stringify_num' in your flow. To resolve this issue, please carefully review your flow and correct the reference definition for the output in question.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[outputs_with_invalid_flow_inputs_ref-flow.dag.yaml-OutputReferenceNotFound-None-The output 'num' for flow is incorrect. The output 'num' references non-existent flow input 'num11' in your flow. Please carefully review your flow and correct the reference definition for the output in question.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[source_file_missing-flow.dag.jinja.yaml-ResolveToolError-InvalidSource-Tool load failed in 'summarize_text_content': (InvalidSource) Node source path 'summarize_text_content__variant_1.jinja2' is invalid on node 'summarize_text_content'.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_execution_errors[flow_output_unserializable-line_input0-FlowOutputUnserializable-The output 'content' for flow is incorrect. The output value is not JSON serializable. JSON dump failed: (TypeError) Object of type UnserializableClass is not JSON serializable. Please verify your flow output and make sure the value serializable.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_input_type_invalid[python_tool_with_simple_image_without_default-line_input2-InputNotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_input_type_invalid[simple_flow_with_python_tool-line_input0-InputNotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_input_type_invalid[simple_flow_with_python_tool-line_input1-InputTypeError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_with_duplicated_inputs[llm_tool_with_duplicated_inputs-Invalid inputs {'prompt'} in prompt template of node llm_tool_with_duplicated_inputs. These inputs are duplicated with the parameters of AzureOpenAI.completion.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_with_duplicated_inputs[prompt_tool_with_duplicated_inputs-Invalid inputs {'template'} in prompt template of node prompt_tool_with_duplicated_inputs. These inputs are duplicated with the reserved parameters of prompt tool.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[invalid_connection-ResolveToolError-GetConnectionError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[tool_type_missing-ResolveToolError-NotImplementedError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[wrong_api-ResolveToolError-APINotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[wrong_module-FailedToImportModule-None]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_run_inputs_should_not_saved_to_run_info
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_node_topology_in_order[web_classification_no_variants-web_classification_no_variants_unordered]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root0-simple_flow_with_python_tool-divide_num-line_input0-InputNotFound-The input for node is incorrect. Node input 'num' is not found in input data for node 'divide_num'. Please verify the inputs data for the node.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root1-simple_flow_with_python_tool-divide_num-line_input1-InputTypeError-The input for node is incorrect. Value for input 'num' of node 'divide_num' is not type 'int'. Please review and rectify the input data.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root2-flow_input_reference_invalid-divide_num-line_input2-InputNotFound-The input for node is incorrect. Node input 'num_1' is not found from flow inputs of node 'divide_num'. Please review the node definition in your flow.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root3-simple_flow_with_python_tool-bad_node_name-line_input3-SingleNodeValidationError-Validation failed when attempting to execute the node. Node 'bad_node_name' is not found in flow 'flow.dag.yaml'. Please change node name or correct the flow file.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root4-node_missing_type_or_source-divide_num-line_input4-SingleNodeValidationError-Validation failed when attempting to execute the node. Properties 'source' or 'type' are not specified for Node 'divide_num' in flow 'flow.dag.yaml'. Please make sure these properties are in place and try again.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_valid_flow_run_inpust_should_saved_to_run_info
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[chat_flow_with_image-input_dirs3-inputs_mapping3-answer-2-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[eval_flow_with_composite_image-input_dirs5-inputs_mapping5-output-2-True]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[eval_flow_with_simple_image-input_dirs4-inputs_mapping4-output-2-True]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[python_tool_with_composite_image-input_dirs2-inputs_mapping2-output-2-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[python_tool_with_simple_image-input_dirs0-inputs_mapping0-output-4-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[python_tool_with_simple_image_with_default-input_dirs1-inputs_mapping1-output-4-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_run_then_eval_with_image
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_composite_image-inputs6]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_composite_image-inputs7]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_composite_image-inputs8]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs0]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs1]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs2]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs3]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs4]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs5]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[chat_flow_with_image-inputs9]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_composite_image-inputs6]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_composite_image-inputs7]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_composite_image-inputs8]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_image_nested_api_calls-inputs10]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs0]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs1]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs2]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs3]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs4]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs5]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_composite_image-python_node-flow_inputs2-None]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_composite_image-python_node_2-flow_inputs3-None]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_composite_image-python_node_3-flow_inputs4-dependency_nodes_outputs4]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_simple_image-python_node-flow_inputs0-None]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_simple_image-python_node_2-flow_inputs1-dependency_nodes_outputs1]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[None-False-.]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[None-True-test_storage]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-False-test_path]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-True-test_storage]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_invalid_default_value[python_tool_with_invalid_default_value-python_node_2-flow_inputs0-dependency_nodes_outputs0]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_batch_engine_with_image[chat_flow_with_openai_vision_image-input_dirs1-inputs_mapping1-answer-2]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_batch_engine_with_image[python_tool_with_openai_vision_image-input_dirs0-inputs_mapping0-output-4]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[chat_flow_with_openai_vision_image-inputs6]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs0]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs1]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs2]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs3]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs4]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs5]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image[python_tool_with_openai_vision_image-python_node-flow_inputs0-None]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image[python_tool_with_openai_vision_image-python_node_2-flow_inputs1-dependency_nodes_outputs1]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[None-False-.]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[None-True-test_storage]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-False-test_path]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-True-test_storage]
tests.executor.e2etests.test_langchain.TestLangchain ‑ test_batch_with_langchain[flow_with_langchain_traces-inputs_mapping0]
tests.executor.e2etests.test_langchain.TestLangchain ‑ test_batch_with_langchain[openai_chat_api_flow-inputs_mapping1]
tests.executor.e2etests.test_langchain.TestLangchain ‑ test_batch_with_langchain[openai_completion_api_flow-inputs_mapping2]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_activate_config_log
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_async_log_in_worker_thread
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_batch_run_flow_logs[flow_root_dir0-print_input_flow-8]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_batch_run_flow_logs[flow_root_dir1-print_input_flex-2]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_change_log_format
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_executor_logs[print_input_flow]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_log_progress[simple_flow_with_ten_inputs-inputs_mapping0]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_long_run_log
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_node_logs[print_input_flow]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_node_logs_in_executor_logs[print_input_flow]
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_custom_llm_tool_with_duplicated_inputs
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_executor_package_tool_with_conn
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_executor_package_with_prompt_tool
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_package_tool_execution[wrong_package_in_package_tools-ResolveToolError-PackageToolNotFoundError-Tool load failed in 'search_by_text': (PackageToolNotFoundError) Package tool 'promptflow.tools.serpapi11.SerpAPI.search' is not found in the current environment. All available package tools are: ['promptflow.tools.azure_content_safety.AzureContentSafety.analyze_text', 'promptflow.tools.azure_detect.AzureDetect.get_language'].]
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_package_tool_execution[wrong_tool_in_package_tools-ResolveToolError-PackageToolNotFoundError-Tool load failed in 'search_by_text': (PackageToolNotFoundError) Package tool 'promptflow.tools.serpapi.SerpAPI.search_11' is not found in the current environment. All available package tools are: ['promptflow.tools.azure_content_safety.AzureContentSafety.analyze_text', 'promptflow.tools.azure_detect.AzureDetect.get_language'].]
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_package_tool_load_error[tool_with_init_error-Tool load failed in 'tool_with_init_error': (ToolLoadError) Failed to load package tool 'Tool with init error': (Exception) Tool load error.]
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_dynamic_list
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_enabled_by_value
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_generated_by
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_dynamic_list
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_enabled_by
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_icon
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_schema
tests.executor.e2etests.test_telemetry.TestExecutorTelemetry ‑ test_executor_openai_telemetry
tests.executor.e2etests.test_telemetry.TestExecutorTelemetry ‑ test_executor_openai_telemetry_with_batch_run
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_generator_tools
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[llm_tool-inputs4]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[llm_tool-inputs5]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_chat_api_flow-inputs0]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_chat_api_flow-inputs1]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_completion_api_flow-inputs2]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_completion_api_flow-inputs3]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_flow_with_trace[flow_with_trace]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_flow_with_trace[flow_with_trace_async]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_trace_behavior_with_generator_node[False]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_trace_behavior_with_generator_node[True]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_flow_with_nested_tool
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_flow_with_traced_function
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace[flow_with_trace-inputs0-5]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace[flow_with_trace_async-inputs1-5]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_batch
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_embedding[openai_embedding_api_flow-inputs0-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_embedding[openai_embedding_api_flow_with_token-inputs1-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[flow_with_async_llm_tasks-inputs5-False-6]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[llm_tool-inputs4-False-4]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_chat_api_flow-inputs0-False-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_chat_api_flow-inputs1-True-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_completion_api_flow-inputs2-False-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_completion_api_flow-inputs3-True-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_prompt[llm_tool-inputs0-joke.jinja2]
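
Several of the trace tests listed above (for example `test_executor_generator_tools` and `test_trace_behavior_with_generator_node`) exercise flows whose output is a generator, where the trace span must be ended even if the consumer stops iterating early. The following is a minimal, hypothetical sketch of that pattern — not promptflow's actual implementation — using made-up names (`TracedIterator`, `numbers`) and `print` as a stand-in for real span start/end APIs:

```python
# Hypothetical sketch: a traced iterator that also supports the context
# manager protocol, so the "span" around consumption is ended
# deterministically even when the consumer breaks out of the loop early.
from typing import Iterator


class TracedIterator:
    """Wraps an iterator and records start/end events around consumption."""

    def __init__(self, inner: Iterator[str], name: str) -> None:
        self._inner = inner
        self._name = name
        self._closed = False
        print(f"span start: {self._name}")  # stand-in for real span creation

    def __iter__(self) -> "TracedIterator":
        return self

    def __next__(self) -> str:
        try:
            return next(self._inner)
        except StopIteration:
            # Normal exhaustion also ends the span.
            self.close()
            raise

    def close(self) -> None:
        # End the span exactly once, whether called from __next__ or __exit__.
        if not self._closed:
            self._closed = True
            print(f"span end: {self._name}")

    # Context manager protocol: "with" guarantees close() on early exit.
    def __enter__(self) -> "TracedIterator":
        return self

    def __exit__(self, exc_type, exc, tb) -> None:
        self.close()


def numbers() -> Iterator[str]:
    # Hypothetical generator standing in for a generator-returning tool.
    for i in range(5):
        yield str(i)


if __name__ == "__main__":
    # Early break: without "with", the span would only end when the wrapper
    # is garbage-collected; the context manager makes the end deterministic.
    with TracedIterator(numbers(), "numbers") as it:
        for value in it:
            if value == "2":
                break
```

Under these assumptions, wrapping the generator this way means a consumer that abandons iteration (a `break`, an exception, or an HTTP client disconnecting mid-stream) still produces a closed span, which is the behavior the generator-node trace tests above assert on.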