GitHub Actions / Executor E2E Test Result [users/singankit/pf_eval_deps](https://github.com/microsoft/promptflow/actions/workflows/promptflow-executor-e2e-test.yml?query=branch:users/singankit/pf_eval_deps++) failed May 8, 2024 in 0s.

3 fail, 5 skipped, 234 pass in 5m 12s

242 tests ±0   234 ✅ +1   5m 12s ⏱️ -1s
  1 suites ±0    5 💤 ±0
  1 files  ±0    3 ❌ -1

Results for commit 15206d0. ± Comparison against earlier commit 379ccbc.

Annotations

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch



test_chat_group_batch_run[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json] (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 10s]
Raw output
assert 0 == 3
 +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=3, completed_lines=0, failed_lines=3, node_status={'prompt.completed': 3, 'llm.failed': 3}, start_time=datetime.datetime(2024, 5, 8, 21, 42, 31, 846939), end_time=datetime.datetime(2024, 5, 8, 21, 42, 38, 872806), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.025867), error_summary=ErrorSummary(failed_user_error_lines=3, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like apple\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like apple\\n\\nHere is a chat history you had with the user:\\n{\'role\': 
\'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return 
self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like apple\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just 
answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'aa0cd649b8f330960f68937288e4a397a9886565'", 'stackTrace': 'Traceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=1, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": 
"", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 
58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above 
exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'5de757029c418c7071d28d5d76927f05646298cc'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=2, error={'message': 'OpenAI API 
hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", 
line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'0589b9d6062d9773c6214b3ed87057dcae5c3836'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7fade1f9c880>
simulation_flow = 'chat_group/cloud_batch_runs/chat_group_simulation'
copilot_flow = 'chat_group/cloud_batch_runs/chat_group_copilot', max_turn = 5
input_file_name = 'inputs.json'
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'c2881c848bf048e9b3198a2a64464ef3', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "simulation_flow, copilot_flow, max_turn, input_file_name",
        [
            (
                "chat_group/cloud_batch_runs/chat_group_simulation",
                "chat_group/cloud_batch_runs/chat_group_copilot",
                5,
                "inputs.json",
            ),
            (
                "chat_group/cloud_batch_runs/chat_group_simulation",
                "chat_group/cloud_batch_runs/chat_group_copilot",
                5,
                "inputs_using_default_value.json",
            ),
        ],
    )
    @pytest.mark.asyncio
    async def test_chat_group_batch_run(
        self, simulation_flow, copilot_flow, max_turn, input_file_name, dev_connections
    ):
        simulation_role = ChatRole(
            flow="flow.dag.yaml",  # Use a relative path, similar to the runtime payload
            role="user",
            name="simulator",
            stop_signal="[STOP]",
            working_dir=get_flow_folder(simulation_flow),
            connections=dev_connections,
            inputs_mapping={
                "topic": "${data.topic}",
                "ground_truth": "${data.ground_truth}",
                "history": "${parent.conversation_history}",
            },
        )
        copilot_role = ChatRole(
            flow=get_yaml_file(copilot_flow),
            role="assistant",
            name="copilot",
            stop_signal="[STOP]",
            working_dir=get_flow_folder(copilot_flow),
            connections=dev_connections,
            inputs_mapping={"question": "${data.question}", "conversation_history": "${parent.conversation_history}"},
        )
        input_dirs = {"data": get_flow_inputs_file("chat_group/cloud_batch_runs", file_name=input_file_name)}
        output_dir = Path(mkdtemp())
        mem_run_storage = MemoryRunStorage()
    
        # Register the Python proxy since the current Python proxy cannot execute a single line
        ProxyFactory.register_executor("python", SingleLinePythonExecutorProxy)
        chat_group_orchestrator_proxy = await ChatGroupOrchestratorProxy.create(
            flow_file="", chat_group_roles=[simulation_role, copilot_role], max_turn=max_turn
        )
        batch_engine = BatchEngine(flow_file=None, working_dir=get_flow_folder("chat_group"), storage=mem_run_storage)
        batch_result = batch_engine.run(input_dirs, {}, output_dir, executor_proxy=chat_group_orchestrator_proxy)
    
        nlines = 3
        assert batch_result.total_lines == nlines
>       assert batch_result.completed_lines == nlines
E       assert 0 == 3
"", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 
58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above 
exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'5de757029c418c7071d28d5d76927f05646298cc'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=2, error={'message': 'OpenAI API 
hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", 
line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'0589b9d6062d9773c6214b3ed87057dcae5c3836'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:577: AssertionError
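All three lines fail the same way: the recording layer hashes the outgoing OpenAI request, looks the hash up in the shelve-backed cache, and converts the resulting `KeyError` into a `RecordItemMissingException`. A minimal sketch of that lookup chain, with class and method names taken from the traceback but internals (hashing scheme, storage shape) assumed for illustration:

```python
# Sketch of the cache-miss path seen in the traceback above.
# RecordStorage/get_record/file_records_pointer names come from the trace;
# the sha1-over-JSON hashing and dict storage are assumptions, not the
# actual promptflow-recording implementation.
import hashlib
import json


class RecordItemMissingException(Exception):
    """Raised when a request payload has no recorded response."""


class RecordCache:
    def __init__(self, records):
        # Maps payload hashes to recorded responses (shelve file in the real code).
        self.file_records_pointer = records

    @staticmethod
    def _hash(input_dict):
        payload = json.dumps(input_dict, sort_keys=True)
        return hashlib.sha1(payload.encode("utf-8")).hexdigest()

    def get_record(self, input_dict):
        hash_value = self._hash(input_dict)
        try:
            return self.file_records_pointer[hash_value]
        except KeyError as e:
            # Mirrors the KeyError -> RecordItemMissingException chain in the trace.
            raise RecordItemMissingException(
                f"Record item not found. values: {input_dict}"
            ) from e
```

Under this model, the failures mean the replayed prompt text no longer matches any recorded payload byte-for-byte (e.g. the system-message wording changed), so the hash misses and every LLM node errors out as a user error.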

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch



test_chat_group_batch_run[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs_using_default_value.json] (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 10s]
Raw output
assert 0 == 3
 +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=3, completed_lines=0, failed_lines=3, node_status={'prompt.completed': 3, 'llm.failed': 3}, start_time=datetime.datetime(2024, 5, 8, 21, 42, 42, 408972), end_time=datetime.datetime(2024, 5, 8, 21, 42, 49, 431300), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.022328), error_summary=ErrorSummary(failed_user_error_lines=3, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': 
\'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is 
needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'c0143cb38a23d9802bece8ad722c6900ee87b293'", 'stackTrace': 'Traceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=1, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", 
"_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, 
in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above 
exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'f0eb8cce13cdebd68ff0f9239a81742978f35733'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=2, error={'message': 'OpenAI API 
hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, 
aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'fb3b6ddda40e3130abcd4d412fe1d0dfb894ca62'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7fade1f9c490>
simulation_flow = 'chat_group/cloud_batch_runs/chat_group_simulation'
copilot_flow = 'chat_group/cloud_batch_runs/chat_group_copilot', max_turn = 5
input_file_name = 'inputs_using_default_value.json'
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'c2881c848bf048e9b3198a2a64464ef3', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "simulation_flow, copilot_flow, max_turn, input_file_name",
        [
            (
                "chat_group/cloud_batch_runs/chat_group_simulation",
                "chat_group/cloud_batch_runs/chat_group_copilot",
                5,
                "inputs.json",
            ),
            (
                "chat_group/cloud_batch_runs/chat_group_simulation",
                "chat_group/cloud_batch_runs/chat_group_copilot",
                5,
                "inputs_using_default_value.json",
            ),
        ],
    )
    @pytest.mark.asyncio
    async def test_chat_group_batch_run(
        self, simulation_flow, copilot_flow, max_turn, input_file_name, dev_connections
    ):
        simulation_role = ChatRole(
            flow="flow.dag.yaml",  # Use a relative path, similar to the runtime payload
            role="user",
            name="simulator",
            stop_signal="[STOP]",
            working_dir=get_flow_folder(simulation_flow),
            connections=dev_connections,
            inputs_mapping={
                "topic": "${data.topic}",
                "ground_truth": "${data.ground_truth}",
                "history": "${parent.conversation_history}",
            },
        )
        copilot_role = ChatRole(
            flow=get_yaml_file(copilot_flow),
            role="assistant",
            name="copilot",
            stop_signal="[STOP]",
            working_dir=get_flow_folder(copilot_flow),
            connections=dev_connections,
            inputs_mapping={"question": "${data.question}", "conversation_history": "${parent.conversation_history}"},
        )
        input_dirs = {"data": get_flow_inputs_file("chat_group/cloud_batch_runs", file_name=input_file_name)}
        output_dir = Path(mkdtemp())
        mem_run_storage = MemoryRunStorage()
    
        # Register the single-line python proxy, since the default python proxy cannot execute a single line
        ProxyFactory.register_executor("python", SingleLinePythonExecutorProxy)
        chat_group_orchestrator_proxy = await ChatGroupOrchestratorProxy.create(
            flow_file="", chat_group_roles=[simulation_role, copilot_role], max_turn=max_turn
        )
        batch_engine = BatchEngine(flow_file=None, working_dir=get_flow_folder("chat_group"), storage=mem_run_storage)
        batch_result = batch_engine.run(input_dirs, {}, output_dir, executor_proxy=chat_group_orchestrator_proxy)
    
        nlines = 3
        assert batch_result.total_lines == nlines
>       assert batch_result.completed_lines == nlines
E       assert 0 == 3
E        +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=3, completed_lines=0, failed_lines=3, node_status={'prompt.completed': 3, 'llm.failed': 3}, start_time=datetime.datetime(2024, 5, 8, 21, 42, 42, 408972), end_time=datetime.datetime(2024, 5, 8, 21, 42, 49, 431300), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=7.022328), error_summary=ErrorSummary(failed_user_error_lines=3, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': 
\'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is 
needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'c0143cb38a23d9802bece8ad722c6900ee87b293'", 'stackTrace': 'Traceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=1, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", 
"_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, 
in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above 
exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'f0eb8cce13cdebd68ff0f9239a81742978f35733'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=2, error={'message': 'OpenAI API 
hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, 
aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nnan\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'nan\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'fb3b6ddda40e3130abcd4d412fe1d0dfb894ca62'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:577: AssertionError
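The traceback above bottoms out in a consistent pattern: promptflow-recording hashes the OpenAI request payload, looks the hash up in a shelve-backed cache, and a `KeyError` on that hash surfaces as `RecordItemMissingException`. The sketch below illustrates that lookup shape only; the class and method names are simplified assumptions inspired by the `record_storage.py` frames, not the real implementation.

```python
# Minimal sketch of the recording-cache lookup pattern seen in the traceback.
# RecordCache / RecordItemMissingException mirror names from promptflow's
# record_storage.py frames but are simplified assumptions.
import hashlib
import json


class RecordItemMissingException(Exception):
    """Raised when a request payload has no recorded response in the cache."""


class RecordCache:
    def __init__(self, records):
        # records maps sha1(canonical-json-of-payload) -> recorded response
        self._records = records

    @staticmethod
    def _hash(input_dict):
        # Hash a canonical JSON form of the request so identical payloads
        # (model, messages, temperature, ...) map to the same key.
        payload = json.dumps(input_dict, sort_keys=True).encode("utf-8")
        return hashlib.sha1(payload).hexdigest()

    def get_record(self, input_dict):
        hash_value = self._hash(input_dict)
        try:
            return self._records[hash_value]
        except KeyError:
            # Mirrors the failure mode in the log: a KeyError on the hash
            # is re-raised as a missing-record error carrying the payload.
            raise RecordItemMissingException(
                f"Record item not found.\nvalues: {json.dumps(input_dict)}"
            )
```

Under this model, any drift in the request payload (such as the `nan` values visible in the prompt above) produces a new hash with no matching entry in `executor_node_cache.shelve`, which is exactly the failure reported.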

Check warning on line 0 in tests.executor.e2etests.test_batch_engine.TestBatch



test_chat_group_batch_run_multi_inputs[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-simulation_input.json-copilot_input.json] (tests.executor.e2etests.test_batch_engine.TestBatch) failed

artifacts/Test Results (Python 3.9) (OS ubuntu-latest)/test-results.xml [took 11s]
Raw output
assert 0 == 3
 +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=3, completed_lines=0, failed_lines=3, node_status={'prompt.completed': 3, 'llm.failed': 3}, start_time=datetime.datetime(2024, 5, 8, 21, 42, 53, 31409), end_time=datetime.datetime(2024, 5, 8, 21, 43, 1, 60934), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=8.029525), error_summary=ErrorSummary(failed_user_error_lines=3, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like apple\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like apple\\n\\nHere is a chat history you had with the user:\\n{\'role\': 
\'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return 
self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like apple\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just 
answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'aa0cd649b8f330960f68937288e4a397a9886565'", 'stackTrace': 'Traceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=1, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": 
"", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 
58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above 
exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'5de757029c418c7071d28d5d76927f05646298cc'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=2, error={'message': 'OpenAI API 
hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", 
line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'0589b9d6062d9773c6214b3ed87057dcae5c3836'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines
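The `_flow_nodes_scheduler.py` frames in these stack traces show why a single node failure fails the entire line: node functions run in a thread pool, and `future.result()` re-raises the worker thread's exception in the collecting thread. A rough sketch of that pattern, using illustrative names (`run_nodes`, `node_funcs`) rather than the real promptflow scheduler API:

```python
# Thread-pool node execution, sketched after the _flow_nodes_scheduler.py
# frames in the traceback. Names here are illustrative, not promptflow APIs.
from concurrent.futures import ThreadPoolExecutor, as_completed


def run_nodes(node_funcs):
    """Run each node function in a worker thread and collect its output."""
    results = {}
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(func): name for name, func in node_funcs.items()}
        for future in as_completed(futures):
            # result() re-raises any exception from the worker thread here,
            # which is how the LLMError above propagates out of
            # _collect_outputs and marks the whole line as failed.
            results[futures[future]] = future.result()
    return results
```

This matches the `node_status={'prompt.completed': 3, 'llm.failed': 3}` summary: the `prompt` node completes in its thread, but the `llm` node's exception surfaces at collection time and fails the line.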
self = <executor.e2etests.test_batch_engine.TestBatch object at 0x7fade1f513a0>
simulation_flow = 'chat_group/cloud_batch_runs/chat_group_simulation'
copilot_flow = 'chat_group/cloud_batch_runs/chat_group_copilot', max_turn = 5
simulation_input_file_name = 'simulation_input.json'
copilot_input_file_name = 'copilot_input.json'
dev_connections = {'aoai_assistant_connection': {'module': 'promptflow.connections', 'name': 'aoai_assistant_connection', 'type': 'Azure...ai.azure.com/', 'api_key': 'c2881c848bf048e9b3198a2a64464ef3', 'api_type': 'azure', 'api_version': '2024-02-01'}}, ...}

    @pytest.mark.parametrize(
        "simulation_flow, copilot_flow, max_turn, simulation_input_file_name, copilot_input_file_name",
        [
            (
                "chat_group/cloud_batch_runs/chat_group_simulation",
                "chat_group/cloud_batch_runs/chat_group_copilot",
                5,
                "simulation_input.json",
                "copilot_input.json",
            )
        ],
    )
    @pytest.mark.asyncio
    async def test_chat_group_batch_run_multi_inputs(
        self,
        simulation_flow,
        copilot_flow,
        max_turn,
        simulation_input_file_name,
        copilot_input_file_name,
        dev_connections,
    ):
        simulation_role = ChatRole(
            flow=get_yaml_file(simulation_flow),
            role="user",
            name="simulator",
            stop_signal="[STOP]",
            working_dir=get_flow_folder(simulation_flow),
            connections=dev_connections,
            inputs_mapping={
                "topic": "${simulation.topic}",
                "ground_truth": "${simulation.ground_truth}",
                "history": "${parent.conversation_history}",
            },
        )
        copilot_role = ChatRole(
            flow=get_yaml_file(copilot_flow),
            role="assistant",
            name="copilot",
            stop_signal="[STOP]",
            working_dir=get_flow_folder(copilot_flow),
            connections=dev_connections,
            inputs_mapping={
                "question": "${copilot.question}",
                "conversation_history": "${parent.conversation_history}",
            },
        )
        input_dirs = {
            "simulation": get_flow_inputs_file("chat_group/cloud_batch_runs", file_name=simulation_input_file_name),
            "copilot": get_flow_inputs_file("chat_group/cloud_batch_runs", file_name=copilot_input_file_name),
        }
        output_dir = Path(mkdtemp())
        mem_run_storage = MemoryRunStorage()
    
        # register python proxy since current python proxy cannot execute single line
        ProxyFactory.register_executor("python", SingleLinePythonExecutorProxy)
        chat_group_orchestrator_proxy = await ChatGroupOrchestratorProxy.create(
            flow_file="", chat_group_roles=[simulation_role, copilot_role], max_turn=max_turn
        )
        batchEngine = BatchEngine(flow_file=None, working_dir=get_flow_folder("chat_group"), storage=mem_run_storage)
        batch_result = batchEngine.run(input_dirs, {}, output_dir, executor_proxy=chat_group_orchestrator_proxy)
    
        nlines = 3
        assert batch_result.total_lines == nlines
>       assert batch_result.completed_lines == nlines
E       assert 0 == 3
E        +  where 0 = BatchResult(status=<Status.Completed: 'Completed'>, total_lines=3, completed_lines=0, failed_lines=3, node_status={'prompt.completed': 3, 'llm.failed': 3}, start_time=datetime.datetime(2024, 5, 8, 21, 42, 53, 31409), end_time=datetime.datetime(2024, 5, 8, 21, 43, 1, 60934), metrics={}, system_metrics=SystemMetrics(total_tokens=0, prompt_tokens=0, completion_tokens=0, duration=8.029525), error_summary=ErrorSummary(failed_user_error_lines=3, failed_system_error_lines=0, error_list=[LineError(line_number=0, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like apple\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like apple\\n\\nHere is a chat history you had with the 
user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in 
result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like apple\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'fruit\'}\\n\\nNow please continue the conversation 
with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'aa0cd649b8f330960f68937288e4a397a9886565'", 'stackTrace': 'Traceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=1, error={'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": 
"", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 
58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nDo you like basketball\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'sport\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above 
exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'5de757029c418c7071d28d5d76927f05646298cc'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}}), LineError(line_number=2, error={'message': 'OpenAI API 
hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'messageFormat': '', 'messageParameters': {}, 'referenceCode': 'Tool/promptflow.tools.aoai', 'code': 'UserError', 'innerError': {'code': 'LLMError', 'innerError': None}, 'debugInfo': {'type': 'LLMError', 'message': 'OpenAI API hits exception: RecordItemMissingException: Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1003, in _exec\n    output, aggregation_inputs = self._exec_inner_with_trace(\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 908, in _exec_inner_with_trace\n    output, nodes_outputs = self._traverse_nodes(inputs, context)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1187, in _traverse_nodes\n    nodes_outputs, bypassed_nodes = self._submit_to_scheduler(context, inputs, batch_nodes)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/flow_executor.py", line 1242, in _submit_to_scheduler\n    return scheduler.execute(self._line_timeout_sec)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 131, in execute\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 113, in execute\n    self._dag_manager.complete_nodes(self._collect_outputs(completed_futures))\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", line 160, in _collect_outputs\n    each_node_result = each_future.result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 439, in result\n    return self.__get_result()\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result\n    raise self._exception\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/concurrent/futures/thread.py", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/executor/_flow_nodes_scheduler.py", 
line 181, in _exec_single_node_in_thread\n    result = context.invoke_tool(node, f, kwargs=kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 90, in invoke_tool\n    result = self._invoke_tool_inner(node, f, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 201, in _invoke_tool_inner\n    raise e\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/_core/flow_execution_context.py", line 182, in _invoke_tool_inner\n    return f(**kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 624, in wrapper\n    raise LLMError(message=error_message)\n', 'innerException': {'type': 'RecordItemMissingException', 'message': 'Record item not found in file /home/runner/work/promptflow/promptflow/src/promptflow-recording/recordings/local/executor_node_cache.shelve.\nvalues: {"model": "gpt-35-turbo", "messages": [{"role": "system", "content": "You are an assistant that can answer philosophical questions.\\n\\nHere is the initial message from user:\\nWhat\'s the weather today\\n\\nHere is a chat history you had with the user:\\n{\'role\': \'user\', \'output\': \'weather\'}\\n\\nNow please continue the conversation with the user, just answer the question, no extra formatting is needed:"}, {"role": "assistant", "content": ""}], "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": false, "user": "", "_args": [], "_func": "Completions.create"}\n', 'stackTrace': '\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File 
"/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/common.py", line 543, in wrapper\n    return func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tools/aoai.py", line 165, in chat\n    completion = self._client.chat.completions.create(**params)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_integrations/_openai_injector.py", line 88, in wrapper\n    return f(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/tracing/_trace.py", line 470, in wrapped\n    output = func(*args, **kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/openai_inject_recording.py", line 23, in wrapper\n    return call_func(f, args, kwargs)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/mock_tool.py", line 68, in call_func\n    return RecordStorage.get_instance().get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 401, in get_record\n    return self.record_cache.get_record(input_dict)\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 291, in get_record\n    raise RecordItemMissingException(\n', 'innerException': {'type': 'KeyError', 'message': "'0589b9d6062d9773c6214b3ed87057dcae5c3836'", 'stackTrace': 'Traceback (most recent call last):\n  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/promptflow/recording/local/record_storage.py", line 289, in get_record\n    line_record_pointer = self.file_records_pointer[hash_value]\n', 'innerException': None}}}})], aggr_error_dict={}, batch_error_dict=None)).completed_lines

/home/runner/work/promptflow/promptflow/src/promptflow/tests/executor/e2etests/test_batch_engine.py:665: AssertionError

Check notice on line 0 in .github


5 skipped tests found

There are 5 skipped tests, see "Raw output" for the full list of skipped tests.
Raw output
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_package_tool_with_conn[assistant-with-package-tool]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_tool_with_connection[assistant-tool-with-connection-line_input0]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_with_image[food-calorie-assistant-line_input0]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[package_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[package_tools-search_by_text-flow_inputs5-None]

Check notice on line 0 in .github


242 tests found

There are 242 tests, see "Raw output" for the full list of tests.
Raw output
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_aggregate_bypassed_nodes
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_all_nodes_bypassed
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_batch_run_activate
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[activate_condition_always_met]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[activate_with_no_inputs]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[all_depedencies_bypassed_with_activate_met]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_flow_run_activate[conditional_flow_with_activate]
tests.executor.e2etests.test_activate.TestExecutorActivate ‑ test_invalid_activate_config
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_package_tool_with_conn[assistant-with-package-tool]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_tool_with_connection[assistant-tool-with-connection-line_input0]
tests.executor.e2etests.test_assistant.TestAssistant ‑ test_assistant_with_image[food-calorie-assistant-line_input0]
tests.executor.e2etests.test_async.TestAsync ‑ test_exec_line_async[async_tools-expected_result0]
tests.executor.e2etests.test_async.TestAsync ‑ test_exec_line_async[async_tools_with_sync_tools-expected_result1]
tests.executor.e2etests.test_async.TestAsync ‑ test_executor_node_concurrency[async_tools-concurrency_levels0-expected_concurrency0]
tests.executor.e2etests.test_async.TestAsync ‑ test_executor_node_concurrency[async_tools_with_sync_tools-concurrency_levels1-expected_concurrency1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_resume[web_classification-web_classification_default_20240207_165606_643000]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_resume_aggregation[classification_accuracy_evaluation-classification_accuracy_evaluation_default_20240208_152402_694000]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_resume_aggregation_with_image[eval_flow_with_image_resume-eval_flow_with_image_resume_default_20240305_111258_103000]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[prompt_tools-inputs_mapping1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[sample_flow_with_functions-inputs_mapping3]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[script_with___file__-inputs_mapping2]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run[web_classification_no_variants-inputs_mapping0]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_failure[connection_as_input-input_mapping0-InputNotFound-The input for flow cannot be empty in batch mode. Please review your flow and provide valid inputs.]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_failure[script_with___file__-input_mapping1-EmptyInputsData-Couldn't find any inputs data at the given input paths. Please review the provided path and consider resubmitting.]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_in_existing_loop
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_line_result[simple_aggregation-batch_input0-str]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_line_result[simple_aggregation-batch_input1-str]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_line_result[simple_aggregation-batch_input2-str]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_then_eval
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_run_with_aggregation_failure
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_storage
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_default_input
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_line_number
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_metrics
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_openai_metrics
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_batch_with_partial_failure
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs_using_default_value.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_early_stop[chat_group/cloud_batch_runs/chat_group_copilot-chat_group/cloud_batch_runs/chat_group_simulation_error-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_early_stop[chat_group/cloud_batch_runs/chat_group_simulation_error-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_multi_inputs[chat_group/cloud_batch_runs/chat_group_simulation-chat_group/cloud_batch_runs/chat_group_copilot-5-simulation_input.json-copilot_input.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_chat_group_batch_run_stop_signal[chat_group/cloud_batch_runs/chat_group_simulation_stop_signal-chat_group/cloud_batch_runs/chat_group_copilot-5-inputs.json]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[prompt_tools-inputs_mapping1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[sample_flow_with_functions-inputs_mapping3]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[script_with___file__-inputs_mapping2]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_forkserver_mode_batch_run[web_classification_no_variants-inputs_mapping0]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[prompt_tools-inputs_mapping1]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[sample_flow_with_functions-inputs_mapping3]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[script_with___file__-inputs_mapping2]
tests.executor.e2etests.test_batch_engine.TestBatch ‑ test_spawn_mode_batch_run[web_classification_no_variants-inputs_mapping0]
tests.executor.e2etests.test_batch_server.TestBatchServer ‑ test_batch_run_in_server_mode
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_timeout[one_line_of_bulktest_timeout-3-600-Line 2 execution timeout for exceeding 3 seconds-Status.Completed]
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_timeout[one_line_of_bulktest_timeout-600-5-Line 2 execution timeout for exceeding-Status.Failed]
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_with_line_timeout[one_line_of_bulktest_timeout]
tests.executor.e2etests.test_batch_timeout.TestBatchTimeout ‑ test_batch_with_one_line_timeout[one_line_of_bulktest_timeout]
tests.executor.e2etests.test_concurent_execution.TestConcurrentExecution ‑ test_concurrent_run
tests.executor.e2etests.test_concurent_execution.TestConcurrentExecution ‑ test_concurrent_run_with_exception
tests.executor.e2etests.test_concurent_execution.TestConcurrentExecution ‑ test_linear_run
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch_cancel
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch_execution_error
tests.executor.e2etests.test_csharp_executor_proxy.TestCSharpExecutorProxy ‑ test_batch_validation_error
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[basic_callable_class-inputs_mapping2-<lambda>-init_kwargs2]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[callable_class_with_primitive-inputs_mapping3-<lambda>-init_kwargs3]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[dummy_flow_with_trace-inputs_mapping0-<lambda>-None]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run[flow_with_dataclass_output-inputs_mapping1-<lambda>-None]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_callable_entry
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_init_multiple_workers[1-<lambda>]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_init_multiple_workers[2-<lambda>]
tests.executor.e2etests.test_eager_flow.TestEagerFlow ‑ test_batch_run_with_invalid_case
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail[async_tools_failures-async_fail-In tool raise_an_exception_async: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail[sync_tools_failures-sync_fail-In tool raise_an_exception: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail_with_exception[async_tools_failures-async_fail-In tool raise_an_exception_async: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_line_fail_with_exception[sync_tools_failures-sync_fail-In tool raise_an_exception: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_node_fail[async_tools_failures-async_fail-In tool raise_an_exception_async: dummy_input]
tests.executor.e2etests.test_executor_execution_failures.TestExecutorFailures ‑ test_executor_exec_node_fail[sync_tools_failures-sync_fail-In tool raise_an_exception: dummy_input]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_chat_flow_stream_mode
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_convert_flow_input_types[simple_flow_with_python_tool]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_execute_flow[output-intermediate-True-2]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_execute_flow[output_1-intermediate_1-False-1]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_creation_with_default_input
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_creation_with_default_variants[web_classification]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[async_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[async_tools_with_sync_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[connection_as_input]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[package_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[prompt_tools]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[script_with___file__]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[script_with_import]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[tool_with_assistant_definition]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_line[web_classification_no_variants]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[connection_as_input-conn_node-None-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[package_tools-search_by_text-flow_inputs5-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[prompt_tools-summarize_text_content_prompt-flow_inputs1-dependency_nodes_outputs1]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with___file__-node1-flow_inputs2-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with___file__-node2-None-dependency_nodes_outputs3]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with___file__-node3-None-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[script_with_import-node1-flow_inputs8-None]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[simple_aggregation-accuracy-flow_inputs7-dependency_nodes_outputs7]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node[web_classification_no_variants-summarize_text_content-flow_inputs0-dependency_nodes_outputs0]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_exec_node_with_llm_node
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_for_script_tool_with_init
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_executor_node_overrides
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_flow_with_no_inputs_and_output[no_inputs_outputs]
tests.executor.e2etests.test_executor_happypath.TestExecutor ‑ test_long_running_log
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_input_type_invalid[simple_flow_with_python_tool-inputs_mapping0-The input for flow is incorrect. The value for flow input 'num' in line 0 of input data does not match the expected type 'int'. Please change flow input type or adjust the input value in your input data.-InputTypeError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input0-True-Exception]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input1-False-InputTypeError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input2-True-None]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_batch_run_raise_on_line_failure[simple_flow_with_python_tool-batch_input3-False-None]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type[source_file_missing-flow.dag.python.yaml-ResolveToolError-InvalidSource]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[flow_input_reference_invalid-flow.dag.yaml-InputReferenceNotFound-None-Invalid node definitions found in the flow graph. Node 'divide_num' references flow input 'num_1' which is not defined in your flow. To resolve this issue, please review your flow, ensuring that you either add the missing flow inputs or adjust node reference to the correct flow input.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[flow_llm_with_wrong_conn-flow.dag.yaml-ResolveToolError-InvalidConnectionType-Tool load failed in 'wrong_llm': (InvalidConnectionType) Connection type CustomConnection is not supported for LLM.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[flow_output_reference_invalid-flow.dag.yaml-EmptyOutputReference-None-The output 'content' for flow is incorrect. The reference is not specified for the output 'content' in the flow. To rectify this, ensure that you accurately specify the reference in the flow.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[node_circular_dependency-flow.dag.yaml-NodeCircularDependency-None-Invalid node definitions found in the flow graph. Node circular dependency has been detected among the nodes in your flow. Kindly review the reference relationships for the nodes ['divide_num', 'divide_num_1', 'divide_num_2'] and resolve the circular reference issue in the flow.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[node_reference_not_found-flow.dag.yaml-NodeReferenceNotFound-None-Invalid node definitions found in the flow graph. Node 'divide_num_2' references a non-existent node 'divide_num_3' in your flow. Please review your flow to ensure that the node name is accurately specified.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[nodes_names_duplicated-flow.dag.yaml-DuplicateNodeName-None-Invalid node definitions found in the flow graph. Node with name 'stringify_num' appears more than once in the node definitions in your flow, which is not allowed. To address this issue, please review your flow and either rename or remove nodes with identical names.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[outputs_reference_not_valid-flow.dag.yaml-OutputReferenceNotFound-None-The output 'content' for flow is incorrect. The output 'content' references non-existent node 'another_stringify_num' in your flow. To resolve this issue, please carefully review your flow and correct the reference definition for the output in question.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[outputs_with_invalid_flow_inputs_ref-flow.dag.yaml-OutputReferenceNotFound-None-The output 'num' for flow is incorrect. The output 'num' references non-existent flow input 'num11' in your flow. Please carefully review your flow and correct the reference definition for the output in question.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_executor_create_failure_type_and_message[source_file_missing-flow.dag.jinja.yaml-ResolveToolError-InvalidSource-Tool load failed in 'summarize_text_content': (InvalidSource) Node source path 'summarize_text_content__variant_1.jinja2' is invalid on node 'summarize_text_content'.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_execution_errors[flow_output_unserializable-line_input0-FlowOutputUnserializable-The output 'content' for flow is incorrect. The output value is not JSON serializable. JSON dump failed: (TypeError) Object of type UnserializableClass is not JSON serializable. Please verify your flow output and make sure the value serializable.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_input_type_invalid[python_tool_with_simple_image_without_default-line_input2-InputNotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_input_type_invalid[simple_flow_with_python_tool-line_input0-InputNotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_input_type_invalid[simple_flow_with_python_tool-line_input1-InputTypeError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_with_duplicated_inputs[llm_tool_with_duplicated_inputs-Invalid inputs {'prompt'} in prompt template of node llm_tool_with_duplicated_inputs. These inputs are duplicated with the parameters of AzureOpenAI.completion.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_flow_run_with_duplicated_inputs[prompt_tool_with_duplicated_inputs-Invalid inputs {'template'} in prompt template of node prompt_tool_with_duplicated_inputs. These inputs are duplicated with the reserved parameters of prompt tool.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[invalid_connection-ResolveToolError-GetConnectionError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[tool_type_missing-ResolveToolError-NotImplementedError]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[wrong_api-ResolveToolError-APINotFound]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_dag[wrong_module-FailedToImportModule-None]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_invalid_flow_run_inputs_should_not_saved_to_run_info
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_node_topology_in_order[web_classification_no_variants-web_classification_no_variants_unordered]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root0-simple_flow_with_python_tool-divide_num-line_input0-InputNotFound-The input for node is incorrect. Node input 'num' is not found in input data for node 'divide_num'. Please verify the inputs data for the node.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root1-simple_flow_with_python_tool-divide_num-line_input1-InputTypeError-The input for node is incorrect. Value for input 'num' of node 'divide_num' is not type 'int'. Please review and rectify the input data.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root2-flow_input_reference_invalid-divide_num-line_input2-InputNotFound-The input for node is incorrect. Node input 'num_1' is not found from flow inputs of node 'divide_num'. Please review the node definition in your flow.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root3-simple_flow_with_python_tool-bad_node_name-line_input3-SingleNodeValidationError-Validation failed when attempting to execute the node. Node 'bad_node_name' is not found in flow 'flow.dag.yaml'. Please change node name or correct the flow file.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_single_node_input_type_invalid[path_root4-node_missing_type_or_source-divide_num-line_input4-SingleNodeValidationError-Validation failed when attempting to execute the node. Properties 'source' or 'type' are not specified for Node 'divide_num' in flow 'flow.dag.yaml'. Please make sure these properties are in place and try again.]
tests.executor.e2etests.test_executor_validation.TestValidation ‑ test_valid_flow_run_inpust_should_saved_to_run_info
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[chat_flow_with_image-input_dirs3-inputs_mapping3-answer-2-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[eval_flow_with_composite_image-input_dirs5-inputs_mapping5-output-2-True]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[eval_flow_with_simple_image-input_dirs4-inputs_mapping4-output-2-True]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[python_tool_with_composite_image-input_dirs2-inputs_mapping2-output-2-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[python_tool_with_simple_image-input_dirs0-inputs_mapping0-output-4-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_engine_with_image[python_tool_with_simple_image_with_default-input_dirs1-inputs_mapping1-output-4-False]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_batch_run_then_eval_with_image
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_composite_image-inputs6]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_composite_image-inputs7]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_composite_image-inputs8]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs0]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs1]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs2]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs3]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs4]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_aggregation_with_image[eval_flow_with_simple_image-inputs5]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[chat_flow_with_image-inputs9]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_composite_image-inputs6]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_composite_image-inputs7]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_composite_image-inputs8]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_image_nested_api_calls-inputs10]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs0]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs1]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs2]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs3]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs4]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_line_with_image[python_tool_with_simple_image-inputs5]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_composite_image-python_node-flow_inputs2-None]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_composite_image-python_node_2-flow_inputs3-None]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_composite_image-python_node_3-flow_inputs4-dependency_nodes_outputs4]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_simple_image-python_node-flow_inputs0-None]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image[python_tool_with_simple_image-python_node_2-flow_inputs1-dependency_nodes_outputs1]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[None-False-.]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[None-True-test_storage]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-False-test_path]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-True-test_storage]
tests.executor.e2etests.test_image.TestExecutorWithImage ‑ test_executor_exec_node_with_invalid_default_value[python_tool_with_invalid_default_value-python_node_2-flow_inputs0-dependency_nodes_outputs0]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_batch_engine_with_image[chat_flow_with_openai_vision_image-input_dirs1-inputs_mapping1-answer-2]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_batch_engine_with_image[python_tool_with_openai_vision_image-input_dirs0-inputs_mapping0-output-4]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[chat_flow_with_openai_vision_image-inputs6]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs0]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs1]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs2]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs3]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs4]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_line_with_image[python_tool_with_openai_vision_image-inputs5]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image[python_tool_with_openai_vision_image-python_node-flow_inputs0-None]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image[python_tool_with_openai_vision_image-python_node_2-flow_inputs1-dependency_nodes_outputs1]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[None-False-.]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[None-True-test_storage]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-False-test_path]
tests.executor.e2etests.test_image.TestExecutorWithOpenaiVisionImage ‑ test_executor_exec_node_with_image_storage_and_path[test_path-True-test_storage]
tests.executor.e2etests.test_langchain.TestLangchain ‑ test_batch_with_langchain[flow_with_langchain_traces-inputs_mapping0]
tests.executor.e2etests.test_langchain.TestLangchain ‑ test_batch_with_langchain[openai_chat_api_flow-inputs_mapping1]
tests.executor.e2etests.test_langchain.TestLangchain ‑ test_batch_with_langchain[openai_completion_api_flow-inputs_mapping2]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_activate_config_log
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_async_log_in_worker_thread
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_batch_run_flow_logs[flow_root_dir0-print_input_flow-8]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_batch_run_flow_logs[flow_root_dir1-print_input_flex-2]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_executor_logs[print_input_flow]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_log_progress[simple_flow_with_ten_inputs-inputs_mapping0]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_long_run_log
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_node_logs[print_input_flow]
tests.executor.e2etests.test_logs.TestExecutorLogs ‑ test_node_logs_in_executor_logs[print_input_flow]
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_custom_llm_tool_with_duplicated_inputs
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_executor_package_tool_with_conn
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_executor_package_with_prompt_tool
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_package_tool_execution[wrong_package_in_package_tools-ResolveToolError-PackageToolNotFoundError-Tool load failed in 'search_by_text': (PackageToolNotFoundError) Package tool 'promptflow.tools.serpapi11.SerpAPI.search' is not found in the current environment. All available package tools are: ['promptflow.tools.azure_content_safety.AzureContentSafety.analyze_text', 'promptflow.tools.azure_detect.AzureDetect.get_language'].]
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_package_tool_execution[wrong_tool_in_package_tools-ResolveToolError-PackageToolNotFoundError-Tool load failed in 'search_by_text': (PackageToolNotFoundError) Package tool 'promptflow.tools.serpapi.SerpAPI.search_11' is not found in the current environment. All available package tools are: ['promptflow.tools.azure_content_safety.AzureContentSafety.analyze_text', 'promptflow.tools.azure_detect.AzureDetect.get_language'].]
tests.executor.e2etests.test_package_tool.TestPackageTool ‑ test_package_tool_load_error[tool_with_init_error-Tool load failed in 'tool_with_init_error': (ToolLoadError) Failed to load package tool 'Tool with init error': (Exception) Tool load error.]
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_dynamic_list
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_enabled_by_value
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_generated_by
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_dynamic_list
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_enabled_by
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_icon
tests.executor.e2etests.test_script_tool_generator.TestScriptToolGenerator ‑ test_generate_script_tool_meta_with_invalid_schema
tests.executor.e2etests.test_telemetry.TestExecutorTelemetry ‑ test_executor_openai_telemetry
tests.executor.e2etests.test_telemetry.TestExecutorTelemetry ‑ test_executor_openai_telemetry_with_batch_run
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_generator_tools
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[llm_tool-inputs4]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[llm_tool-inputs5]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_chat_api_flow-inputs0]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_chat_api_flow-inputs1]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_completion_api_flow-inputs2]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_executor_openai_api_flow[openai_completion_api_flow-inputs3]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_flow_with_trace[flow_with_trace]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_flow_with_trace[flow_with_trace_async]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_trace_behavior_with_generator_node[False]
tests.executor.e2etests.test_traces.TestExecutorTraces ‑ test_trace_behavior_with_generator_node[True]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_flow_with_nested_tool
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_flow_with_traced_function
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace[flow_with_trace-inputs0-5]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace[flow_with_trace_async-inputs1-5]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_batch
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_embedding[openai_embedding_api_flow-inputs0-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_embedding[openai_embedding_api_flow_with_token-inputs1-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[flow_with_async_llm_tasks-inputs5-False-6]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[llm_tool-inputs4-False-4]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_chat_api_flow-inputs0-False-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_chat_api_flow-inputs1-True-5]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_completion_api_flow-inputs2-False-3]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_llm[openai_completion_api_flow-inputs3-True-5]
tests.executor.e2etests.test_traces.TestOTelTracer ‑ test_otel_trace_with_prompt[llm_tool-inputs0-joke.jinja2]