update openai model version #1217

Merged 2 commits on Apr 22, 2024
Conversation

geekan (Owner) commented Apr 22, 2024

User description

Update the OpenAI model versions. Some of the changed files are config files, so the CI/CD scripts need to be checked as well.


Type

enhancement


Description

  • Updated model versions across various Python scripts and configuration files to gpt-4-turbo and similar.
  • Adjusted token pricing details and model configurations in utility scripts (see the sketch after this list).
  • Updated model version references in documentation and example configurations.
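
For reference, the pricing entries touched in metagpt/utils/token_counter.py take roughly the following shape. This is a minimal sketch: the gpt-4-turbo numbers match the ones quoted in the suggestions below, but the mapping name and the cost helper are illustrative assumptions, not code from the PR.

```python
# Sketch of a token-pricing mapping like the one in metagpt/utils/token_counter.py.
# Prices are USD per 1K tokens; the values below are the ones quoted in this PR.
TOKEN_COSTS = {
    "gpt-4-turbo": {"prompt": 0.01, "completion": 0.03},
    "gpt-4-0125-preview": {"prompt": 0.01, "completion": 0.03},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one call, given token counts (hypothetical helper)."""
    pricing = TOKEN_COSTS[model]
    return (prompt_tokens * pricing["prompt"]
            + completion_tokens * pricing["completion"]) / 1000
```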

Changes walkthrough

Relevant files:

Enhancement (8 files)
  • examples/debate_simple.py (+2/-2): updated model versions for the configurations in the debate simulation example.
  • metagpt/software_company.py (+1/-1): updated the default model configuration.
  • metagpt/utils/token_counter.py (+4/-4): adjusted token pricing and model configurations.
  • tests/metagpt/test_context_mixin.py (+4/-4): updated model configurations in the context-mixin tests.
  • tests/metagpt/tools/test_ut_writer.py (+1/-1): updated the model version in the unit-test writer test.
  • tests/metagpt/utils/test_cost_manager.py (+2/-2): updated the model version in the cost-manager tests.
  • config/config2.example.yaml (+3/-8): simplified the model configuration in the example config file.
  • tests/config2.yaml (+1/-1): updated the model version in the test configuration.

Documentation (2 files)
  • README.md (+1/-1): updated the model version in the README configuration example.
  • docs/FAQ-EN.md (+2/-2): updated model references in the FAQ document.

    PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    codiumai-pr-agent-pro bot added the enhancement (New feature or request) label on Apr 22, 2024

    PR Description updated to latest commit (dfbdeab)


    PR Review

    ⏱️ Estimated effort to review [1-5]

    2, because the changes are straightforward model version updates across multiple files, which are easy to verify but require careful attention to ensure consistency and correctness across all references.

    🧪 Relevant tests

    Yes

    🔍 Possible issues

    Possible Redundancy: In metagpt/software_company.py, the comment line still mentions two identical model names ("gpt-4-turbo"), which might be a leftover from previous versions and could be cleaned up for clarity.

    🔒 Security concerns

    No


    ✨ Review tool usage guide:

    Overview:
    The review tool scans the PR code changes and generates a PR review that includes several types of feedback, such as possible PR issues, security threats, and relevant tests in the PR. More feedback can be added by configuring the tool.

    The tool can be triggered automatically every time a new PR is opened, or it can be invoked manually by commenting on any PR.

    • When commenting, to edit configurations related to the review tool (pr_reviewer section), use the following template:
    /review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
    
    [pr_reviewer]
    some_config1=...
    some_config2=...
    

    See the review usage page for a comprehensive guide on using this tool.


    PR Code Suggestions

    Suggestions by category:
    Enhancement
    Correct or remove the redundant comment in the model configuration.

    The model configuration line contains a redundant comment which might be confusing. It's
    better to remove or correct the comment to reflect the actual options available.

    metagpt/software_company.py [128]

    -model: "gpt-4-turbo"  # or gpt-4-turbo / gpt-4-turbo
    +model: "gpt-4-turbo"  # specify the model version
     
    Bug
    Remove duplicate dictionary keys to prevent potential bugs.

    The dictionary key 'gpt-4-turbo' is duplicated in the token pricing dictionary, which
    could lead to potential bugs or unexpected behavior. Remove the duplicate entry.

    metagpt/utils/token_counter.py [37]

    -"gpt-4-turbo": {"prompt": 0.01, "completion": 0.03},
    +"gpt-4-turbo": {"prompt": 0.01, "completion": 0.03},  # Ensure no duplicate keys
     
    Maintainability
    Document the reason for re-adding previously removed dictionary keys.

    The dictionary key 'gpt-4-0125-preview' is added back, but it was previously removed. If
    this was intentional, ensure it's documented or communicated to avoid confusion.

    metagpt/utils/token_counter.py [36]

    -"gpt-4-0125-preview": {"prompt": 0.01, "completion": 0.03},
    +"gpt-4-0125-preview": {"prompt": 0.01, "completion": 0.03},  # Re-added for specific reasons (document or comment why)
     
    Possible issue
    Clarify why different models have identical token pricing.

    The token pricing for 'gpt-4-turbo' is set identically to 'gpt-4-turbo-preview'. If this
    is not an error, consider adding a comment to clarify why both models have the same
    pricing.

    metagpt/utils/token_counter.py [37]

    -"gpt-4-turbo": {"prompt": 0.01, "completion": 0.03},
    +"gpt-4-turbo": {"prompt": 0.01, "completion": 0.03},  # Pricing identical to gpt-4-turbo-preview, confirm if correct
     
    Best practice
    Ensure consistent update of token pricing values across the codebase.

    The token pricing values for 'gpt-4-turbo' and 'gpt-4-0125-preview' are added without
    removing the old values, leading to potential redundancy or confusion. Ensure that the
    values are updated correctly across all relevant sections.

    metagpt/utils/token_counter.py [37]

    -"gpt-4-turbo": {"prompt": 0.01, "completion": 0.03},
    +"gpt-4-turbo": {"prompt": 0.01, "completion": 0.03},  # Updated to reflect new pricing model
     

    ✨ Improve tool usage guide:

    Overview:
    The improve tool scans the PR code changes and automatically generates suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or it can be invoked manually by commenting on a PR.

    • When commenting, to edit configurations related to the improve tool (pr_code_suggestions section), use the following template:
    /improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
    
    [pr_code_suggestions]
    some_config1=...
    some_config2=...
    

    See the improve usage page for a comprehensive guide on using this tool.


    codiumai-pr-agent-pro bot commented Apr 22, 2024

    CI Failure Feedback

    (Checks updated until commit 9a7c195)

    Action: build (3.9)

    Failed stage: Show failed tests and overall summary [❌]

    Failure summary:

    The action failed due to multiple issues:

  • A ValueError was raised because API calls are not allowed in the current test setting, indicating that tests were not properly mocked or that expected API responses were missing from tests/data/rsp_cache.json (see the sketch after this list).
  • Several tests related to API interactions failed due to AuthenticationError, indicating incorrect or
    missing API keys.
  • A TypeError occurred where static-method objects were called directly as functions, suggesting a misunderstanding or misuse of static methods in the test cases.
  • AssertionError and ValidationError were raised in various tests, indicating issues with assertions
    in the test logic or validation errors in data models.
  • A subprocess.CalledProcessError was triggered by a non-zero exit status from a subprocess command,
    indicating an external command failure.
  • AttributeError was noted where an object was expected to have a specific attribute that was not
    present, pointing to incorrect assumptions about the object's structure.
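
  The ValueError above comes from MetaGPT's test harness, in which an LLM mock answers only from a prompt-keyed response cache. A simplified sketch of that pattern, modeled on the tests/mock/mock_llm.py code quoted in the logs below (the constructor and call signature here are assumptions):

  ```python
  # Simplified sketch of the cache-backed LLM mock the CI error describes.
  class MockLLM:
      def __init__(self, rsp_cache: dict, allow_open_api_call: bool = False):
          self.rsp_cache = rsp_cache  # loaded from tests/data/rsp_cache.json
          self.allow_open_api_call = allow_open_api_call

      async def aask(self, msg_key: str) -> str:
          if msg_key not in self.rsp_cache:
              if not self.allow_open_api_call:
                  # The ValueError seen throughout the logs below.
                  raise ValueError(
                      "In current test setting, api call is not allowed, you should "
                      "properly mock your tests, or add expected api response in "
                      "tests/data/rsp_cache.json."
                  )
              # Otherwise a real API call would be made here and its response cached.
          return self.rsp_cache[msg_key]
  ```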

  Relevant error logs:
    1:  ##[group]Operating System
    2:  Ubuntu
    ...
    
    2675:  collected 647 items
    2676:  tests/metagpt/actions/di/test_ask_review.py .                            [  0%]
    2677:  tests/metagpt/actions/di/test_execute_nb_code.py ..........              [  1%]
    2678:  tests/metagpt/actions/di/test_write_analysis_code.py ..F                 [  2%]
    2679:  tests/metagpt/actions/di/test_write_plan.py ..                           [  2%]
    2680:  tests/metagpt/actions/test_action.py .....                               [  3%]
    2681:  tests/metagpt/actions/test_action_node.py .............                  [  5%]
    2682:  tests/metagpt/actions/test_action_outcls_registry.py .                   [  5%]
    2683:  tests/metagpt/actions/test_debug_error.py .                              [  5%]
    ...
    
    2878:  tests/metagpt/utils/test_s3.py .                                         [ 93%]
    2879:  tests/metagpt/utils/test_save_code.py ...                                [ 93%]
    2880:  tests/metagpt/utils/test_serialize.py ..                                 [ 93%]
    2881:  tests/metagpt/utils/test_session.py .                                    [ 94%]
    2882:  tests/metagpt/utils/test_text.py .....................                   [ 97%]
    2883:  tests/metagpt/utils/test_token_counter.py ........                       [ 98%]
    2884:  tests/metagpt/utils/test_tree.py ...F....                                [ 99%]
    2885:  tests/metagpt/utils/test_visual_graph_repo.py .                          [100%]
    2886:  =================================== FAILURES ===================================
    ...
    
    2896:  ### execution result
    2897:  ## Current Task
    2898:  import pandas and load the dataset from 'test.csv'.
    2899:  ## Task Guidance
    2900:  Write complete code for 'Current Task'. And avoid duplicating code from 'Finished Tasks', such as repeated import of packages, reading data, etc.
    2901:  Specifically,
    2902:  """
    2903:  wrong_code = """import pandas as pd\ndata = pd.read_excel('test.csv')\ndata"""  # use read_excel to read a csv
    2904:  error = """
    2905:  Traceback (most recent call last):
    2906:  File "<stdin>", line 2, in <module>
    2907:  File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 478, in read_excel
    2908:  io = ExcelFile(io, storage_options=storage_options, engine=engine)
    2909:  File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 1500, in __init__
    2910:  raise ValueError(
    2911:  ValueError: Excel file format cannot be determined, you must specify an engine manually.
    2912:  """
    2913:  working_memory = [
    2914:  Message(content=wrong_code, role="assistant"),
    2915:  Message(content=error, role="user"),
    ...
    
    2927:  metagpt/actions/di/write_analysis_code.py:32: in _debug_with_reflection
    2928:  rsp = await self._aask(reflection_prompt, system_msgs=[REFLECTION_SYSTEM_MSG])
    2929:  metagpt/actions/action.py:93: in _aask
    2930:  return await self.llm.aask(prompt, system_msgs)
    2931:  tests/mock/mock_llm.py:98: in aask
    2932:  rsp = await self._mock_rsp(msg_key, self.original_aask, msg, system_msgs, format_msgs, images, timeout, stream)
    2933:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    2934:  self = <tests.mock.mock_llm.MockLLM object at 0x7fa5345e90a0>
    2935:  msg_key = 'You are an AI Python assistant. You will be given your previous implementation code of a task, runtime error results,...str = "Reflection on previous implementation",\n    "improved_impl": str = "Refined code after reflection.",\n}\n```\n'
    2936:  ask_func = <bound method MockLLM.original_aask of <tests.mock.mock_llm.MockLLM object at 0x7fa5345e90a0>>
    2937:  args = ('\n[example]\nHere is an example of debugging with reflection.\n\n[previous impl]:\nassistant:\n```python\ndef add(a:...results, and a hint to change the implementation appropriately. Write your full implementation.'], None, None, 3, True)
    2938:  kwargs = {}
    2939:  async def _mock_rsp(self, msg_key, ask_func, *args, **kwargs):
    2940:  if msg_key not in self.rsp_cache:
    2941:  if not self.allow_open_api_call:
    2942:  >               raise ValueError(
    2943:  "In current test setting, api call is not allowed, you should properly mock your tests, "
    2944:  "or add expected api response in tests/data/rsp_cache.json. "
    2945:  f"The prompt you want for api call: {msg_key}"
    2946:  )
    2947:  E               ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are an AI Python assistant. You will be given your previous implementation code of a task, runtime error results, and a hint to change the implementation appropriately. Write your full implementation.#SYSTEM_MSG_END#
    ...
    
    2954:  E               def add(a: int, b: int) -> int:
    2955:  E                  """
    2956:  E                  Given integers a and b, return the total value of a and b.
    2957:  E                  """
    2958:  E                  return a - b
    2959:  E               ```
    2960:  E               
    2961:  E               user:
    2962:  E               Tests failed:
    2963:  E               assert add(1, 2) == 3 # output: -1
    2964:  E               assert add(1, 3) == 4 # output: -2
    2965:  E               
    2966:  E               [reflection on previous impl]:
    2967:  E               The implementation failed the test cases where the input integers are 1 and 2. The issue arises because the code does not add the two integers together, but instead subtracts the second integer from the first. To fix this issue, we should change the operator from `-` to `+` in the return statement. This will ensure that the function returns the correct output for the given input.
    ...
    
    2971:  E                  """
    2972:  E                  Given integers a and b, return the total value of a and b.
    2973:  E                  """
    2974:  E                  return a + b
    2975:  E               
    2976:  E               [/example]
    2977:  E               
    2978:  E               [context]
    2979:  E               [{'role': 'user', 'content': "\n# User Requirement\nread a dataset test.csv and print its head\n\n# Plan Status\n\n    ## Finished Tasks\n    ### code\n    ```python\n    ```\n\n    ### execution result\n\n    ## Current Task\n    import pandas and load the dataset from 'test.csv'.\n\n    ## Task Guidance\n    Write complete code for 'Current Task'. And avoid duplicating code from 'Finished Tasks', such as repeated import of packages, reading data, etc.\n    Specifically, \n    \n\n# Tool Info\n\n\n# Constraints\n- Take on Current Task if it is in Plan Status, otherwise, tackle User Requirement directly.\n- Ensure the output new code is executable in the same Jupyter notebook as the previous executed code.\n- Always prioritize using pre-defined tools for the same functionality.\n\n# Output\nWhile some concise thoughts are helpful, code is absolutely required. Always output one and only one code block in your response. Output code in the following format:\n```python\nyour code\n```\n"}, {'role': 'assistant', 'content': "import pandas as pd\ndata = pd.read_excel('test.csv')\ndata"}, {'role': 'user', 'content': '\n    Traceback (most recent call last):\n        File "<stdin>", line 2, in <module>\n        File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 478, in read_excel\n            io = ExcelFile(io, storage_options=storage_options, engine=engine)\n        File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 1500, in __init__\n            raise ValueError(\n        ValueError: Excel file format cannot be determined, you must specify an engine manually.\n    '}]
    ...
    
    2982:  E               [assistant: import pandas as pd
    2983:  E               data = pd.read_excel('test.csv')
    2984:  E               data, user: 
    2985:  E                   Traceback (most recent call last):
    2986:  E                       File "<stdin>", line 2, in <module>
    2987:  E                       File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 478, in read_excel
    2988:  E                           io = ExcelFile(io, storage_options=storage_options, engine=engine)
    2989:  E                       File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 1500, in __init__
    2990:  E                           raise ValueError(
    2991:  E                       ValueError: Excel file format cannot be determined, you must specify an engine manually.
    2992:  E                   ]
    2993:  E               
    2994:  E               [instruction]
    2995:  E               Analyze your previous code and error in [context] step by step, provide me with improved method and code. Remember to follow [context] requirement. Don't forget to write code for steps behind the error step.
    2996:  E               Output a json following the format:
    2997:  E               ```json
    2998:  E               {
    2999:  E                   "reflection": str = "Reflection on previous implementation",
    3000:  E                   "improved_impl": str = "Refined code after reflection.",
    3001:  E               }
    3002:  E               ```
    3003:  tests/mock/mock_llm.py:114: ValueError
    3004:  _____________________________ test_write_code_deps _____________________________
    3005:  self = <AsyncRetrying object at 0x7fa5686e01f0 (stop=<tenacity.stop.stop_after_attempt object at 0x7fa5686e04f0>, wait=<tenac...0x7fa57a9918e0>, before=<function before_nothing at 0x7fa57a9941f0>, after=<function after_nothing at 0x7fa57a994550>)>
    3006:  fn = <function WriteCode.write_code at 0x7fa568617dc0>
    3007:  args = (WriteCode, "\nNOTICE\nRole: You are a professional engineer; the main goal is to write google-style, elegant, modular...ing a external variable/module, make sure you import it first.\n7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.\n\n")
    3008:  kwargs = {}
    3009:  retry_state = <RetryCallState 140346794727504: attempt #6; slept for 19.7; last result: failed (ValueError In current test setting, ... using a external variable/module, make sure you import it first.
    ...
    
    3032:  self = <tests.mock.mock_llm.MockLLM object at 0x7fa535dc6700>
    3033:  msg_key = "\nNOTICE\nRole: You are a professional engineer; the main goal is to write google-style, elegant, modular, easy to re...sing a external variable/module, make sure you import it first.\n7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.\n\n"
    3034:  ask_func = <bound method MockLLM.original_aask of <tests.mock.mock_llm.MockLLM object at 0x7fa535dc6700>>
    3035:  args = ("\nNOTICE\nRole: You are a professional engineer; the main goal is to write google-style, elegant, modular, easy to r...ule, make sure you import it first.\n7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.\n\n", None, None, None, 3, True)
    3036:  kwargs = {}
    3037:  async def _mock_rsp(self, msg_key, ask_func, *args, **kwargs):
    3038:  if msg_key not in self.rsp_cache:
    3039:  if not self.allow_open_api_call:
    3040:  >               raise ValueError(
    3041:  "In current test setting, api call is not allowed, you should properly mock your tests, "
    3042:  "or add expected api response in tests/data/rsp_cache.json. "
    3043:  f"The prompt you want for api call: {msg_key}"
    3044:  )
    3045:  E               ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    3061:  E               ```if __name__ == "__main__":
    3062:  E               main()```
    3063:  E               ```
    3064:  E               
    3065:  E               ## Debug logs
    3066:  E               ```text
    3067:  E               E.......F
    3068:  E               ======================================================================
    3069:  E               ERROR: test_add_new_tile (__main__.TestGame)
    3070:  E               ----------------------------------------------------------------------
    3071:  E               Traceback (most recent call last):
    3072:  E                 File "/Users/xx/tests/test_game.py", line 104, in test_add_new_tile
    3073:  E                   self.assertIn(self.game.grid[empty_cells[0][0]][empty_cells[0][1]], [2, 4])
    3074:  E               IndexError: list index out of range
    3075:  E               
    3076:  E               ======================================================================
    3077:  E               FAIL: test_reset_game (__main__.TestGame)
    3078:  E               ----------------------------------------------------------------------
    3079:  E               Traceback (most recent call last):
    3080:  E                 File "/Users/xx/tests/test_game.py", line 13, in test_reset_game
    3081:  E                   self.assertEqual(self.game.grid, [[0 for _ in range(4)] for _ in range(4)])
    3082:  E               AssertionError: Lists differ: [[0, 0, 0, 0], [0, 2, 0, 0], [0, 0, 0, 2], [0, 0, 0, 0]] != [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
    ...
    
    3090:  E               
    3091:  E               + [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
    3092:  E               ?                      +++               ^
    3093:  E               
    3094:  E               
    3095:  E               ----------------------------------------------------------------------
    3096:  E               Ran 9 tests in 0.002s
    3097:  E               
    3098:  E               FAILED (failures=1, errors=1)
    ...
    
    3118:  E               ## Code: game.py. Write code with triple quoto, based on the following attentions and context.
    3119:  E               1. Only One file: do your best to implement THIS ONLY ONE FILE.
    3120:  E               2. COMPLETE CODE: Your code will be part of the entire project, so please implement complete, reliable, reusable code snippets.
    3121:  E               3. Set default value: If there is any setting, ALWAYS SET A DEFAULT VALUE, ALWAYS USE STRONG TYPE AND EXPLICIT VARIABLE. AVOID circular import.
    3122:  E               4. Follow design: YOU MUST FOLLOW "Data structures and interfaces". DONT CHANGE ANY DESIGN. Do not use public member functions that do not exist in your design.
    3123:  E               5. CAREFULLY CHECK THAT YOU DONT MISS ANY NECESSARY CLASS/FUNCTION IN THIS FILE.
    3124:  E               6. Before using a external variable/module, make sure you import it first.
    3125:  E               7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.
    3126:  tests/mock/mock_llm.py:114: ValueError
    ...
    
    3162:  metagpt/actions/write_code.py:142: in run
    3163:  code = await self.write_code(prompt)
    3164:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py:88: in async_wrapped
    3165:  return await fn(*args, **kwargs)
    3166:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py:47: in __call__
    3167:  do = self.iter(retry_state=retry_state)
    3168:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    3169:  self = <AsyncRetrying object at 0x7fa5686e01f0 (stop=<tenacity.stop.stop_after_attempt object at 0x7fa5686e04f0>, wait=<tenac...0x7fa57a9918e0>, before=<function before_nothing at 0x7fa57a9941f0>, after=<function after_nothing at 0x7fa57a994550>)>
    3170:  retry_state = <RetryCallState 140346794727504: attempt #6; slept for 19.7; last result: failed (ValueError In current test setting, ... using a external variable/module, make sure you import it first.
    3171:  7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.
    3172:  )>
    3173:  def iter(self, retry_state: "RetryCallState") -> t.Union[DoAttempt, DoSleep, t.Any]:  # noqa
    3174:  fut = retry_state.outcome
    3175:  if fut is None:
    3176:  if self.before is not None:
    3177:  self.before(retry_state)
    3178:  return DoAttempt()
    3179:  is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
    3180:  if not (is_explicit_retry or self.retry(retry_state)):
    3181:  return fut.result()
    3182:  if self.after is not None:
    3183:  self.after(retry_state)
    3184:  self.statistics["delay_since_first_attempt"] = retry_state.seconds_since_start
    3185:  if self.stop(retry_state):
    3186:  if self.retry_error_callback:
    3187:  return self.retry_error_callback(retry_state)
    3188:  retry_exc = self.retry_error_cls(fut)
    3189:  if self.reraise:
    3190:  raise retry_exc.reraise()
    3191:  >           raise retry_exc from fut.exception()
    3192:  E           tenacity.RetryError: RetryError[<Future at 0x7fa510874310 state=finished raised ValueError>]
    3193:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/__init__.py:326: RetryError
    ...
    
    3220:  task_doc = Document(filename="1.json", content=json.dumps(REFINED_TASK_JSON))
    3221:  context.repo = context.repo.with_src_path(context.src_workspace)
    3222:  # Ready to write gui.py
    3223:  codes = await WriteCode.get_codes(task_doc=task_doc, exclude="gui.py", project_repo=context.repo)
    3224:  codes_inc = await WriteCode.get_codes(task_doc=task_doc, exclude="gui.py", project_repo=context.repo, use_inc=True)
    3225:  logger.info(codes)
    3226:  logger.info(codes_inc)
    3227:  >       assert codes
    3228:  E       AssertionError: assert ''
    3229:  tests/metagpt/actions/test_write_code.py:158: AssertionError
    ...
    
    3245:  ----- ui.py
    3246:  ```# ui.py
    3247:  new code ...```
    3248:  ______________________________ test_write_prd_inc ______________________________
    3249:  self = <AsyncRetrying object at 0x7fa568ae9100 (stop=<tenacity.stop.stop_after_attempt object at 0x7fa568ae9b80>, wait=<tenac...ore=<function before_nothing at 0x7fa57a9941f0>, after=<function general_after_log.<locals>.log_it at 0x7fa568ac4a60>)>
    3250:  fn = <function ActionNode._aask_v1 at 0x7fa568ac4b80>
    3251:  args = (RefinedPRD, <class 'str'>, , , , {'Language': Language, <class 'str'>, Provide the language used in the project, typi...', description="Provide the language used in the project, typically matching the user's requirement language.")), ...})
    3252:  kwargs = {'images': None, 'schema': 'json', 'timeout': 0}
    3253:  retry_state = <RetryCallState 140346920562496: attempt #6; slept for 20.47; last result: failed (ValueError In current test setting,... nothing else.
    ...
    
    3275:  self = <tests.mock.mock_llm.MockLLM object at 0x7fa510b35b80>
    3276:  msg_key = '\n## context\n\n### Legacy Content\n\n## Language\n\nen_us\n\n## Programming Language\n\nPython\n\n## Original Requir...thing else.\n\n## action\nFollow instructions of nodes, generate output and make sure it follows the format example.\n'
    3277:  ask_func = <bound method MockLLM.original_aask of <tests.mock.mock_llm.MockLLM object at 0x7fa510b35b80>>
    3278:  args = ('\n## context\n\n### Legacy Content\n\n## Language\n\nen_us\n\n## Programming Language\n\nPython\n\n## Original Requi...llow instructions of nodes, generate output and make sure it follows the format example.\n', None, None, None, 0, True)
    3279:  kwargs = {}
    3280:  async def _mock_rsp(self, msg_key, ask_func, *args, **kwargs):
    3281:  if msg_key not in self.rsp_cache:
    3282:  if not self.allow_open_api_call:
    3283:  >               raise ValueError(
    3284:  "In current test setting, api call is not allowed, you should properly mock your tests, "
    3285:  "or add expected api response in tests/data/rsp_cache.json. "
    3286:  f"The prompt you want for api call: {msg_key}"
    3287:  )
    3288:  E               ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    3430:  E               
    3431:  E               
    3432:  E               ## constraint
    3433:  E               Language: Please use the same language as Human INPUT.
    3434:  E               Format: output wrapped inside [CONTENT][/CONTENT] like format example, nothing else.
    3435:  E               
    3436:  E               ## action
    3437:  E               Follow instructions of nodes, generate output and make sure it follows the format example.
    3438:  tests/mock/mock_llm.py:114: ValueError
    ...
    
    3462:  metagpt/actions/action_node.py:456: in simple_fill
    3463:  content, scontent = await self._aask_v1(
    3464:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py:88: in async_wrapped
    3465:  return await fn(*args, **kwargs)
    3466:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py:47: in __call__
    3467:  do = self.iter(retry_state=retry_state)
    3468:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    3469:  self = <AsyncRetrying object at 0x7fa568ae9100 (stop=<tenacity.stop.stop_after_attempt object at 0x7fa568ae9b80>, wait=<tenac...ore=<function before_nothing at 0x7fa57a9941f0>, after=<function general_after_log.<locals>.log_it at 0x7fa568ac4a60>)>
    3470:  retry_state = <RetryCallState 140346920562496: attempt #6; slept for 20.47; last result: failed (ValueError In current test setting,... nothing else.
    ...
    
    3472:  Follow instructions of nodes, generate output and make sure it follows the format example.
    3473:  )>
    3474:  def iter(self, retry_state: "RetryCallState") -> t.Union[DoAttempt, DoSleep, t.Any]:  # noqa
    3475:  fut = retry_state.outcome
    3476:  if fut is None:
    3477:  if self.before is not None:
    3478:  self.before(retry_state)
    3479:  return DoAttempt()
    3480:  is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
    3481:  if not (is_explicit_retry or self.retry(retry_state)):
    3482:  return fut.result()
    3483:  if self.after is not None:
    3484:  self.after(retry_state)
    3485:  self.statistics["delay_since_first_attempt"] = retry_state.seconds_since_start
    3486:  if self.stop(retry_state):
    3487:  if self.retry_error_callback:
    3488:  return self.retry_error_callback(retry_state)
    3489:  retry_exc = self.retry_error_cls(fut)
    3490:  if self.reraise:
    3491:  raise retry_exc.reraise()
    3492:  >           raise retry_exc from fut.exception()
    3493:  E           tenacity.RetryError: RetryError[<Future at 0x7fa510586550 state=finished raised ValueError>]
    3494:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/__init__.py:326: RetryError
    3495:  ----------------------------- Captured stderr call -----------------------------
    3496:  2024-04-22 11:33:57.901 | INFO     | metagpt.utils.file_repository:save:57 - save to: /home/runner/work/MetaGPT/MetaGPT/workspace/unittest/5b2aa24f498c4df880c66aad21604381/docs/prd/1.txt
    3497:  2024-04-22 11:33:57.902 | INFO     | metagpt.utils.file_repository:save:57 - save to: /home/runner/work/MetaGPT/MetaGPT/workspace/unittest/5b2aa24f498c4df880c66aad21604381/docs/requirement.txt
    3498:  2024-04-22 11:33:57.938 | WARNING  | tests.mock.mock_llm:_mock_rsp:122 - Use response cache
    3499:  2024-04-22 11:33:57.942 | INFO     | metagpt.actions.write_prd:run:83 - Requirement update detected: 
    3500:  Adding graphical interface functionality to enhance the user experience in the number-guessing game. The existing number-guessing game currently relies on command-line input for numbers. The goal is to introduce a graphical interface to improve the game's usability and visual appeal
    3501:  2024-04-22 11:33:57.943 | ERROR    | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 0.000(s), this was the 1st time calling it. exp: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    3611:  - Refined Requirement Pool: typing.List[typing.List[str]]  # List down the top 5 to 7 requirements with their priority (P0, P1, P2). Cover both legacy content and incremental content. Retain content unrelated to incremental development
    3612:  - UI Design draft: <class 'str'>  # Provide a simple description of UI elements, functions, style, and layout.
    3613:  - Anything UNCLEAR: <class 'str'>  # Mention any aspects of the project that are unclear and try to clarify them.
    3614:  ## constraint
    3615:  Language: Please use the same language as Human INPUT.
    3616:  Format: output wrapped inside [CONTENT][/CONTENT] like format example, nothing else.
    3617:  ## action
    3618:  Follow instructions of nodes, generate output and make sure it follows the format example.
    3619:  2024-04-22 11:33:58.919 | ERROR    | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 0.976(s), this was the 2nd time calling it. exp: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    3729:  - Refined Requirement Pool: typing.List[typing.List[str]]  # List down the top 5 to 7 requirements with their priority (P0, P1, P2). Cover both legacy content and incremental content. Retain content unrelated to incremental development
    3730:  - UI Design draft: <class 'str'>  # Provide a simple description of UI elements, functions, style, and layout.
    3731:  - Anything UNCLEAR: <class 'str'>  # Mention any aspects of the project that are unclear and try to clarify them.
    3732:  ## constraint
    3733:  Language: Please use the same language as Human INPUT.
    3734:  Format: output wrapped inside [CONTENT][/CONTENT] like format example, nothing else.
    3735:  ## action
    3736:  Follow instructions of nodes, generate output and make sure it follows the format example.
    3737:  2024-04-22 11:33:59.679 | ERROR    | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 1.735(s), this was the 3rd time calling it. exp: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    3847:  - Refined Requirement Pool: typing.List[typing.List[str]]  # List down the top 5 to 7 requirements with their priority (P0, P1, P2). Cover both legacy content and incremental content. Retain content unrelated to incremental development
    3848:  - UI Design draft: <class 'str'>  # Provide a simple description of UI elements, functions, style, and layout.
    3849:  - Anything UNCLEAR: <class 'str'>  # Mention any aspects of the project that are unclear and try to clarify them.
    3850:  ## constraint
    3851:  Language: Please use the same language as Human INPUT.
    3852:  Format: output wrapped inside [CONTENT][/CONTENT] like format example, nothing else.
    3853:  ## action
    3854:  Follow instructions of nodes, generate output and make sure it follows the format example.
    3855:  2024-04-22 11:34:01.891 | ERROR    | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 3.947(s), this was the 4th time calling it. exp: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    3965:  - Refined Requirement Pool: typing.List[typing.List[str]]  # List down the top 5 to 7 requirements with their priority (P0, P1, P2). Cover both legacy content and incremental content. Retain content unrelated to incremental development
    3966:  - UI Design draft: <class 'str'>  # Provide a simple description of UI elements, functions, style, and layout.
    3967:  - Anything UNCLEAR: <class 'str'>  # Mention any aspects of the project that are unclear and try to clarify them.
    3968:  ## constraint
    3969:  Language: Please use the same language as Human INPUT.
    3970:  Format: output wrapped inside [CONTENT][/CONTENT] like format example, nothing else.
    3971:  ## action
    3972:  Follow instructions of nodes, generate output and make sure it follows the format example.
    3973:  2024-04-22 11:34:08.532 | ERROR    | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 10.589(s), this was the 5th time calling it. exp: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    4083:  - Refined Requirement Pool: typing.List[typing.List[str]]  # List down the top 5 to 7 requirements with their priority (P0, P1, P2). Cover both legacy content and incremental content. Retain content unrelated to incremental development
    4084:  - UI Design draft: <class 'str'>  # Provide a simple description of UI elements, functions, style, and layout.
    4085:  - Anything UNCLEAR: <class 'str'>  # Mention any aspects of the project that are unclear and try to clarify them.
    4086:  ## constraint
    4087:  Language: Please use the same language as Human INPUT.
    4088:  Format: output wrapped inside [CONTENT][/CONTENT] like format example, nothing else.
    4089:  ## action
    4090:  Follow instructions of nodes, generate output and make sure it follows the format example.
    4091:  2024-04-22 11:34:18.436 | ERROR    | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 20.493(s), this was the 6th time calling it. exp: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    4222:  curr_instr = ext_env.curr_step_instruction()
    4223:  assert ext_env.step_idx == 5
    4224:  assert "Werewolves, please open your eyes" in curr_instr["content"]
    4225:  # current step_idx = 5
    4226:  ext_env.wolf_kill_someone(wolf_name="Player10", player_name="Player4")
    4227:  ext_env.wolf_kill_someone(wolf_name="Player0", player_name="Player4")
    4228:  ext_env.wolf_kill_someone(wolf_name="Player1", player_name="Player4")
    4229:  >       assert ext_env.player_hunted == "Player4"
    4230:  E       AssertionError: assert None == 'Player4'
    4231:  E        +  where None = WerewolfExtEnv(action_space=<gymnasium.spaces.space.Space object at 0x7fa510cc66a0>, observation_space=<gymnasium.spac... player_hunted=None, player_protected=None, is_hunted_player_saved=False, player_poisoned=None, player_current_dead=[]).player_hunted
    4232:  tests/metagpt/environment/werewolf_env/test_werewolf_ext_env.py:48: AssertionError
    ...
    
    4259:  if not text:
    4260:  text = "this is blank"
    4261:  for idx in range(3):
    4262:  try:
    4263:  embedding = (
    4264:  OpenAI(api_key=config.llm.api_key).embeddings.create(input=[text], model=model).data[0].embedding
    4265:  )
    4266:  except Exception as exp:
    4267:  logger.info(f"get_embedding failed, exp: {exp}, will retry.")
    4268:  time.sleep(5)
    4269:  if not embedding:
    4270:  >           raise ValueError("get_embedding failed")
    4271:  E           ValueError: get_embedding failed
    4272:  metagpt/ext/stanford_town/utils/utils.py:64: ValueError
    4273:  ----------------------------- Captured stderr call -----------------------------
    4274:  huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
    4275:  To disable this warning, you can either:
    4276:  - Avoid using `tokenizers` before the fork if possible
    4277:  - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
    4278:  2024-04-22 11:34:25.201 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    4279:  2024-04-22 11:34:30.312 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    4280:  2024-04-22 11:34:35.421 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    ...
    
    4301:  if not text:
    4302:  text = "this is blank"
    4303:  for idx in range(3):
    4304:  try:
    4305:  embedding = (
    4306:  OpenAI(api_key=config.llm.api_key).embeddings.create(input=[text], model=model).data[0].embedding
    4307:  )
    4308:  except Exception as exp:
    4309:  logger.info(f"get_embedding failed, exp: {exp}, will retry.")
    4310:  time.sleep(5)
    4311:  if not embedding:
    4312:  >           raise ValueError("get_embedding failed")
    4313:  E           ValueError: get_embedding failed
    4314:  metagpt/ext/stanford_town/utils/utils.py:64: ValueError
    4315:  ----------------------------- Captured stderr call -----------------------------
    4316:  2024-04-22 11:34:41.295 | INFO     | metagpt.ext.stanford_town.roles.st_role:load_from:167 - Role: Isabella Rodriguez loaded role's memory from /home/runner/work/MetaGPT/MetaGPT/examples/stanford_town/storage/unittest_sim/personas/Isabella Rodriguez
    4317:  2024-04-22 11:34:41.328 | INFO     | metagpt.ext.stanford_town.roles.st_role:load_from:167 - Role: Klaus Mueller loaded role's memory from /home/runner/work/MetaGPT/MetaGPT/examples/stanford_town/storage/unittest_sim/personas/Klaus Mueller
    4318:  2024-04-22 11:34:41.329 | INFO     | metagpt.ext.stanford_town.plan.converse:agent_conversation:15 - Role: Isabella Rodriguez starts a conversation with Role: Klaus Mueller
    4319:  2024-04-22 11:34:41.329 | INFO     | metagpt.ext.stanford_town.plan.converse:agent_conversation:18 - Conv round: 0 between Isabella Rodriguez and Klaus Mueller
    4320:  2024-04-22 11:34:41.434 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    4321:  2024-04-22 11:34:46.541 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    4322:  2024-04-22 11:34:51.673 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    ...
    
    4339:  if not text:
    4340:  text = "this is blank"
    4341:  for idx in range(3):
    4342:  try:
    4343:  embedding = (
    4344:  OpenAI(api_key=config.llm.api_key).embeddings.create(input=[text], model=model).data[0].embedding
    4345:  )
    4346:  except Exception as exp:
    4347:  logger.info(f"get_embedding failed, exp: {exp}, will retry.")
    4348:  time.sleep(5)
    4349:  if not embedding:
    4350:  >           raise ValueError("get_embedding failed")
    4351:  E           ValueError: get_embedding failed
    4352:  metagpt/ext/stanford_town/utils/utils.py:64: ValueError
    4353:  ----------------------------- Captured stderr call -----------------------------
    4354:  2024-04-22 11:34:56.738 | WARNING  | metagpt.ext.stanford_town.utils.utils:copy_folder:220 - /home/runner/work/MetaGPT/MetaGPT/examples/stanford_town/storage/unittest_sim exist, start to remove.
    4355:  2024-04-22 11:34:56.899 | INFO     | metagpt.ext.stanford_town.roles.st_role:load_from:167 - Role: Isabella Rodriguez loaded role's memory from /home/runner/work/MetaGPT/MetaGPT/examples/stanford_town/storage/unittest_sim/personas/Isabella Rodriguez
    4356:  2024-04-22 11:34:56.931 | INFO     | metagpt.ext.stanford_town.roles.st_role:load_from:167 - Role: Klaus Mueller loaded role's memory from /home/runner/work/MetaGPT/MetaGPT/examples/stanford_town/storage/unittest_sim/personas/Klaus Mueller
    4357:  2024-04-22 11:34:57.046 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    4358:  2024-04-22 11:35:02.162 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    4359:  2024-04-22 11:35:07.285 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    ...
    
    4380:  if not text:
    4381:  text = "this is blank"
    4382:  for idx in range(3):
    4383:  try:
    4384:  embedding = (
    4385:  OpenAI(api_key=config.llm.api_key).embeddings.create(input=[text], model=model).data[0].embedding
    4386:  )
    4387:  except Exception as exp:
    4388:  logger.info(f"get_embedding failed, exp: {exp}, will retry.")
    4389:  time.sleep(5)
    4390:  if not embedding:
    4391:  >           raise ValueError("get_embedding failed")
    4392:  E           ValueError: get_embedding failed
    4393:  metagpt/ext/stanford_town/utils/utils.py:64: ValueError
    4394:  ----------------------------- Captured stderr call -----------------------------
    4395:  2024-04-22 11:35:12.381 | INFO     | metagpt.ext.stanford_town.roles.st_role:load_from:167 - Role: Klaus Mueller loaded role's memory from /home/runner/work/MetaGPT/MetaGPT/examples/stanford_town/storage/base_the_ville_isabella_maria_klaus/personas/Klaus Mueller
    4396:  2024-04-22 11:35:12.619 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    4397:  2024-04-22 11:35:17.738 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    4398:  2024-04-22 11:35:22.848 | INFO     | metagpt.ext.stanford_town.utils.utils:get_embedding:61 - get_embedding failed, exp: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}, will retry.
    ...
    
    4479:  return self._retry_request(
    4480:  options,
    4481:  cast_to,
    4482:  retries,
    4483:  stream=stream,
    4484:  stream_cls=stream_cls,
    4485:  response_headers=None,
    4486:  )
    4487:  raise APITimeoutError(request=request) from err
    ...
    
    4490:  return self._retry_request(
    4491:  options,
    4492:  cast_to,
    4493:  retries,
    4494:  stream=stream,
    4495:  stream_cls=stream_cls,
    4496:  response_headers=None,
    4497:  )
    4498:  raise APIConnectionError(request=request) from err
    4499:  log.debug(
    4500:  'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
    4501:  )
    4502:  try:
    4503:  response.raise_for_status()
    4504:  except httpx.HTTPStatusError as err:  # thrown on 4xx and 5xx status code
    ...
    
    4511:  err.response.headers,
    4512:  stream=stream,
    4513:  stream_cls=stream_cls,
    4514:  )
    4515:  # If the response is streamed then we need to explicitly read the response
    4516:  # to completion before attempting to access the response text.
    4517:  if not err.response.is_closed:
    4518:  err.response.read()
    4519:  >           raise self._make_status_error_from_response(err.response) from None
    4520:  E           openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    4521:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py:930: AuthenticationError
    4522:  ----------------------------- Captured stderr call -----------------------------
    4523:  2024-04-22 11:35:28.279 | ERROR    | metagpt.ext.werewolf.actions.experience_operation:validate_collection:36 - delete chroma collection: test failed, exp: Collection test does not exist.
    4524:  2024-04-22 11:35:28.324 | INFO     | metagpt.ext.werewolf.actions.experience_operation:_record_experiences_local:74 - experiences saved to /home/runner/work/MetaGPT/MetaGPT/workspace/werewolf_game/experiences/test/test_01.json
    4525:  ------------------------------ Captured log call -------------------------------
    4526:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embeddings in 0.11453287137889767 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4527:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embeddings in 1.6386023486221701 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4528:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embeddings in 3.8588830921363506 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4529:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embeddings in 0.8647899972608526 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4530:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embeddings in 0.4108548079594563 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    ...
    
    4603:  return self._retry_request(
    4604:  options,
    4605:  cast_to,
    4606:  retries,
    4607:  stream=stream,
    4608:  stream_cls=stream_cls,
    4609:  response_headers=None,
    4610:  )
    4611:  raise APITimeoutError(request=request) from err
    ...
    
    4614:  return self._retry_request(
    4615:  options,
    4616:  cast_to,
    4617:  retries,
    4618:  stream=stream,
    4619:  stream_cls=stream_cls,
    4620:  response_headers=None,
    4621:  )
    4622:  raise APIConnectionError(request=request) from err
    4623:  log.debug(
    4624:  'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
    4625:  )
    4626:  try:
    4627:  response.raise_for_status()
    4628:  except httpx.HTTPStatusError as err:  # thrown on 4xx and 5xx status code
    ...
    
    4635:  err.response.headers,
    4636:  stream=stream,
    4637:  stream_cls=stream_cls,
    4638:  )
    4639:  # If the response is streamed then we need to explicitly read the response
    4640:  # to completion before attempting to access the response text.
    4641:  if not err.response.is_closed:
    4642:  err.response.read()
    4643:  >           raise self._make_status_error_from_response(err.response) from None
    4644:  E           openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    4645:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py:930: AuthenticationError
    4646:  ------------------------------ Captured log call -------------------------------
    4647:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 0.3119572443946952 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4648:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 1.3546945737008176 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4649:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 3.8326913528235838 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4650:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 3.1732355321330186 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4651:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 11.440235280791494 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    ...
    
    4726:  return self._retry_request(
    4727:  options,
    4728:  cast_to,
    4729:  retries,
    4730:  stream=stream,
    4731:  stream_cls=stream_cls,
    4732:  response_headers=None,
    4733:  )
    4734:  raise APITimeoutError(request=request) from err
    ...
    
    4737:  return self._retry_request(
    4738:  options,
    4739:  cast_to,
    4740:  retries,
    4741:  stream=stream,
    4742:  stream_cls=stream_cls,
    4743:  response_headers=None,
    4744:  )
    4745:  raise APIConnectionError(request=request) from err
    4746:  log.debug(
    4747:  'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
    4748:  )
    4749:  try:
    4750:  response.raise_for_status()
    4751:  except httpx.HTTPStatusError as err:  # thrown on 4xx and 5xx status code
    ...
    
    4758:  err.response.headers,
    4759:  stream=stream,
    4760:  stream_cls=stream_cls,
    4761:  )
    4762:  # If the response is streamed then we need to explicitly read the response
    4763:  # to completion before attempting to access the response text.
    4764:  if not err.response.is_closed:
    4765:  err.response.read()
    4766:  >           raise self._make_status_error_from_response(err.response) from None
    4767:  E           openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    4768:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py:930: AuthenticationError
    4769:  ------------------------------ Captured log call -------------------------------
    4770:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 0.07599647784305996 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4771:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 1.3812288318659607 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4772:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 2.508969582404178 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4773:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 0.8152104435678122 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4774:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 12.359694159219584 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    ...
    
    4848:  return self._retry_request(
    4849:  options,
    4850:  cast_to,
    4851:  retries,
    4852:  stream=stream,
    4853:  stream_cls=stream_cls,
    4854:  response_headers=None,
    4855:  )
    4856:  raise APITimeoutError(request=request) from err
    ...
    
    4859:  return self._retry_request(
    4860:  options,
    4861:  cast_to,
    4862:  retries,
    4863:  stream=stream,
    4864:  stream_cls=stream_cls,
    4865:  response_headers=None,
    4866:  )
    4867:  raise APIConnectionError(request=request) from err
    4868:  log.debug(
    4869:  'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
    4870:  )
    4871:  try:
    4872:  response.raise_for_status()
    4873:  except httpx.HTTPStatusError as err:  # thrown on 4xx and 5xx status code
    ...
    
    4880:  err.response.headers,
    4881:  stream=stream,
    4882:  stream_cls=stream_cls,
    4883:  )
    4884:  # If the response is streamed then we need to explicitly read the response
    4885:  # to completion before attempting to access the response text.
    4886:  if not err.response.is_closed:
    4887:  err.response.read()
    4888:  >           raise self._make_status_error_from_response(err.response) from None
    4889:  E           openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    4890:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py:930: AuthenticationError
    4891:  ----------------------------- Captured stderr call -----------------------------
    4892:  2024-04-22 11:36:15.284 | INFO     | tests.metagpt.ext.werewolf.actions.test_experience_operation:test_retrieve_villager_experience:146 - test retrieval with query='there are conflicts'
    4893:  ------------------------------ Captured log call -------------------------------
    4894:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 0.8502932390887963 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4895:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 1.2008232296336883 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4896:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 0.48422026026926046 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4897:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 7.8707548121173705 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    4898:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 12.522165541776314 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    ...
    
    4973:  return self._retry_request(
    4974:  options,
    4975:  cast_to,
    4976:  retries,
    4977:  stream=stream,
    4978:  stream_cls=stream_cls,
    4979:  response_headers=None,
    4980:  )
    4981:  raise APITimeoutError(request=request) from err
    ...
    
    4984:  return self._retry_request(
    4985:  options,
    4986:  cast_to,
    4987:  retries,
    4988:  stream=stream,
    4989:  stream_cls=stream_cls,
    4990:  response_headers=None,
    4991:  )
    4992:  raise APIConnectionError(request=request) from err
    4993:  log.debug(
    4994:  'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
    4995:  )
    4996:  try:
    4997:  response.raise_for_status()
    4998:  except httpx.HTTPStatusError as err:  # thrown on 4xx and 5xx status code
    ...
    
    5005:  err.response.headers,
    5006:  stream=stream,
    5007:  stream_cls=stream_cls,
    5008:  )
    5009:  # If the response is streamed then we need to explicitly read the response
    5010:  # to completion before attempting to access the response text.
    5011:  if not err.response.is_closed:
    5012:  err.response.read()
    5013:  >           raise self._make_status_error_from_response(err.response) from None
    5014:  E           openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    5015:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py:930: AuthenticationError
    5016:  ----------------------------- Captured stderr call -----------------------------
    5017:  2024-04-22 11:36:38.908 | INFO     | tests.metagpt.ext.werewolf.actions.test_experience_operation:test_retrieve_villager_experience_filtering:157 - test retrieval with excluded_version='01-10'
    5018:  ------------------------------ Captured log call -------------------------------
    5019:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 0.347203765308449 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    5020:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 0.8567560264694889 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    5021:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 1.4822835048722247 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    5022:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 4.047686317416623 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    5023:  WARNING  llama_index.embeddings.openai.utils:before_sleep.py:65 Retrying llama_index.embeddings.openai.base.get_embedding in 5.459698797780531 seconds as it raised AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}.
    ...
    
    5042:  self = <tests.mock.mock_llm.MockLLM object at 0x7fa510c18e20>
    5043:  msg_key = 'You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" ...relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1\nmoon\n---\n## Paragraph 2\napple\n'
    5044:  ask_func = <bound method MockLLM.original_aask of <tests.mock.mock_llm.MockLLM object at 0x7fa510c18e20>>
    5045:  args = ('## Paragraph 1\nmoon\n---\n## Paragraph 2\napple\n', ['You are a tool capable of determining whether two paragraphs ...n "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".'], None, None, 3, False)
    5046:  kwargs = {}
    5047:  async def _mock_rsp(self, msg_key, ask_func, *args, **kwargs):
    5048:  if msg_key not in self.rsp_cache:
    5049:  if not self.allow_open_api_call:
    5050:  >               raise ValueError(
    5051:  "In current test setting, api call is not allowed, you should properly mock your tests, "
    5052:  "or add expected api response in tests/data/rsp_cache.json. "
    5053:  f"The prompt you want for api call: {msg_key}"
    5054:  )
    5055:  E               ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1
    5056:  E               moon
    5057:  E               ---
    5058:  E               ## Paragraph 2
    5059:  E               apple
    5060:  tests/mock/mock_llm.py:114: ValueError
    ...
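The `_mock_rsp` guard above is MetaGPT's own test harness: when `allow_open_api_call` is off, every prompt must already have an entry keyed by `msg_key` in `tests/data/rsp_cache.json`. A sketch of recording such an entry (hypothetical helper; the flat prompt-to-response mapping is inferred from the `msg_key not in self.rsp_cache` check):

```python
# Hypothetical helper: persist an expected response for a prompt so MockLLM
# can answer from tests/data/rsp_cache.json instead of hitting the API.
import json
from pathlib import Path

def add_cached_rsp(msg_key: str, rsp: str,
                   cache_path: str = "tests/data/rsp_cache.json") -> None:
    path = Path(cache_path)
    cache = json.loads(path.read_text()) if path.exists() else {}
    cache[msg_key] = rsp  # keyed by the exact prompt string from the error
    path.write_text(json.dumps(cache, ensure_ascii=False, indent=2))
```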
    
    5063:  goal = 'What is the sum of 110 and 990?'
    5064:  async def create_plan_async(self, goal: str) -> Plan:
    5065:  """
    5066:  :param goal: The input to the planner based on which the plan is made
    5067:  :return: a Plan object
    5068:  """
    5069:  if goal is None:
    5070:  raise PlanningException(
    5071:  PlanningException.ErrorCodes.InvalidGoal, "Goal cannot be `None`."
    5072:  )
    5073:  self._logger.info(f"Finding the best function for achieving the goal: {goal}")
    5074:  self._context.variables.update(goal)
    5075:  generated_plan_raw = await self._planner_function.invoke_async(
    5076:  context=self._context
    5077:  )
    5078:  generated_plan_raw_str = str(generated_plan_raw)
    5079:  if not generated_plan_raw or not generated_plan_raw_str:
    5080:  self._logger.error("No plan has been generated.")
    5081:  raise PlanningException(
    5082:  PlanningException.ErrorCodes.CreatePlanError,
    5083:  "No plan has been generated.",
    5084:  )
    5085:  self._logger.info(f"Plan generated by ActionPlanner:\n{generated_plan_raw_str}")
    5086:  # Ignore additional text around JSON recursively
    5087:  json_regex = r"\{(?:[^{}]|(?R))*\}"
    5088:  generated_plan_str = regex.search(json_regex, generated_plan_raw_str)
    5089:  if not generated_plan_str:
    5090:  self._logger.error("No valid plan has been generated.")
    5091:  raise PlanningException(
    5092:  PlanningException.ErrorCodes.InvalidPlan,
    5093:  "No valid plan has been generated.",
    5094:  inner_exception=ValueError(generated_plan_raw_str),
    ...
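The `json_regex` in this excerpt relies on the third-party `regex` module: the `(?R)` recursion token matches balanced braces and is not supported by the stdlib `re` module. A quick demonstration:

```python
# (?R) recursively matches the whole pattern, so this finds the first
# balanced {...} block even when braces are nested.
import regex

text = 'Sure! Here is the plan: {"plan": {"steps": []}} Hope that helps.'
m = regex.search(r"\{(?:[^{}]|(?R))*\}", text)
print(m.group())  # {"plan": {"steps": []}}
```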
    
    5100:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/semantic_kernel/planning/action_planner/action_planner.py:122: 
    5101:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    5102:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/json/__init__.py:346: in loads
    5103:  return _default_decoder.decode(s)
    5104:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/json/decoder.py:337: in decode
    5105:  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    5106:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    5107:  self = <json.decoder.JSONDecoder object at 0x7fa5989b79a0>
    5108:  s = "{'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}"
    ...
    
    5111:  """Decode a JSON document from ``s`` (a ``str`` beginning with
    5112:  a JSON document) and return a 2-tuple of the Python
    5113:  representation and the index in ``s`` where the document ended.
    5114:  This can be used to decode a JSON document from a string that may
    5115:  have extraneous data at the end.
    5116:  """
    5117:  try:
    5118:  >           obj, end = self.scan_once(s, idx)
    5119:  E           json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
    5120:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/json/decoder.py:353: JSONDecodeError
    ...
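What reached the parser was not a plan at all but the 401 error body rendered as a Python dict repr, and `json.loads` rejects single-quoted keys outright. A small reproduction:

```python
import ast
import json

s = "{'error': {'code': 'invalid_api_key'}}"  # repr() of a dict, not JSON
try:
    json.loads(s)
except json.JSONDecodeError as e:
    print(e)  # Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

# If the string really is a dict repr, ast.literal_eval parses it safely:
print(ast.literal_eval(s)["error"]["code"])  # invalid_api_key
```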
    
    5140:  goal = 'What is the sum of 110 and 990?'
    5141:  async def create_plan_async(self, goal: str) -> Plan:
    5142:  """
    5143:  :param goal: The input to the planner based on which the plan is made
    5144:  :return: a Plan object
    5145:  """
    5146:  if goal is None:
    5147:  raise PlanningException(
    5148:  PlanningException.ErrorCodes.InvalidGoal, "Goal cannot be `None`."
    5149:  )
    5150:  self._logger.info(f"Finding the best function for achieving the goal: {goal}")
    5151:  self._context.variables.update(goal)
    5152:  generated_plan_raw = await self._planner_function.invoke_async(
    5153:  context=self._context
    5154:  )
    5155:  generated_plan_raw_str = str(generated_plan_raw)
    5156:  if not generated_plan_raw or not generated_plan_raw_str:
    5157:  self._logger.error("No plan has been generated.")
    5158:  raise PlanningException(
    5159:  PlanningException.ErrorCodes.CreatePlanError,
    5160:  "No plan has been generated.",
    5161:  )
    5162:  self._logger.info(f"Plan generated by ActionPlanner:\n{generated_plan_raw_str}")
    5163:  # Ignore additional text around JSON recursively
    5164:  json_regex = r"\{(?:[^{}]|(?R))*\}"
    5165:  generated_plan_str = regex.search(json_regex, generated_plan_raw_str)
    5166:  if not generated_plan_str:
    5167:  self._logger.error("No valid plan has been generated.")
    5168:  raise PlanningException(
    5169:  PlanningException.ErrorCodes.InvalidPlan,
    5170:  "No valid plan has been generated.",
    5171:  inner_exception=ValueError(generated_plan_raw_str),
    5172:  )
    5173:  generated_plan_str = generated_plan_str.group()
    5174:  generated_plan_str = generated_plan_str.replace('""', '"')
    5175:  try:
    5176:  generated_plan = json.loads(generated_plan_str)
    5177:  except json.decoder.JSONDecodeError as e:
    5178:  self._logger.error("Encountered an error while parsing Plan JSON.")
    5179:  self._logger.error(e)
    5180:  >           raise PlanningException(
    5181:  PlanningException.ErrorCodes.InvalidPlan,
    5182:  "Encountered an error while parsing Plan JSON.",
    5183:  )
    5184:  E           semantic_kernel.planning.planning_exception.PlanningException: (<ErrorCodes.InvalidPlan: 1>, 'Encountered an error while parsing Plan JSON.', None)
    ...
    
    5197:  # using BasicPlanner
    5198:  role.put_message(Message(content=task, cause_by=UserRequirement))
    5199:  await role._observe()
    5200:  await role._think()
    5201:  # assuming sk_agent will think he needs WriterSkill.Brainstorm and WriterSkill.Translate
    5202:  >       assert "WriterSkill.Brainstorm" in role.plan.generated_plan.result
    5203:  E       assert 'WriterSkill.Brainstorm' in ''
    5204:  E        +  where '' = SKContext(memory=NullMemory(), variables=ContextVariables(variables={'input': '', 'goal': "\n        Tomorrow is Valen...TION_PARAM_NAME_REGEX='^[0-9A-Za-z_]*$', FUNCTION_NAME_REGEX='^[0-9A-Za-z_]*$', SKILL_NAME_REGEX='^[0-9A-Za-z_]*$')}})).result
    5205:  E        +    where SKContext(memory=NullMemory(), variables=ContextVariables(variables={'input': '', 'goal': "\n        Tomorrow is Valen...TION_PARAM_NAME_REGEX='^[0-9A-Za-z_]*$', FUNCTION_NAME_REGEX='^[0-9A-Za-z_]*$', SKILL_NAME_REGEX='^[0-9A-Za-z_]*$')}})) = Prompt: \nYou are a planner for the Semantic Kernel.\nYour job is to create a properly formatted JSON plan step by step,.../platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}")).generated_plan
    5206:  E        +      where Prompt: \nYou are a planner for the Semantic Kernel.\nYour job is to create a properly formatted JSON plan step by step,.../platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}")) = SkAgent(private_context=None, private_config=None, private_llm=<metagpt.provider.openai_api.OpenAILLM object at 0x7fa5...810e580>>, import_skill=<bound method Kernel.import_skill of <semantic_kernel.kernel.Kernel object at 0x7fa4c810e580>>).plan
    5207:  tests/metagpt/planner/test_basic_planner.py:35: AssertionError
    5208:  ----------------------------- Captured stderr call -----------------------------
    5209:  2024-04-22 11:36:54.411 | INFO     | metagpt.roles.sk_agent:_think:72 - Error: (<ErrorCodes.ServiceError: 6>, "<class 'semantic_kernel.connectors.ai.open_ai.services.open_ai_chat_completion.OpenAIChatCompletion'> service failed to complete the prompt", AuthenticationError("Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}"))
    ...
    
    5254:  return await self._retry_request(
    5255:  options,
    5256:  cast_to,
    5257:  retries,
    5258:  stream=stream,
    5259:  stream_cls=stream_cls,
    5260:  response_headers=None,
    5261:  )
    5262:  raise APITimeoutError(request=request) from err
    ...
    
    5265:  return await self._retry_request(
    5266:  options,
    5267:  cast_to,
    5268:  retries,
    5269:  stream=stream,
    5270:  stream_cls=stream_cls,
    5271:  response_headers=None,
    5272:  )
    5273:  raise APIConnectionError(request=request) from err
    5274:  log.debug(
    5275:  'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
    5276:  )
    5277:  try:
    5278:  response.raise_for_status()
    5279:  except httpx.HTTPStatusError as err:  # thrown on 4xx and 5xx status code
    ...
    
    5286:  err.response.headers,
    5287:  stream=stream,
    5288:  stream_cls=stream_cls,
    5289:  )
    5290:  # If the response is streamed then we need to explicitly read the response
    5291:  # to completion before attempting to access the response text.
    5292:  if not err.response.is_closed:
    5293:  await err.response.aread()
    5294:  >           raise self._make_status_error_from_response(err.response) from None
    5295:  E           openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    5296:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py:1392: AuthenticationError
    5297:  ------------------------------ Captured log call -------------------------------
    5298:  ERROR    asyncio:base_events.py:1753 Fatal error on SSL transport
    5299:  protocol: <asyncio.sslproto.SSLProtocol object at 0x7fa4a8496a30>
    5300:  transport: <_SelectorSocketTransport closing fd=147>
    5301:  Traceback (most recent call last):
    5302:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/selector_events.py", line 916, in write
    5303:  n = self._sock.send(data)
    5304:  OSError: [Errno 9] Bad file descriptor
    5305:  During handling of the above exception, another exception occurred:
    5306:  Traceback (most recent call last):
    5307:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/sslproto.py", line 690, in _process_write_backlog
    5308:  self._transport.write(chunk)
    5309:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/selector_events.py", line 922, in write
    5310:  self._fatal_error(exc, 'Fatal write error on socket transport')
    5311:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/selector_events.py", line 717, in _fatal_error
    5312:  self._force_close(exc)
    5313:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/selector_events.py", line 729, in _force_close
    5314:  self._loop.call_soon(self._call_connection_lost, exc)
    5315:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/base_events.py", line 751, in call_soon
    5316:  self._check_closed()
    5317:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/asyncio/base_events.py", line 515, in _check_closed
    5318:  ##[error]    raise RuntimeError('Event loop is closed')
    5319:  RuntimeError: Event loop is closed
    ...
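The `RuntimeError: Event loop is closed` here is teardown noise rather than the root failure: the loop was closed while an SSL transport still had bytes to flush. A sketch of the general pattern that avoids it, closing async clients inside the running loop:

```python
# Let the HTTP client (and its SSL transports) shut down inside the loop,
# before asyncio.run() tears the loop down.
import asyncio
import httpx

async def main() -> None:
    async with httpx.AsyncClient() as client:  # closed before the loop exits
        rsp = await client.get("https://example.com")
        print(rsp.status_code)

asyncio.run(main())
```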
    
    5361:  return await self._retry_request(
    5362:  options,
    5363:  cast_to,
    5364:  retries,
    5365:  stream=stream,
    5366:  stream_cls=stream_cls,
    5367:  response_headers=None,
    5368:  )
    5369:  raise APITimeoutError(request=request) from err
    ...
    
    5372:  return await self._retry_request(
    5373:  options,
    5374:  cast_to,
    5375:  retries,
    5376:  stream=stream,
    5377:  stream_cls=stream_cls,
    5378:  response_headers=None,
    5379:  )
    5380:  raise APIConnectionError(request=request) from err
    5381:  log.debug(
    5382:  'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
    5383:  )
    5384:  try:
    5385:  response.raise_for_status()
    5386:  except httpx.HTTPStatusError as err:  # thrown on 4xx and 5xx status code
    ...
    
    5393:  err.response.headers,
    5394:  stream=stream,
    5395:  stream_cls=stream_cls,
    5396:  )
    5397:  # If the response is streamed then we need to explicitly read the response
    5398:  # to completion before attempting to access the response text.
    5399:  if not err.response.is_closed:
    5400:  await err.response.aread()
    5401:  >           raise self._make_status_error_from_response(err.response) from None
    5402:  E           openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    5403:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py:1392: AuthenticationError
    ...
    
    5446:  return await self._retry_request(
    5447:  options,
    5448:  cast_to,
    5449:  retries,
    5450:  stream=stream,
    5451:  stream_cls=stream_cls,
    5452:  response_headers=None,
    5453:  )
    5454:  raise APITimeoutError(request=request) from err
    ...
    
    5457:  return await self._retry_request(
    5458:  options,
    5459:  cast_to,
    5460:  retries,
    5461:  stream=stream,
    5462:  stream_cls=stream_cls,
    5463:  response_headers=None,
    5464:  )
    5465:  raise APIConnectionError(request=request) from err
    5466:  log.debug(
    5467:  'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
    5468:  )
    5469:  try:
    5470:  response.raise_for_status()
    5471:  except httpx.HTTPStatusError as err:  # thrown on 4xx and 5xx status code
    ...
    
    5478:  err.response.headers,
    5479:  stream=stream,
    5480:  stream_cls=stream_cls,
    5481:  )
    5482:  # If the response is streamed then we need to explicitly read the response
    5483:  # to completion before attempting to access the response text.
    5484:  if not err.response.is_closed:
    5485:  await err.response.aread()
    5486:  >           raise self._make_status_error_from_response(err.response) from None
    5487:  E           openai.AuthenticationError: Error code: 401 - {'error': {'code': 'invalid_api_key', 'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'param': None, 'type': 'invalid_request_error'}}
    5488:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py:1392: AuthenticationError
    ...
    
    5500:  (mock_azure_embedding, EmbeddingType.AZURE),
    5501:  (mock_gemini_embedding, EmbeddingType.GEMINI),
    5502:  (mock_ollama_embedding, EmbeddingType.OLLAMA),
    5503:  ],
    5504:  )
    5505:  def test_get_rag_embedding(self, mock_func, embedding_type, mocker):
    5506:  # Mock
    5507:  >       mock = mock_func(mocker)
    5508:  E       TypeError: 'staticmethod' object is not callable
    5509:  tests/metagpt/rag/factories/test_embedding.py:46: TypeError
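All six parametrized cases of `test_get_rag_embedding` (openai, azure, gemini, ollama, ...) fail identically; the remaining duplicate tracebacks are elided. The likely cause is a Python-version quirk: the parametrize list references the mock helpers from inside the class body, so pytest receives raw `staticmethod` descriptor objects, and those only became callable in Python 3.10 (this runner is on 3.9.19). A minimal reproduction with the usual workarounds:

```python
# Sketch of the 3.9 behaviour and two fixes.
class Mocks:
    @staticmethod
    def mock_openai_embedding(mocker):
        return f"mocked with {mocker}"

    # Inside the class body, `mock_openai_embedding` is still the raw
    # descriptor, which is what ends up in a parametrize list.
    cases = [mock_openai_embedding]

fn = Mocks.cases[0]
# fn("m")                # Python 3.9: TypeError: 'staticmethod' object is not callable
print(fn.__func__("m"))  # unwrap the descriptor: works on 3.9 and later
print(Mocks.mock_openai_embedding("m"))  # attribute access also works anywhere
```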
    ...
    
    5698:  self = <tests.mock.mock_llm.MockLLM object at 0x7fa4e87ad220>
    5699:  msg_key = 'You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" ...### Paragraph 1\n\nwho is tulin\nThe one who eaten a poison apple.\n---\n## Paragraph 2\nDo you have a poison apple?\n'
    5700:  ask_func = <bound method MockLLM.original_aask of <tests.mock.mock_llm.MockLLM object at 0x7fa4e87ad220>>
    5701:  args = ('## Paragraph 1\n\nwho is tulin\nThe one who eaten a poison apple.\n---\n## Paragraph 2\nDo you have a poison apple?\...n "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".'], None, None, 3, False)
    5702:  kwargs = {}
    5703:  async def _mock_rsp(self, msg_key, ask_func, *args, **kwargs):
    5704:  if msg_key not in self.rsp_cache:
    5705:  if not self.allow_open_api_call:
    5706:  >               raise ValueError(
    5707:  "In current test setting, api call is not allowed, you should properly mock your tests, "
    5708:  "or add expected api response in tests/data/rsp_cache.json. "
    5709:  f"The prompt you want for api call: {msg_key}"
    5710:  )
    5711:  E               ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1
    5712:  E               
    5713:  E               who is tulin
    5714:  E               The one who eaten a poison apple.
    5715:  E               ---
    5716:  E               ## Paragraph 2
    5717:  E               Do you have a poison apple?
    5718:  tests/mock/mock_llm.py:114: ValueError
    ...
    
    5774:  self = <tests.mock.mock_llm.MockLLM object at 0x7fa508e886a0>
    5775:  msg_key = 'You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" ...draw me an picture?\nYes, of course. What do you want me to draw\ndraw apple\n---\n## Paragraph 2\nDraw me an apple.\n'
    5776:  ask_func = <bound method MockLLM.original_aask of <tests.mock.mock_llm.MockLLM object at 0x7fa508e886a0>>
    5777:  args = ('## Paragraph 1\n\ncan you draw me an picture?\nYes, of course. What do you want me to draw\ndraw apple\n---\n## Para...n "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".'], None, None, 3, False)
    5778:  kwargs = {}
    5779:  async def _mock_rsp(self, msg_key, ask_func, *args, **kwargs):
    5780:  if msg_key not in self.rsp_cache:
    5781:  if not self.allow_open_api_call:
    5782:  >               raise ValueError(
    5783:  "In current test setting, api call is not allowed, you should properly mock your tests, "
    5784:  "or add expected api response in tests/data/rsp_cache.json. "
    5785:  f"The prompt you want for api call: {msg_key}"
    5786:  )
    5787:  E               ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1
    5788:  E               
    5789:  E               can you draw me an picture?
    5790:  E               Yes, of course. What do you want me to draw
    5791:  E               draw apple
    5792:  E               ---
    5793:  E               ## Paragraph 2
    5794:  E               Draw me an apple.
    5795:  tests/mock/mock_llm.py:114: ValueError
    ...
    
    5802:  To disable this warning, you can either:
    5803:  - Avoid using `tokenizers` before the fork if possible
    5804:  - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
    5805:  huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
    5806:  To disable this warning, you can either:
    5807:  - Avoid using `tokenizers` before the fork if possible
    5808:  - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
    5809:  ----------------------------- Captured stderr call -----------------------------
    5810:  2024-04-22 11:37:22.766 | WARNING  | metagpt.utils.redis:_connect:37 - Redis initialization has failed:'NoneType' object has no attribute 'to_url'
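The huggingface/tokenizers fork warning repeated throughout this run is benign but easy to silence; the fix the warning itself suggests is to pin the environment variable before `tokenizers` is imported, e.g. in the first lines of `tests/conftest.py`:

```python
# Set before any tokenizers import, so forked pytest workers don't have to
# disable parallelism mid-run.
import os

os.environ.setdefault("TOKENIZERS_PARALLELISM", "false")
```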
    5811:  ________________________________ test_engineer _________________________________
    5812:  self = <AsyncRetrying object at 0x7fa5686e01f0 (stop=<tenacity.stop.stop_after_attempt object at 0x7fa5686e04f0>, wait=<tenac...0x7fa57a9918e0>, before=<function before_nothing at 0x7fa57a9941f0>, after=<function after_nothing at 0x7fa57a994550>)>
    5813:  fn = <function WriteCode.write_code at 0x7fa568617dc0>
    5814:  args = (WriteCode, "\nNOTICE\nRole: You are a professional engineer; the main goal is to write google-style, elegant, modular...ing a external variable/module, make sure you import it first.\n7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.\n\n")
    5815:  kwargs = {}
    5816:  retry_state = <RetryCallState 140345707715696: attempt #6; slept for 20.77; last result: failed (ValueError In current test setting,... using a external variable/module, make sure you import it first.
    ...
    
    5839:  self = <tests.mock.mock_llm.MockLLM object at 0x7fa4e878dd00>
    5840:  msg_key = "\nNOTICE\nRole: You are a professional engineer; the main goal is to write google-style, elegant, modular, easy to re...sing a external variable/module, make sure you import it first.\n7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.\n\n"
    5841:  ask_func = <bound method MockLLM.original_aask of <tests.mock.mock_llm.MockLLM object at 0x7fa4e878dd00>>
    5842:  args = ("\nNOTICE\nRole: You are a professional engineer; the main goal is to write google-style, elegant, modular, easy to r...ule, make sure you import it first.\n7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.\n\n", None, None, None, 3, True)
    5843:  kwargs = {}
    5844:  async def _mock_rsp(self, msg_key, ask_func, *args, **kwargs):
    5845:  if msg_key not in self.rsp_cache:
    5846:  if not self.allow_open_api_call:
    5847:  >               raise ValueError(
    5848:  "In current test setting, api call is not allowed, you should properly mock your tests, "
    5849:  "or add expected api response in tests/data/rsp_cache.json. "
    5850:  f"The prompt you want for api call: {msg_key}"
    5851:  )
    5852:  E               ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    5996:  E               ## Code: smart_search_engine/index.py. Write code with triple quoto, based on the following attentions and context.
    5997:  E               1. Only One file: do your best to implement THIS ONLY ONE FILE.
    5998:  E               2. COMPLETE CODE: Your code will be part of the entire project, so please implement complete, reliable, reusable code snippets.
    5999:  E               3. Set default value: If there is any setting, ALWAYS SET A DEFAULT VALUE, ALWAYS USE STRONG TYPE AND EXPLICIT VARIABLE. AVOID circular import.
    6000:  E               4. Follow design: YOU MUST FOLLOW "Data structures and interfaces". DONT CHANGE ANY DESIGN. Do not use public member functions that do not exist in your design.
    6001:  E               5. CAREFULLY CHECK THAT YOU DONT MISS ANY NECESSARY CLASS/FUNCTION IN THIS FILE.
    6002:  E               6. Before using a external variable/module, make sure you import it first.
    6003:  E               7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.
    6004:  tests/mock/mock_llm.py:114: ValueError
    6005:  The above exception was the direct cause of the following exception:
    6006:  self = Engineer(private_context=Context(kwargs=AttrDict(), config=Config(extra_fields=None, project_path='', project_name='',...Code, WriteCode, WriteCode, WriteCode, WriteCode], summarize_todos=[], next_todo_action='SummarizeCode', n_summarize=0)
    6007:  args = (user: ,), kwargs = {}
    6008:  last_error = ValueError('In current test setting, api call is not allowed, you should properly mock your tests, or add expected api...ng a external variable/module, make sure you import it first.\n7. Write out EVERY CODE DETAIL, DON\'T LEAVE TODO.\n\n')
    6009:  name = 'builtins.ValueError'
    ...
    
    6027:  metagpt/actions/write_code.py:142: in run
    6028:  code = await self.write_code(prompt)
    6029:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py:88: in async_wrapped
    6030:  return await fn(*args, **kwargs)
    6031:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py:47: in __call__
    6032:  do = self.iter(retry_state=retry_state)
    6033:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    6034:  self = <AsyncRetrying object at 0x7fa5686e01f0 (stop=<tenacity.stop.stop_after_attempt object at 0x7fa5686e04f0>, wait=<tenac...0x7fa57a9918e0>, before=<function before_nothing at 0x7fa57a9941f0>, after=<function after_nothing at 0x7fa57a994550>)>
    6035:  retry_state = <RetryCallState 140345707715696: attempt #6; slept for 20.77; last result: failed (ValueError In current test setting,... using a external variable/module, make sure you import it first.
    6036:  7. Write out EVERY CODE DETAIL, DON'T LEAVE TODO.
    6037:  )>
    6038:  def iter(self, retry_state: "RetryCallState") -> t.Union[DoAttempt, DoSleep, t.Any]:  # noqa
    6039:  fut = retry_state.outcome
    6040:  if fut is None:
    6041:  if self.before is not None:
    6042:  self.before(retry_state)
    6043:  return DoAttempt()
    6044:  is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
    6045:  if not (is_explicit_retry or self.retry(retry_state)):
    6046:  return fut.result()
    6047:  if self.after is not None:
    6048:  self.after(retry_state)
    6049:  self.statistics["delay_since_first_attempt"] = retry_state.seconds_since_start
    6050:  if self.stop(retry_state):
    6051:  if self.retry_error_callback:
    6052:  return self.retry_error_callback(retry_state)
    6053:  retry_exc = self.retry_error_cls(fut)
    6054:  if self.reraise:
    6055:  raise retry_exc.reraise()
    6056:  >           raise retry_exc from fut.exception()
    6057:  E           tenacity.RetryError: RetryError[<Future at 0x7fa4c80dd8e0 state=finished raised ValueError>]
    6058:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/__init__.py:326: RetryError
    ...
    
    6067:  await context.repo.docs.system_design.save(rqno, content=MockMessages.system_design.content)
    6068:  await context.repo.docs.task.save(rqno, content=MockMessages.json_tasks.content)
    6069:  engineer = Engineer(context=context)
    6070:  >       rsp = await engineer.run(Message(content="", cause_by=WriteTasks))
    6071:  tests/metagpt/roles/test_engineer.py:35: 
    6072:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    6073:  self = Engineer(private_context=Context(kwargs=AttrDict(), config=Config(extra_fields=None, project_path='', project_name='',...Code, WriteCode, WriteCode, WriteCode, WriteCode], summarize_todos=[], next_todo_action='SummarizeCode', n_summarize=0)
    6074:  args = (user: ,), kwargs = {}
    6075:  last_error = ValueError('In current test setting, api call is not allowed, you should properly mock your tests, or add expected api...ng a external variable/module, make sure you import it first.\n7. Write out EVERY CODE DETAIL, DON\'T LEAVE TODO.\n\n')
    6076:  name = 'builtins.ValueError'
    6077:  async def wrapper(self, *args, **kwargs):
    6078:  try:
    6079:  return await func(self, *args, **kwargs)
    6080:  except KeyboardInterrupt as kbi:
    6081:  logger.error(f"KeyboardInterrupt: {kbi} occurs, start to serialize the project")
    ...
    
    6087:  if self.latest_observed_msg:
    6088:  logger.warning(
    6089:  "There is a exception in role's execution, in order to resume, "
    6090:  "we delete the newest role communication message in the role's memory."
    6091:  )
    6092:  # remove role newest observed msg to make it observed again
    6093:  self.rc.memory.delete(self.latest_observed_msg)
    6094:  # raise again to make it captured outside
    6095:  if isinstance(e, RetryError):
    6096:  last_error = e.last_attempt._exception
    6097:  name = any_to_str(last_error)
    6098:  if re.match(r"^openai\.", name) or re.match(r"^httpx\.", name):
    6099:  raise last_error
    ...
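The recovery hook above unwraps tenacity's `RetryError` to re-raise the underlying provider error. A standalone sketch of the same pattern (`any_to_str` is MetaGPT's helper; approximated here with a dotted class path):

```python
# Sketch: surface the real exception hidden inside tenacity's RetryError
# when it comes from the LLM provider stack.
from tenacity import RetryError

def reraise_provider_error(err: RetryError) -> None:
    last = err.last_attempt.exception()
    if last is None:
        return
    name = f"{type(last).__module__}.{type(last).__name__}"
    if name.startswith(("openai.", "httpx.")):
        raise last
```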
    
    6103:  E               result = await fn(*args, **kwargs)
    6104:  E             File "/home/runner/work/MetaGPT/MetaGPT/metagpt/actions/write_code.py", line 89, in write_code
    6105:  E               code_rsp = await self._aask(prompt)
    6106:  E             File "/home/runner/work/MetaGPT/MetaGPT/metagpt/actions/action.py", line 93, in _aask
    6107:  E               return await self.llm.aask(prompt, system_msgs)
    6108:  E             File "/home/runner/work/MetaGPT/MetaGPT/tests/mock/mock_llm.py", line 98, in aask
    6109:  E               rsp = await self._mock_rsp(msg_key, self.original_aask, msg, system_msgs, format_msgs, images, timeout, stream)
    6110:  E             File "/home/runner/work/MetaGPT/MetaGPT/tests/mock/mock_llm.py", line 114, in _mock_rsp
    6111:  E               raise ValueError(
    6112:  E           ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
    
    6284:  E             File "/home/runner/work/MetaGPT/MetaGPT/metagpt/actions/write_code.py", line 142, in run
    6285:  E               code = await self.write_code(prompt)
    6286:  E             File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    6287:  E               return await fn(*args, **kwargs)
    6288:  E             File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py", line 47, in __call__
    6289:  E               do = self.iter(retry_state=retry_state)
    6290:  E             File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in iter
    6291:  E               raise retry_exc from fut.exception()
    6292:  E           tenacity.RetryError: RetryError[<Future at 0x7fa4c80dd8e0 state=finished raised ValueError>]
    ...
    
    6334:  ________________________________ test_add_role _________________________________
    6335:  env = Environment(action_space=<gymnasium.spaces.space.Space object at 0x7fa508ed0760>, observation_space=<gymnasium.spaces....0125': {'prompt': 0.0005, 'completion': 0.0015}, 'openai/gpt-4-turbo-preview': {'prompt': 0.01, 'completion': 0.03}})))
    6336:  def test_add_role(env: Environment):
    6337:  role = ProductManager(
    6338:  name="Alice", profile="product manager", goal="create a new product", constraints="limited resources"
    6339:  )
    6340:  env.add_role(role)
    6341:  >       assert env.get_role(str(role._setting)) == role
    6342:  E       AssertionError: assert None == ProductManager(private_context=Context(kwargs=AttrDict(), config=Config(extra_fields=None, project_path='', project_na...t'>, {}), ignore_id=False), auto_run=False), recovered=False, latest_observed_msg=None, todo_action='PrepareDocuments')
    6343:  E        +  where None = <bound method Environment.get_role of Environment(action_space=<gymnasium.spaces.space.Space object at 0x7fa508ed0760>...125': {'prompt': 0.0005, 'completion': 0.0015}, 'openai/gpt-4-turbo-preview': {'prompt': 0.01, 'completion': 0.03}})))>('Alice(product manager)')
    6344:  E        +    where <bound method Environment.get_role of Environment(action_space=<gymnasium.spaces.space.Space object at 0x7fa508ed0760>...125': {'prompt': 0.0005, 'completion': 0.0015}, 'openai/gpt-4-turbo-preview': {'prompt': 0.01, 'completion': 0.03}})))> = Environment(action_space=<gymnasium.spaces.space.Space object at 0x7fa508ed0760>, observation_space=<gymnasium.spaces....0125': {'prompt': 0.0005, 'completion': 0.0015}, 'openai/gpt-4-turbo-preview': {'prompt': 0.01, 'completion': 0.03}}))).get_role
    6345:  E        +    and   'Alice(product manager)' = str('Alice(product manager)')
    6346:  E        +      where 'Alice(product manager)' = ProductManager(private_context=Context(kwargs=AttrDict(), config=Config(extra_fields=None, project_path='', project_na...t'>, {}), ignore_id=False), auto_run=False), recovered=False, latest_observed_msg=None, todo_action='PrepareDocuments')._setting
    6347:  tests/metagpt/test_environment.py:32: AssertionError
    ...
    
    6391:  return await self._retry_request(
    6392:  options,
    6393:  cast_to,
    6394:  retries,
    6395:  stream=stream,
    6396:  stream_cls=stream_cls,
    6397:  response_headers=None,
    6398:  )
    6399:  raise APITimeoutError(request=request) from err
    ...
    
    6402:  return await self._retry_request(
    6403:  options,
    6404:  cast_to,
    6405:  retries,
    6406:  stream=stream,
    6407:  stream_cls=stream_cls,
    6408:  response_headers=None,
    6409:  )
    6410:  raise APIConnectionError(request=request) from err
    6411:  log.debug(
    6412:  'HTTP Request: %s %s "%i %s"', request.method, request.url, response.status_code, response.reason_phrase
    6413:  )
    6414:  try:
    6415:  response.raise_for_status()
    6416:  except httpx.HTTPStatusError as err:  # thrown on 4xx and 5xx status code
    ...
    
    6423:  err.response.headers,
    6424:  stream=stream,
    6425:  stream_cls=stream_cls,
    6426:  )
    6427:  # If the response is streamed then we need to explicitly read the response
    6428:  # to completion before attempting to access the response text.
    6429:  if not err.response.is_closed:
    6430:  await err.response.aread()
    6431:  >           raise self._make_status_error_from_response(err.response) from None
    6432:  E           openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    6433:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py:1392: AuthenticationError
    ...
    
    6451:  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    6452:  self = <metagpt.tools.moderation.Moderation object at 0x7fa4e8423e20>
    6453:  content = ['I will kill you', 'The weather is really nice today', 'I want to hit you']
    6454:  async def amoderation(self, content: Union[str, list[str]]):
    6455:  resp = []
    6456:  if content:
    6457:  moderation_results = await self.llm.amoderation(content=content)
    6458:  >           results = moderation_results.results
    6459:  E           AttributeError: 'NoneType' object has no attribute 'results'
    6460:  metagpt/tools/moderation.py:36: AttributeError
    6461:  ---------------------------- Captured stderr setup -----------------------------
    6462:  INFO:     Shutting down
    6463:  ----------------------------- Captured stderr call -----------------------------
    6464:  INFO:     Waiting for application shutdown.
    6465:  INFO:     Application shutdown complete.
    6466:  INFO:     Finished server process [11136]
    6467:  2024-04-22 11:39:56.002 | ERROR    | metagpt.tools.moderation:amoderation:35 - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}: , 
    ...
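`amoderation` logged the 401 and returned `None` (the error handler swallows the exception), so the crash surfaces one line later as an `AttributeError` on `.results`. A defensive sketch (hypothetical guard, not the current MetaGPT code):

```python
# Hypothetical guard: treat a swallowed provider failure as "no results"
# instead of dereferencing None.
async def amoderation_safe(llm, content):
    moderation_results = await llm.amoderation(content=content)
    if moderation_results is None:  # e.g. auth error logged and swallowed upstream
        return []
    return [r.flagged for r in moderation_results.results]
```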
    
    6473:  return await self.aclient.moderations.create(input=content)
    6474:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/resources/moderations.py", line 123, in create
    6475:  return await self._post(
    6476:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py", line 1536, in post
    6477:  return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
    6478:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py", line 1315, in request
    6479:  return await self._request(
    6480:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py", line 1392, in _request
    6481:  raise self._make_status_error_from_response(err.response) from None
    6482:  openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    6483:  ------------------------------ Captured log call -------------------------------
    6484:  ERROR    metagpt.tools.moderation:moderation.py:35 Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}: , 
    ...
    
    6490:  return await self.aclient.moderations.create(input=content)
    6491:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/resources/moderations.py", line 123, in create
    6492:  return await self._post(
    6493:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py", line 1536, in post
    6494:  return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
    6495:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py", line 1315, in request
    6496:  return await self._request(
    6497:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/openai/_base_client.py", line 1392, in _request
    6498:  raise self._make_status_error_from_response(err.response) from None
    6499:  openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    ...
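The moderation failures above are a knock-on effect of the 401: the provider call in `metagpt/tools/moderation.py` fails, `llm.amoderation` apparently logs the error and hands back `None`, and the test then crashes on `.results` instead of surfacing the auth problem. A minimal defensive sketch; the method signature is taken from the log, while the `None` guard and the `.flagged` extraction are illustrative assumptions:

```python
from typing import Union


class Moderation:
    def __init__(self, llm):
        self.llm = llm

    async def amoderation(self, content: Union[str, list[str]]):
        """Moderate content, degrading gracefully when the provider call fails."""
        resp = []
        if content:
            moderation_results = await self.llm.amoderation(content=content)
            # The provider seems to log the error (the 401 above) and return
            # None, so guard before dereferencing .results (assumed behavior).
            if moderation_results is None:
                return resp
            for item in moderation_results.results:
                resp.append(item.flagged)  # .flagged extraction is illustrative
        return resp
```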
    
    6537:  rsp = await search_engine.run("metagpt", max_results, as_string)
    6538:  logger.info(rsp)
    6539:  if as_string:
    6540:  assert isinstance(rsp, str)
    6541:  else:
    6542:  assert isinstance(rsp, list)
    6543:  assert len(rsp) <= max_results
    6544:  >       await test(SearchEngine(**search_engine_config))
    6545:  E       pydantic_core._pydantic_core.ValidationError: 1 validation error for SearchEngine
    6546:  E       api_key
    6547:  E         Field required [type=missing, input_value={}, input_type=dict]
    6548:  E           For further information visit https://errors.pydantic.dev/2.5/v/missing
    6549:  tests/metagpt/tools/test_search_engine.py:72: ValidationError
    ...
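The `SearchEngine` ValidationError simply means the Bing entry in the test config carries no `api_key`. A common way to keep such tests green in CI without real credentials is to skip them when the key is absent. A sketch under stated assumptions: `BING_API_KEY` is a hypothetical environment variable name, and the import path and field names mirror the failing call rather than the actual model definition:

```python
import os

import pytest

BING_API_KEY = os.environ.get("BING_API_KEY")  # hypothetical variable name


@pytest.mark.skipif(not BING_API_KEY, reason="no Bing API key available in CI")
@pytest.mark.asyncio
async def test_search_engine_bing():
    from metagpt.tools.search_engine import SearchEngine  # import path assumed

    # Field names mirror the failing validation above; adjust to the real model.
    engine = SearchEngine(engine="bing", api_key=BING_API_KEY)
    rsp = await engine.run("metagpt", 6, False)
    assert len(rsp) <= 6
```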
    
    6577:  retcode = 1
    6578:  def run(*popenargs,
    6579:  input=None, capture_output=False, timeout=None, check=False, **kwargs):
    6580:  """Run command with arguments and return a CompletedProcess instance.
    6581:  The returned instance will have attributes args, returncode, stdout and
    6582:  stderr. By default, stdout and stderr are not captured, and those attributes
    6583:  will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them.
    6584:  If check is True and the exit code was non-zero, it raises a
    6585:  CalledProcessError. The CalledProcessError object will have the return code
    ...
    
    6590:  There is an optional argument "input", allowing you to
    6591:  pass bytes or a string to the subprocess's stdin.  If you use this argument
    6592:  you may not also use the Popen constructor's "stdin" argument, as
    6593:  it will be used internally.
    6594:  By default, all communication is in bytes, and therefore any "input" should
    6595:  be bytes, and the stdout and stderr will be bytes. If in text mode, any
    6596:  "input" should be a string, and stdout and stderr will be strings decoded
    6597:  according to locale encoding, or by "encoding" if set. Text mode is
    6598:  triggered by setting any of text, encoding, errors or universal_newlines.
    6599:  The other arguments are the same as for the Popen constructor.
    6600:  """
    6601:  if input is not None:
    6602:  if kwargs.get('stdin') is not None:
    6603:  raise ValueError('stdin and input arguments may not both be used.')
    6604:  kwargs['stdin'] = PIPE
    6605:  if capture_output:
    6606:  if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
    6607:  raise ValueError('stdout and stderr arguments may not be used '
    ...
    
    6626:  process.wait()
    6627:  raise
    6628:  except:  # Including KeyboardInterrupt, communicate handled that.
    6629:  process.kill()
    6630:  # We don't call process.wait() as .__exit__ does that for us.
    6631:  raise
    6632:  retcode = process.poll()
    6633:  if check and retcode:
    6634:  >               raise CalledProcessError(retcode, process.args,
    6635:  output=stdout, stderr=stderr)
    6636:  E               subprocess.CalledProcessError: Command '['tree', '--gitfile', '/home/runner/work/MetaGPT/MetaGPT/tests/metagpt/utils/../../../.gitignore', '/home/runner/work/MetaGPT/MetaGPT/tests']' returned non-zero exit status 1.
    6637:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/subprocess.py:528: CalledProcessError
    ...
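The `tree` failure looks environmental rather than a code bug: `--gitfile` only exists in tree 2.x, so an older binary on the runner exits non-zero before producing output. A hedged fallback sketch (the helper name and the plain-listing fallback are illustrative, not the repo's actual implementation in `metagpt/utils/tree.py`):

```python
import subprocess


def tree_with_gitignore(gitignore_path: str, root: str) -> str:
    """Render a directory tree, honoring .gitignore when the local tree supports it."""
    cmd = ["tree", "--gitfile", gitignore_path, root]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # --gitfile was added in tree v2.0; older binaries reject it.
        # Falling back to a plain listing is an illustrative choice.
        result = subprocess.run(["tree", root], capture_output=True, text=True, check=True)
    return result.stdout
```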
    
    6644:  ../../../../../opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/jupyter_client/connect.py:22
    6645:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/jupyter_client/connect.py:22: DeprecationWarning: Jupyter is migrating its paths to use standard platformdirs
    6646:  given by the platformdirs library.  To remove this warning and
    6647:  see the appropriate new directories, set the environment variable
    6648:  `JUPYTER_PLATFORM_DIRS=1` and then run `jupyter --paths`.
    6649:  The use of platformdirs will be the default in `jupyter_core` v6
    6650:  from jupyter_core.paths import jupyter_data_dir, jupyter_runtime_dir, secure_write
    6651:  tests/metagpt/actions/test_write_code.py::test_write_code_deps
    6652:  /home/runner/work/MetaGPT/MetaGPT/tests/metagpt/actions/test_write_code.py:93: PydanticDeprecatedSince20: The `json` method is deprecated; use `model_dump_json` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    6653:  coding_doc = Document(root_path="snake1", filename="game.py", content=ccontext.json())
    6654:  tests/metagpt/actions/test_write_code.py::test_write_code_deps
    6655:  tests/metagpt/actions/test_write_code.py::test_write_refined_code
    6656:  tests/metagpt/roles/di/test_data_interpreter.py::test_interpreter[False]
    6657:  tests/metagpt/roles/di/test_data_interpreter.py::test_interpreter[False]
    6658:  tests/metagpt/roles/di/test_data_interpreter.py::test_interpreter[False]
    6659:  tests/metagpt/roles/di/test_data_interpreter.py::test_interpreter[False]
    6660:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/pydantic/main.py:1005: PydanticDeprecatedSince20: The `json` method is deprecated; use `model_dump_json` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    6661:  warnings.warn('The `json` method is deprecated; use `model_dump_json` instead.', DeprecationWarning)
    6662:  tests/metagpt/actions/test_write_code.py::test_write_refined_code
    6663:  /home/runner/work/MetaGPT/MetaGPT/tests/metagpt/actions/test_write_code.py:123: PydanticDeprecatedSince20: The `json` method is deprecated; use `model_dump_json` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    ...
    
    6682:  See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.
    6683:  tests/metagpt/learn/test_text_to_image.py::test_openai_text_to_image
    6684:  tests/metagpt/tools/test_openai_text_to_image.py::test_draw
    6685:  /home/runner/work/MetaGPT/MetaGPT/metagpt/tools/openai_text_to_image.py:47: RuntimeWarning: coroutine 'AsyncMockMixin._execute_mock_call' was never awaited
    6686:  response.raise_for_status()  # raises an exception for 4xx or 5xx responses
    6687:  Enable tracemalloc to get traceback where the object was allocated.
    6688:  See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.
    6689:  tests/metagpt/learn/test_text_to_speech.py::test_azure_text_to_speech
    6690:  /home/runner/work/MetaGPT/MetaGPT/tests/metagpt/learn/test_text_to_speech.py:41: PydanticDeprecatedSince20: The copy method is deprecated; use `model_copy` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    6691:  config.copy()
    6692:  tests/metagpt/learn/test_text_to_speech.py::test_azure_text_to_speech
    6693:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/pydantic/main.py:1175: PydanticDeprecatedSince20: The `copy` method is deprecated; use `model_copy` instead. See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    ...
    
    6696:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6697:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6698:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6699:  tests/metagpt/planner/test_basic_planner.py::test_basic_planner
    6700:  tests/metagpt/utils/test_common.py::TestGetProjectRoot::test_print_members[tests.metagpt.utils.test_common]
    6701:  tests/metagpt/utils/test_common.py::TestGetProjectRoot::test_print_members[tests.metagpt.utils.test_common]
    6702:  tests/metagpt/utils/test_common.py::TestGetProjectRoot::test_print_members[tests.metagpt.utils.test_common]
    6703:  tests/metagpt/utils/test_common.py::TestGetProjectRoot::test_print_members[tests.metagpt.utils.test_common]
    6704:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/inspect.py:351: PydanticDeprecatedSince20: The `__fields__` attribute is deprecated, use `model_fields` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    6705:  value = getattr(object, key)
    6706:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6707:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6708:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6709:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6710:  tests/metagpt/planner/test_basic_planner.py::test_basic_planner
    6711:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/pydantic/main.py:952: PydanticDeprecatedSince20: The `__fields__` attribute is deprecated, use `model_fields` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    6712:  warnings.warn('The `__fields__` attribute is deprecated, use `model_fields` instead.', DeprecationWarning)
    6713:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6714:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6715:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6716:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6717:  tests/metagpt/planner/test_basic_planner.py::test_basic_planner
    6718:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/inspect.py:351: PydanticDeprecatedSince20: The `__fields_set__` attribute is deprecated, use `model_fields_set` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    6719:  value = getattr(object, key)
    6720:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6721:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6722:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6723:  tests/metagpt/planner/test_action_planner.py::test_action_planner
    6724:  tests/metagpt/planner/test_basic_planner.py::test_basic_planner
    6725:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/pydantic/main.py:961: PydanticDeprecatedSince20: The `__fields_set__` attribute is deprecated, use `model_fields_set` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    ...
    
    6765:  tests/metagpt/utils/test_session.py: 1 warning
    6766:  tests/metagpt/utils/test_text.py: 21 warnings
    6767:  tests/metagpt/utils/test_token_counter.py: 8 warnings
    6768:  tests/metagpt/utils/test_tree.py: 8 warnings
    6769:  tests/metagpt/utils/test_visual_graph_repo.py: 1 warning
    6770:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/httpx/_client.py:1417: DeprecationWarning: The 'proxies' argument is now deprecated. Use 'proxy' or 'mounts' instead.
    6771:  warnings.warn(message, DeprecationWarning)
    6772:  tests/metagpt/roles/di/test_data_interpreter.py: 24 warnings
    6773:  /home/runner/work/MetaGPT/MetaGPT/metagpt/strategy/planner.py:150: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    6774:  tasks = [task.dict(exclude=task_exclude_field) for task in self.plan.tasks]
    6775:  tests/metagpt/roles/di/test_data_interpreter.py: 24 warnings
    6776:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/pydantic/main.py:979: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    6777:  warnings.warn('The `dict` method is deprecated; use `model_dump` instead.', DeprecationWarning)
    6778:  tests/metagpt/roles/di/test_data_interpreter.py::test_interpreter[False]
    6779:  tests/metagpt/roles/di/test_data_interpreter.py::test_interpreter[False]
    6780:  tests/metagpt/roles/di/test_data_interpreter.py::test_interpreter[False]
    6781:  tests/metagpt/roles/di/test_data_interpreter.py::test_interpreter[False]
    6782:  /home/runner/work/MetaGPT/MetaGPT/metagpt/strategy/planner.py:152: PydanticDeprecatedSince20: The `json` method is deprecated; use `model_dump_json` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    ...
    
    6825:  self.variances_ = np.nanvar(X, axis=0)
    6826:  tests/metagpt/tools/libs/test_feature_engineering.py::test_variance_based_selection
    6827:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/sklearn/feature_selection/_variance_threshold.py:120: RuntimeWarning: All-NaN slice encountered
    6828:  self.variances_ = np.nanmin(compare_arr, axis=0)
    6829:  tests/metagpt/utils/test_common.py::TestGetProjectRoot::test_print_members[tests.metagpt.utils.test_common]
    6830:  tests/metagpt/utils/test_common.py::TestGetProjectRoot::test_print_members[tests.metagpt.utils.test_common]
    6831:  tests/metagpt/utils/test_common.py::TestGetProjectRoot::test_print_members[tests.metagpt.utils.test_common]
    6832:  tests/metagpt/utils/test_common.py::TestGetProjectRoot::test_print_members[tests.metagpt.utils.test_common]
    6833:  /opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/pydantic/_internal/_model_construction.py:248: PydanticDeprecatedSince20: The `__fields__` attribute is deprecated, use `model_fields` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
    ...
    
    6853:  15.55s call     tests/metagpt/ext/stanford_town/plan/test_conversation.py::test_agent_conversation
    6854:  15.51s call     tests/metagpt/tools/test_web_browser_engine_selenium.py::test_scrape_web_page[firefox-normal]
    6855:  15.50s call     tests/metagpt/ext/stanford_town/roles/test_st_role.py::test_observe
    6856:  15.40s call     tests/metagpt/ext/stanford_town/memory/test_agent_memory.py::TestAgentMemory::test_retrieve_function
    6857:  15.17s call     tests/metagpt/serialize_deserialize/test_team.py::test_team_recover_multi_roles_save
    6858:  14.10s call     tests/metagpt/roles/test_architect.py::test_architect
    6859:  12.68s call     tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestActualRetrieve::test_retrieve_villager_experience_filtering
    6860:  =========================== short test summary info ============================
    6861:  FAILED tests/metagpt/actions/di/test_write_analysis_code.py::test_debug_with_reflection - ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are an AI Python assistant. You will be given your previous implementation code of a task, runtime error results, and a hint to change the implementation appropriately. Write your full implementation.#SYSTEM_MSG_END#
    ...
    
    6866:  ```python
    6867:  def add(a: int, b: int) -> int:
    6868:  """
    6869:  Given integers a and b, return the total value of a and b.
    6870:  """
    6871:  return a - b
    6872:  ```
    6873:  user:
    6874:  Tests failed:
    6875:  assert add(1, 2) == 3 # output: -1
    6876:  assert add(1, 3) == 4 # output: -2
    6877:  [reflection on previous impl]:
    6878:  The implementation failed the test cases where the input integers are 1 and 2. The issue arises because the code does not add the two integers together, but instead subtracts the second integer from the first. To fix this issue, we should change the operator from `-` to `+` in the return statement. This will ensure that the function returns the correct output for the given input.
    6879:  [improved impl]:
    6880:  def add(a: int, b: int) -> int:
    6881:  """
    6882:  Given integers a and b, return the total value of a and b.
    6883:  """
    6884:  return a + b
    6885:  [/example]
    6886:  [context]
    6887:  [{'role': 'user', 'content': "\n# User Requirement\nread a dataset test.csv and print its head\n\n# Plan Status\n\n    ## Finished Tasks\n    ### code\n    ```python\n    ```\n\n    ### execution result\n\n    ## Current Task\n    import pandas and load the dataset from 'test.csv'.\n\n    ## Task Guidance\n    Write complete code for 'Current Task'. And avoid duplicating code from 'Finished Tasks', such as repeated import of packages, reading data, etc.\n    Specifically, \n    \n\n# Tool Info\n\n\n# Constraints\n- Take on Current Task if it is in Plan Status, otherwise, tackle User Requirement directly.\n- Ensure the output new code is executable in the same Jupyter notebook as the previous executed code.\n- Always prioritize using pre-defined tools for the same functionality.\n\n# Output\nWhile some concise thoughts are helpful, code is absolutely required. Always output one and only one code block in your response. Output code in the following format:\n```python\nyour code\n```\n"}, {'role': 'assistant', 'content': "import pandas as pd\ndata = pd.read_excel('test.csv')\ndata"}, {'role': 'user', 'content': '\n    Traceback (most recent call last):\n        File "<stdin>", line 2, in <module>\n        File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 478, in read_excel\n            io = ExcelFile(io, storage_options=storage_options, engine=engine)\n        File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 1500, in __init__\n            raise ValueError(\n        ValueError: Excel file format cannot be determined, you must specify an engine manually.\n    '}]
    ...
    
    6889:  [assistant: import pandas as pd
    6890:  data = pd.read_excel('test.csv')
    6891:  data, user: 
    6892:  Traceback (most recent call last):
    6893:  File "<stdin>", line 2, in <module>
    6894:  File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 478, in read_excel
    6895:  io = ExcelFile(io, storage_options=storage_options, engine=engine)
    6896:  File "/Users/gary/miniconda3/envs/py39_scratch/lib/python3.9/site-packages/pandas/io/excel/_base.py", line 1500, in __init__
    6897:  raise ValueError(
    6898:  ValueError: Excel file format cannot be determined, you must specify an engine manually.
    6899:  ]
    6900:  [instruction]
    6901:  Analyze your previous code and error in [context] step by step, provide me with improved method and code. Remember to follow [context] requirement. Don't forget to write code for steps behind the error step.
    6902:  Output a json following the format:
    6903:  ```json
    6904:  {
    6905:  "reflection": str = "Reflection on previous implementation",
    6906:  "improved_impl": str = "Refined code after reflection.",
    6907:  }
    6908:  ```
    6909:  FAILED tests/metagpt/actions/test_write_code.py::test_write_code_deps - tenacity.RetryError: RetryError[<Future at 0x7fa510874310 state=finished raised ValueError>]
    6910:  FAILED tests/metagpt/actions/test_write_code.py::test_get_codes - AssertionError: assert ''
    6911:  FAILED tests/metagpt/actions/test_write_prd.py::test_write_prd_inc - tenacity.RetryError: RetryError[<Future at 0x7fa510586550 state=finished raised ValueError>]
    6912:  FAILED tests/metagpt/environment/werewolf_env/test_werewolf_ext_env.py::test_werewolf_ext_env - AssertionError: assert None == 'Player4'
    6913:  +  where None = WerewolfExtEnv(action_space=<gymnasium.spaces.space.Space object at 0x7fa510cc66a0>, observation_space=<gymnasium.spac... player_hunted=None, player_protected=None, is_hunted_player_saved=False, player_poisoned=None, player_current_dead=[]).player_hunted
    6914:  FAILED tests/metagpt/ext/stanford_town/memory/test_agent_memory.py::TestAgentMemory::test_retrieve_function - ValueError: get_embedding failed
    6915:  FAILED tests/metagpt/ext/stanford_town/plan/test_conversation.py::test_agent_conversation - ValueError: get_embedding failed
    6916:  FAILED tests/metagpt/ext/stanford_town/plan/test_st_plan.py::test_should_react - ValueError: get_embedding failed
    6917:  FAILED tests/metagpt/ext/stanford_town/roles/test_st_role.py::test_observe - ValueError: get_embedding failed
    6918:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestExperiencesOperation::test_add - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    6919:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestExperiencesOperation::test_retrieve - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    6920:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestExperiencesOperation::test_retrieve_filtering - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    6921:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestActualRetrieve::test_retrieve_villager_experience - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    6922:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestActualRetrieve::test_retrieve_villager_experience_filtering - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    6923:  FAILED tests/metagpt/memory/test_brain_memory.py::test_memory_llm[llm0] - ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1
    6924:  moon
    6925:  ---
    6926:  ## Paragraph 2
    6927:  apple
    6928:  FAILED tests/metagpt/planner/test_action_planner.py::test_action_planner - semantic_kernel.planning.planning_exception.PlanningException: (<ErrorCodes.InvalidPlan: 1>, 'Encountered an error while parsing Plan JSON.', None)
    6929:  FAILED tests/metagpt/planner/test_basic_planner.py::test_basic_planner - assert 'WriterSkill.Brainstorm' in ''
    6930:  +  where '' = SKContext(memory=NullMemory(), variables=ContextVariables(variables={'input': '', 'goal': "\n        Tomorrow is Valen...TION_PARAM_NAME_REGEX='^[0-9A-Za-z_]*$', FUNCTION_NAME_REGEX='^[0-9A-Za-z_]*$', SKILL_NAME_REGEX='^[0-9A-Za-z_]*$')}})).result
    6931:  +    where SKContext(memory=NullMemory(), variables=ContextVariables(variables={'input': '', 'goal': "\n        Tomorrow is Valen...TION_PARAM_NAME_REGEX='^[0-9A-Za-z_]*$', FUNCTION_NAME_REGEX='^[0-9A-Za-z_]*$', SKILL_NAME_REGEX='^[0-9A-Za-z_]*$')}})) = Prompt: \nYou are a planner for the Semantic Kernel.\nYour job is to create a properly formatted JSON plan step by step,.../platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}")).generated_plan
    6932:  +      where Prompt: \nYou are a planner for the Semantic Kernel.\nYour job is to create a properly formatted JSON plan step by step,.../platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}")) = SkAgent(private_context=None, private_config=None, private_llm=<metagpt.provider.openai_api.OpenAILLM object at 0x7fa5...810e580>>, import_skill=<bound method Kernel.import_skill of <semantic_kernel.kernel.Kernel object at 0x7fa4c810e580>>).plan
    6933:  FAILED tests/metagpt/provider/test_openai.py::test_text_to_speech - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    6934:  FAILED tests/metagpt/provider/test_openai.py::test_speech_to_text - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    6935:  FAILED tests/metagpt/provider/test_openai.py::test_gen_image - openai.AuthenticationError: Error code: 401 - {'error': {'code': 'invalid_api_key', 'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'param': None, 'type': 'invalid_request_error'}}
    6936:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func0-LLMType.OPENAI] - TypeError: 'staticmethod' object is not callable
    6937:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func1-LLMType.AZURE] - TypeError: 'staticmethod' object is not callable
    6938:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func2-EmbeddingType.OPENAI] - TypeError: 'staticmethod' object is not callable
    6939:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func3-EmbeddingType.AZURE] - TypeError: 'staticmethod' object is not callable
    6940:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func4-EmbeddingType.GEMINI] - TypeError: 'staticmethod' object is not callable
    6941:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func5-EmbeddingType.OLLAMA] - TypeError: 'staticmethod' object is not callable
    6942:  FAILED tests/metagpt/roles/test_assistant.py::test_run - ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1
    6943:  who is tulin
    6944:  The one who eaten a poison apple.
    6945:  ---
    6946:  ## Paragraph 2
    6947:  Do you have a poison apple?
    6948:  FAILED tests/metagpt/roles/test_assistant.py::test_memory[memory0] - ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1
    6949:  can you draw me an picture?
    6950:  Yes, of course. What do you want me to draw
    6951:  draw apple
    6952:  ---
    6953:  ## Paragraph 2
    6954:  Draw me an apple.
    6955:  FAILED tests/metagpt/roles/test_engineer.py::test_engineer - Exception: Traceback (most recent call last):
    ...
    
    6957:  result = await fn(*args, **kwargs)
    6958:  File "/home/runner/work/MetaGPT/MetaGPT/metagpt/actions/write_code.py", line 89, in write_code
    6959:  code_rsp = await self._aask(prompt)
    6960:  File "/home/runner/work/MetaGPT/MetaGPT/metagpt/actions/action.py", line 93, in _aask
    6961:  return await self.llm.aask(prompt, system_msgs)
    6962:  File "/home/runner/work/MetaGPT/MetaGPT/tests/mock/mock_llm.py", line 98, in aask
    6963:  rsp = await self._mock_rsp(msg_key, self.original_aask, msg, system_msgs, format_msgs, images, timeout, stream)
    6964:  File "/home/runner/work/MetaGPT/MetaGPT/tests/mock/mock_llm.py", line 114, in _mock_rsp
    6965:  raise ValueError(
    6966:  ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: 
    ...
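As the traceback shows, `tests/mock/mock_llm.py` enforces a hard gate: any prompt without a cached response in `tests/data/rsp_cache.json` raises instead of hitting the network. For a one-off test, patching the ask path can be simpler than extending the cache. A sketch assuming `pytest-mock` and `pytest-asyncio` are installed; the patch target path is illustrative:

```python
from unittest.mock import AsyncMock

import pytest


@pytest.mark.asyncio
async def test_write_code_offline(mocker):  # `mocker` comes from pytest-mock
    # Patch target is illustrative; point it at the _aask the action really calls.
    mocker.patch(
        "metagpt.actions.action.Action._aask",
        new=AsyncMock(return_value="print('ok')  # fake LLM reply"),
    )
    # ...construct and run the action here; no request ever leaves the process.
```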
    
    7112:  File "/home/runner/work/MetaGPT/MetaGPT/metagpt/actions/write_code.py", line 142, in run
    7113:  code = await self.write_code(prompt)
    7114:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    7115:  return await fn(*args, **kwargs)
    7116:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/_asyncio.py", line 47, in __call__
    7117:  do = self.iter(retry_state=retry_state)
    7118:  File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in iter
    7119:  raise retry_exc from fut.exception()
    7120:  tenacity.RetryError: RetryError[<Future at 0x7fa4c80dd8e0 state=finished raised ValueError>]
    7121:  FAILED tests/metagpt/test_environment.py::test_add_role - AssertionError: assert None == ProductManager(private_context=Context(kwargs=AttrDict(), config=Config(extra_fields=None, project_path='', project_na...t'>, {}), ignore_id=False), auto_run=False), recovered=False, latest_observed_msg=None, todo_action='PrepareDocuments')
    7122:  +  where None = <bound method Environment.get_role of Environment(action_space=<gymnasium.spaces.space.Space object at 0x7fa508ed0760>...125': {'prompt': 0.0005, 'completion': 0.0015}, 'openai/gpt-4-turbo-preview': {'prompt': 0.01, 'completion': 0.03}})))>('Alice(product manager)')
    7123:  +    where <bound method Environment.get_role of Environment(action_space=<gymnasium.spaces.space.Space object at 0x7fa508ed0760>...125': {'prompt': 0.0005, 'completion': 0.0015}, 'openai/gpt-4-turbo-preview': {'prompt': 0.01, 'completion': 0.03}})))> = Environment(action_space=<gymnasium.spaces.space.Space object at 0x7fa508ed0760>, observation_space=<gymnasium.spaces....0125': {'prompt': 0.0005, 'completion': 0.0015}, 'openai/gpt-4-turbo-preview': {'prompt': 0.01, 'completion': 0.03}}))).get_role
    7124:  +    and   'Alice(product manager)' = str('Alice(product manager)')
    7125:  +      where 'Alice(product manager)' = ProductManager(private_context=Context(kwargs=AttrDict(), config=Config(extra_fields=None, project_path='', project_na...t'>, {}), ignore_id=False), auto_run=False), recovered=False, latest_observed_msg=None, todo_action='PrepareDocuments')._setting
    7126:  FAILED tests/metagpt/test_llm.py::test_llm_acompletion - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    7127:  FAILED tests/metagpt/tools/test_moderation.py::test_amoderation[content0] - AttributeError: 'NoneType' object has no attribute 'results'
    7128:  FAILED tests/metagpt/tools/test_search_engine.py::test_search_engine[SearchEngineType.BING-None-6-False] - pydantic_core._pydantic_core.ValidationError: 1 validation error for SearchEngine
    7129:  api_key
    7130:  Field required [type=missing, input_value={}, input_type=dict]
    7131:  For further information visit https://errors.pydantic.dev/2.5/v/missing
    7132:  FAILED tests/metagpt/utils/test_tree.py::test_tree_command[/home/runner/work/MetaGPT/MetaGPT/tests/metagpt/utils/../..-/home/runner/work/MetaGPT/MetaGPT/tests/metagpt/utils/../../../.gitignore] - subprocess.CalledProcessError: Command '['tree', '--gitfile', '/home/runner/work/MetaGPT/MetaGPT/tests/metagpt/utils/../../../.gitignore', '/home/runner/work/MetaGPT/MetaGPT/tests']' returned non-zero exit status 1.
    7133:  ===== 34 failed, 600 passed, 13 skipped, 304 warnings in 811.04s (0:13:31) =====
    ...
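Among the failures summarized above, the six `'staticmethod' object is not callable` errors in `test_embedding.py` point at the interpreter version rather than the code under test: raw `staticmethod` objects only became directly callable in Python 3.10, and this job runs 3.9.19. A minimal reproduction of the difference (the factory-registry framing is illustrative):

```python
class EmbeddingFactory:
    @staticmethod
    def build():
        return "embedding"


# Normal attribute access goes through the descriptor protocol and works everywhere.
assert EmbeddingFactory.build() == "embedding"

# Pulling the raw object out of __dict__ (e.g. into a registry of factory
# functions) yields a bare staticmethod, callable only on Python >= 3.10.
raw = EmbeddingFactory.__dict__["build"]
try:
    raw()  # TypeError on 3.9, fine on 3.10+
except TypeError:
    raw = raw.__get__(None, EmbeddingFactory)  # unwrap explicitly for 3.9
assert raw() == "embedding"
```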
    
    7148:  metagpt/_compat.py                                                   15     11    27%   6-23
    7149:  metagpt/actions/__init__.py                                          40      0   100%
    7150:  metagpt/actions/action.py                                            67      1    99%   42
    7151:  metagpt/actions/action_graph.py                                      28      0   100%
    7152:  metagpt/actions/action_node.py                                      349     18    95%   214, 306, 326, 338, 429, 510, 518-522, 541, 559, 573, 578, 605-610, 627-628, 662
    7153:  metagpt/actions/action_outcls_registry.py                            17      0   100%
    7154:  metagpt/actions/action_output.py                                      8      0   100%
    7155:  metagpt/actions/add_requirement.py                                    3      0   100%
    7156:  metagpt/actions/debug_error.py                                       30      4    87%   54, 59, 66, 69
    ...
    
    7484:  metagpt/utils/stream_pipe.py                                         17     17     0%   8-40
    7485:  metagpt/utils/text.py                                                55      1    98%   31
    7486:  metagpt/utils/token_counter.py                                       65     23    65%   231-232, 234-235, 237-238, 244-245, 259-261, 282-284, 298-300, 305-312
    7487:  metagpt/utils/tree.py                                                56      4    93%   101-102, 129, 137
    7488:  metagpt/utils/visual_graph_repo.py                                   91      0   100%
    7489:  metagpt/utils/yaml_model.py                                          26      5    81%   24, 31, 35-36, 47
    7490:  -----------------------------------------------------------------------------------------------
    7491:  TOTAL                                                             18714   5505    71%
    7492:  ##[group]Run grep -E "FAILED tests|ERROR tests|[0-9]+ passed," unittest.txt
    7493:  grep -E "FAILED tests|ERROR tests|[0-9]+ passed," unittest.txt
    7494:  failed_count=$(grep -E "FAILED|ERROR" unittest.txt | wc -l)
    7495:  if [[ "$failed_count" -gt 0 ]]; then
    7496:    echo "$failed_count failed lines found! Task failed."
    ...
    
    7500:  env:
    7501:  pythonLocation: /opt/hostedtoolcache/Python/3.9.19/x64
    7502:  PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.9.19/x64/lib/pkgconfig
    7503:  Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.9.19/x64
    7504:  Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.9.19/x64
    7505:  Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.9.19/x64
    7506:  LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.9.19/x64/lib
    7507:  ##[endgroup]
    7508:  FAILED tests/metagpt/actions/di/test_write_analysis_code.py::test_debug_with_reflection - ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are an AI Python assistant. You will be given your previous implementation code of a task, runtime error results, and a hint to change the implementation appropriately. Write your full implementation.#SYSTEM_MSG_END#
    7509:  FAILED tests/metagpt/actions/test_write_code.py::test_write_code_deps - tenacity.RetryError: RetryError[<Future at 0x7fa510874310 state=finished raised ValueError>]
    7510:  FAILED tests/metagpt/actions/test_write_code.py::test_get_codes - AssertionError: assert ''
    7511:  FAILED tests/metagpt/actions/test_write_prd.py::test_write_prd_inc - tenacity.RetryError: RetryError[<Future at 0x7fa510586550 state=finished raised ValueError>]
    7512:  FAILED tests/metagpt/environment/werewolf_env/test_werewolf_ext_env.py::test_werewolf_ext_env - AssertionError: assert None == 'Player4'
    7513:  FAILED tests/metagpt/ext/stanford_town/memory/test_agent_memory.py::TestAgentMemory::test_retrieve_function - ValueError: get_embedding failed
    7514:  FAILED tests/metagpt/ext/stanford_town/plan/test_conversation.py::test_agent_conversation - ValueError: get_embedding failed
    7515:  FAILED tests/metagpt/ext/stanford_town/plan/test_st_plan.py::test_should_react - ValueError: get_embedding failed
    7516:  FAILED tests/metagpt/ext/stanford_town/roles/test_st_role.py::test_observe - ValueError: get_embedding failed
    7517:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestExperiencesOperation::test_add - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    7518:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestExperiencesOperation::test_retrieve - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    7519:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestExperiencesOperation::test_retrieve_filtering - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    7520:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestActualRetrieve::test_retrieve_villager_experience - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    7521:  FAILED tests/metagpt/ext/werewolf/actions/test_experience_operation.py::TestActualRetrieve::test_retrieve_villager_experience_filtering - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    7522:  FAILED tests/metagpt/memory/test_brain_memory.py::test_memory_llm[llm0] - ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1
    7523:  FAILED tests/metagpt/planner/test_action_planner.py::test_action_planner - semantic_kernel.planning.planning_exception.PlanningException: (<ErrorCodes.InvalidPlan: 1>, 'Encountered an error while parsing Plan JSON.', None)
    7524:  FAILED tests/metagpt/planner/test_basic_planner.py::test_basic_planner - assert 'WriterSkill.Brainstorm' in ''
    7525:  FAILED tests/metagpt/provider/test_openai.py::test_text_to_speech - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    7526:  FAILED tests/metagpt/provider/test_openai.py::test_speech_to_text - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    7527:  FAILED tests/metagpt/provider/test_openai.py::test_gen_image - openai.AuthenticationError: Error code: 401 - {'error': {'code': 'invalid_api_key', 'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'param': None, 'type': 'invalid_request_error'}}
    7528:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func0-LLMType.OPENAI] - TypeError: 'staticmethod' object is not callable
    7529:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func1-LLMType.AZURE] - TypeError: 'staticmethod' object is not callable
    7530:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func2-EmbeddingType.OPENAI] - TypeError: 'staticmethod' object is not callable
    7531:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func3-EmbeddingType.AZURE] - TypeError: 'staticmethod' object is not callable
    7532:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func4-EmbeddingType.GEMINI] - TypeError: 'staticmethod' object is not callable
    7533:  FAILED tests/metagpt/rag/factories/test_embedding.py::TestRAGEmbeddingFactory::test_get_rag_embedding[mock_func5-EmbeddingType.OLLAMA] - TypeError: 'staticmethod' object is not callable
    7534:  FAILED tests/metagpt/roles/test_assistant.py::test_run - ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1
    7535:  FAILED tests/metagpt/roles/test_assistant.py::test_memory[memory0] - ValueError: In current test setting, api call is not allowed, you should properly mock your tests, or add expected api response in tests/data/rsp_cache.json. The prompt you want for api call: You are a tool capable of determining whether two paragraphs are semantically related.Return "TRUE" if "Paragraph 1" is semantically relevant to "Paragraph 2", otherwise return "FALSE".#SYSTEM_MSG_END### Paragraph 1
    7536:  FAILED tests/metagpt/roles/test_engineer.py::test_engineer - Exception: Traceback (most recent call last):
    7537:  FAILED tests/metagpt/test_environment.py::test_add_role - AssertionError: assert None == ProductManager(private_context=Context(kwargs=AttrDict(), config=Config(extra_fields=None, project_path='', project_na...t'>, {}), ignore_id=False), auto_run=False), recovered=False, latest_observed_msg=None, todo_action='PrepareDocuments')
    7538:  FAILED tests/metagpt/test_llm.py::test_llm_acompletion - openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-xxx. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
    7539:  FAILED tests/metagpt/tools/test_moderation.py::test_amoderation[content0] - AttributeError: 'NoneType' object has no attribute 'results'
    7540:  FAILED tests/metagpt/tools/test_search_engine.py::test_search_engine[SearchEngineType.BING-None-6-False] - pydantic_core._pydantic_core.ValidationError: 1 validation error for SearchEngine
    7541:  FAILED tests/metagpt/utils/test_tree.py::test_tree_command[/home/runner/work/MetaGPT/MetaGPT/tests/metagpt/utils/../..-/home/runner/work/MetaGPT/MetaGPT/tests/metagpt/utils/../../../.gitignore] - subprocess.CalledProcessError: Command '['tree', '--gitfile', '/home/runner/work/MetaGPT/MetaGPT/tests/metagpt/utils/../../../.gitignore', '/home/runner/work/MetaGPT/MetaGPT/tests']' returned non-zero exit status 1.
    7542:  ===== 34 failed, 600 passed, 13 skipped, 304 warnings in 811.04s (0:13:31) =====
    7543:  46 failed lines found! Task failed.
    7544:  ##[error]Process completed with exit code 1.
    

    ✨ CI feedback usage guide:

    The CI feedback tool (/checks) automatically triggers when a PR has a failed check.
    The tool analyzes the failed checks and provides several kinds of feedback:

    • Failed stage
    • Failed test name
    • Failure summary
    • Relevant error logs

    In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:

    /checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
    

    where {repo_name} is the name of the repository, {run_number} is the run number of the failed check, and {job_number} is the job number of the failed check.

    Configuration options

    • enable_auto_checks_feedback - if set to true, the tool will automatically provide feedback when a check fails. Default is true.
    • excluded_checks_list - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
    • enable_help_text - if set to true, the tool will provide a help message with the feedback. Default is true.
    • persistent_comment - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
    • final_update_message - if persistent_comment is true and a previous checks comment is updated, the tool will also post a short new message: "Persistent checks updated to latest commit". Default is true.

    See more information about the checks tool in the docs.

    @codecov-commenter

    codecov-commenter commented Apr 22, 2024

    Codecov Report

    All modified and coverable lines are covered by tests ✅

    Project coverage is 70.58%. Comparing base (7e285fd) to head (9a7c195).

    ❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

    Additional details and impacted files
    @@           Coverage Diff           @@
    ##             main    #1217   +/-   ##
    =======================================
      Coverage   70.57%   70.58%           
    =======================================
      Files         314      314           
      Lines       18714    18714           
    =======================================
    + Hits        13208    13209    +1     
    + Misses       5506     5505    -1     

    ☔ View full report in Codecov by Sentry.

    @geekan geekan merged commit 7bfe2b5 into main Apr 22, 2024
    2 of 4 checks passed