
feat: push the limit for open handles #7390

Merged

Conversation

@ntindle ntindle (Member) commented Jul 12, 2024

User description

Background

Changes 🏗️

PR Quality Scorecard ✨

  • Have you used the PR description template?   +2 pts
  • Is your pull request atomic, focusing on a single change?   +5 pts
  • Have you linked the GitHub issue(s) that this PR addresses?   +5 pts
  • Have you documented your changes clearly and comprehensively?   +5 pts
  • Have you changed or added a feature?   -4 pts
    • Have you added/updated corresponding documentation?   +4 pts
    • Have you added/updated corresponding integration tests?   +5 pts
  • Have you changed the behavior of AutoGPT?   -5 pts
    • Have you also run agbenchmark to verify that these changes do not regress performance?   +10 pts

PR Type

enhancement, configuration changes


Description

  • Added a new step in the GitHub Actions workflow to set the maximum open files limit on macOS runners.
  • The step runs ulimit -n unlimited to remove the cap on the number of open files, preventing failures caused by open file handle limits during CI runs.

Changes walkthrough 📝

Relevant files

Configuration changes

.github/workflows/autogpt-server-ci.yml — Set max open files limit in CI workflow for macOS (+7/-0)

  • Added a step to set the maximum open files limit on macOS runners.
  • Used ulimit -n unlimited to remove the limit on the number of open files.

    💡 PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    @ntindle ntindle requested a review from a team as a code owner July 12, 2024 12:14

    PR Description updated to latest commit (85c79f9)


    PR Reviewer Guide 🔍

    ⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Key issues to review

    Configuration Validity
    The command ulimit -n unlimited may not behave as expected on macOS runners due to system limitations and security restrictions. It's important to verify if this command effectively changes the limit in the CI environment.
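
     For context, a process can raise its soft open-files limit only up to its hard limit, and on macOS "unlimited" is often rejected for unprivileged processes. A minimal Python sketch (hypothetical, not part of this PR) of raising the limit safely from within the test process itself:

         import resource

         # Current soft/hard limits on open file descriptors for this process.
         soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

         # An unprivileged process may raise its soft limit only up to the hard
         # limit, which is why `ulimit -n unlimited` can fail on macOS runners.
         # 4096 is an arbitrary illustrative target, clamped to the hard limit.
         target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
         resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

         print(f"soft limit raised from {soft} to {target} (hard: {hard})")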

    @ntindle ntindle changed the base branch from kpczerwinski/open-1186-agent-wrapper to master July 12, 2024 12:14
    @ntindle ntindle requested a review from a team as a code owner July 12, 2024 12:14
    @ntindle ntindle requested review from kcze and majdyz and removed request for a team July 12, 2024 12:14

    PR Code Suggestions ✨

     Category: Possible issue

     Suggestion: Replace 'unlimited' file descriptor limit with a high specific value to avoid potential system issues

     Replace the ulimit -n unlimited command with a specific high value, such as ulimit -n 4096. Setting the open files limit to 'unlimited' can lead to system instability or security issues, as it allows processes to consume an unlimited number of file descriptors.

    .github/workflows/autogpt-server-ci.yml [68]

     - name: Set max open files limit
       if: runner.os == 'macOS'
       working-directory: ${{ runner.temp }}
       run: |
         ulimit -a
    -    ulimit -n unlimited
    +    ulimit -n 4096
     
     Suggestion importance [1-10]: 9

     Why: The suggestion addresses a potential system stability and security issue by replacing 'unlimited' with a high specific value, which is a safer and more controlled approach.


    netlify bot commented Jul 12, 2024

    Deploy Preview for auto-gpt-docs canceled.

     🔨 Latest commit: 5d80d7a
     🔍 Latest deploy log: https://app.netlify.com/sites/auto-gpt-docs/deploys/66912d11b2a58d000853e536


    codecov bot commented Jul 12, 2024

    Codecov Report

    All modified and coverable lines are covered by tests ✅

    Project coverage is 54.20%. Comparing base (e4cde88) to head (5d80d7a).

    Additional details and impacted files
    @@                          Coverage Diff                          @@
    ##           kpczerwinski/open-1186-agent-wrapper    #7390   +/-   ##
    =====================================================================
      Coverage                                 54.20%   54.20%           
    =====================================================================
      Files                                       122      122           
      Lines                                      6876     6876           
      Branches                                    881      881           
    =====================================================================
      Hits                                       3727     3727           
      Misses                                     3016     3016           
      Partials                                    133      133           
     Flag           Coverage Δ
     Linux          53.95% <ø> (ø)
     Windows        50.73% <ø> (ø)
     autogpt-agent  33.99% <ø> (ø)
     forge          58.63% <ø> (ø)
     macOS          53.25% <ø> (ø)

     Flags with carried forward coverage won't be shown.

    ☔ View full report in Codecov by Sentry.


    codiumai-pr-agent-pro bot commented Jul 12, 2024

    CI Failure Feedback 🧐

    (Checks updated until commit 5d80d7a)

    Action: test (3.10, windows)

    Failed stage: Run pytest with coverage [❌]

    Failed test name: test_available_blocks

    Failure summary:

    The action failed because the test test_available_blocks in the file test/block/test_block.py encountered a ValueError.

  • The error occurred due to an attempt to unpack a value that did not have the expected number of elements.
  • Specifically, the code expected to unpack two values, but only one was provided (a minimal reproduction follows the logs below).

  • Relevant error logs:
    1:  ##[group]Operating System
    2:  Microsoft Windows Server 2022
    ...
    
    563:  ============================= test session starts =============================
    564:  platform win32 -- Python 3.10.11, pytest-8.2.2, pluggy-1.5.0 -- C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\autogpt-server-Hzddpd5Y-py3.10\Scripts\python.exe
    565:  cachedir: .pytest_cache
    566:  rootdir: D:\a\AutoGPT\AutoGPT\rnd\autogpt_server
    567:  configfile: pyproject.toml
    568:  plugins: anyio-4.4.0, asyncio-0.23.7
    569:  asyncio: mode=auto
    570:  collecting ... collected 18 items
    571:  test/block/test_block.py::test_available_blocks FAILED                   [  5%]
    ...
    
    581:  test/server/test_ws_api.py::test_websocket_router_subscribe PASSED       [ 61%]
    582:  test/server/test_ws_api.py::test_websocket_router_unsubscribe PASSED     [ 66%]
    583:  test/server/test_ws_api.py::test_websocket_router_invalid_method PASSED  [ 72%]
    584:  test/server/test_ws_api.py::test_handle_subscribe_success PASSED         [ 77%]
    585:  test/server/test_ws_api.py::test_handle_subscribe_missing_data PASSED    [ 83%]
    586:  test/server/test_ws_api.py::test_handle_unsubscribe_success PASSED       [ 88%]
    587:  test/server/test_ws_api.py::test_handle_unsubscribe_missing_data PASSED  [ 94%]
    588:  test/util/test_service.py::test_service_creation PASSED                  [100%]
    589:  ================================== FAILURES ===================================
    ...
    
    608:  prefix = " " * 4 + prefix
    609:  for mock_name, mock_obj in (block.test_mock or {}).items():
    610:  log(f"{prefix} mocking {mock_name}...")
    611:  setattr(block, mock_name, mock_obj)
    612:  for input_data in block.test_input:
    613:  log(f"{prefix} in: {input_data}")
    614:  for output_name, output_data in block.execute(input_data):
    615:  if output_index >= len(block.test_output):
    616:  raise ValueError(f"{prefix} produced output more than expected")
    617:  >               ex_output_name, ex_output_data = block.test_output[output_index]
    618:  E               ValueError: not enough values to unpack (expected 2, got 1)
    619:  test\block\test_block.py:34: ValueError
    620:  ---------------------------- Captured stdout call -----------------------------
    621:  [Test-ParrotBlock] Executing 1 tests...
              [Test-ParrotBlock] in: {'input': 'Hello, World!'}
              [Test-ParrotBlock] ✅ comparing `output` vs `output`
              [Test-ParrotBlock] ✅ comparing `Hello, World!` vs `Hello, World!`
          [Test-PrintingBlock] Executing 1 tests...
          [Test-RedditGetPostsBlock] Executing 1 tests...
          [Test-RedditPostCommentBlock] No test data provided
          [Test-TextMatcherBlock] Executing 4 tests...
          [Test-TextFormatterBlock] Executing 3 tests...
          [Test-LlmCallBlock] Executing 1 tests...
          ...
          [Test-GetWikipediaSummary] Executing 1 tests...
              [Test-GetWikipediaSummary] in: {'topic': 'Artificial Intelligence'}
    622:  ------------------------------ Captured log call ------------------------------
    623:  WARNING  autogpt_server.blocks.ai:ai.py:95 LLM request: [{'role': 'system', 'content': 'Reply in json format:\n{\n  "key1": "value1",\n"key2": "value2"\n}'}, {'role': 'user', 'content': 'User prompt'}]
    624:  WARNING  autogpt_server.blocks.ai:ai.py:104 LLM attempt-0 response: {"key1": "key1Value", "key2": "key2Value"}
    625:  ============================== warnings summary ===============================
    626:  C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\autogpt-server-Hzddpd5Y-py3.10\lib\site-packages\pydantic\_internal\_generate_schema.py:273: 22 warnings
    627:  C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\autogpt-server-Hzddpd5Y-py3.10\lib\site-packages\pydantic\_internal\_generate_schema.py:273: PydanticDeprecatedSince20: `json_encoders` is deprecated. See https://docs.pydantic.dev/2.8/concepts/serialization/#custom-serializers for alternatives. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/
    628:  warnings.warn(
    629:  -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
    630:  =========================== short test summary info ===========================
    631:  FAILED test/block/test_block.py::test_available_blocks - ValueError: not enough values to unpack (expected 2, got 1)
    632:  ================= 1 failed, 17 passed, 22 warnings in 47.59s ==================
    633:  ##[error]Process completed with exit code 1.
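
    The unpack failure can be reproduced in isolation. A minimal Python sketch with hypothetical test data, mirroring the failing line in test/block/test_block.py:

        # Hypothetical data mirroring the failing pattern: each expected output
        # should be a (name, value) pair, but the second entry is a 1-tuple.
        test_output = [("output", "Hello, World!"), ("output",)]

        for output_index in range(len(test_output)):
            # The malformed second entry raises:
            # ValueError: not enough values to unpack (expected 2, got 1)
            ex_output_name, ex_output_data = test_output[output_index]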
    

    ✨ CI feedback usage guide:

    The CI feedback tool (/checks) automatically triggers when a PR has a failed check.
    The tool analyzes the failed checks and provides several types of feedback:

    • Failed stage
    • Failed test name
    • Failure summary
    • Relevant error logs

    In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:

    /checks "https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}"
    

    where {repo_name} is the name of the repository, {run_number} is the run number of the failed check, and {job_number} is the job number of the failed check.

    Configuration options

    • enable_auto_checks_feedback - if set to true, the tool will automatically provide feedback when a check fails. Default is true.
    • excluded_checks_list - a list of checks to exclude from the feedback, for example: ["check1", "check2"]. Default is an empty list.
    • enable_help_text - if set to true, the tool will provide a help message with the feedback. Default is true.
    • persistent_comment - if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.
    • final_update_message - if persistent_comment is true and updating a previous checks message, the tool will also create a new message: "Persistent checks updated to latest commit". Default is true.

    See more information about the checks tool in the docs.

    @ntindle ntindle changed the base branch from master to kpczerwinski/open-1186-agent-wrapper July 12, 2024 13:18
    @github-actions github-actions bot added size/s and removed size/xl labels Jul 12, 2024
    @kcze kcze merged commit 5ec2856 into kpczerwinski/open-1186-agent-wrapper Jul 12, 2024
    29 of 33 checks passed
    @kcze kcze deleted the ntindle/fix-ci-for-agent-wrapper branch July 12, 2024 13:27