
Link all challenges to benchmark python hook #4786

Conversation

waynehamadi
Contributor

Background

Currently, only one challenge is linked to the benchmarks.py file. All of the challenges need to be linked to benchmarks.py.

Changes

  • Changed the test to use the run_task method
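The change above can be sketched roughly as follows. This is a hypothetical illustration of routing every challenge through a single `run_task`-style hook rather than linking only one; the names `run_task` and `CHALLENGES` are illustrative, not AutoGPT's actual API.

```python
# Illustrative registry of challenges and the level each one runs at.
# In the real repo these would be discovered from the challenge definitions.
CHALLENGES = {
    "write_file": {"level": 1},
    "browse_website": {"level": 2},
}


def run_task(challenge_name: str, level: int) -> bool:
    """Stand-in for the benchmark hook: run one challenge at a given level.

    Returns True when the challenge exists and the requested level is
    within the challenge's supported range.
    """
    spec = CHALLENGES.get(challenge_name)
    return spec is not None and level <= spec["level"]


def test_all_challenges() -> None:
    # Every registered challenge goes through the same hook, so adding a
    # challenge to the registry automatically links it to the benchmark.
    for name, spec in CHALLENGES.items():
        assert run_task(name, spec["level"])
```

The point of the pattern is that the test iterates the registry instead of hard-coding one challenge, so no challenge can be silently left unlinked.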

Documentation

Test Plan

PR Quality Checklist

  • My pull request is atomic and focuses on a single change.
  • I have thoroughly tested my changes with multiple different prompts.
  • I have considered potential risks and mitigations for my changes.
  • I have documented my changes clearly and comprehensively.
  • I have not snuck in any "extra" small tweaks.
  • I have run the following commands against my code to ensure it passes our linters:
    black .
    isort .
    mypy
    autoflake --remove-all-unused-imports --recursive --ignore-init-module-imports --ignore-pass-after-docstring autogpt tests --in-place

@netlify

netlify bot commented Jun 24, 2023

Deploy Preview for auto-gpt-docs canceled.

🔨 Latest commit: 682ec76
🔍 Latest deploy log: https://app.netlify.com/sites/auto-gpt-docs/deploys/6496ea7c5df8d70008f0c734

@github-actions

This PR exceeds the recommended size of 500 lines. Please make sure you are NOT addressing multiple issues with one PR.


@Auto-GPT-Bot
Contributor

You changed AutoGPT's behaviour. The cassettes have been updated and will be merged to the submodule when this Pull Request gets merged.

@codecov

codecov bot commented Jun 24, 2023

Codecov Report

Patch coverage is unchanged; project coverage changes by +0.06% 🎉

Comparison is base (307f6e5) 70.71% compared to head (682ec76) 70.77%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #4786      +/-   ##
==========================================
+ Coverage   70.71%   70.77%   +0.06%     
==========================================
  Files          68       68              
  Lines        3326     3326              
  Branches      532      532              
==========================================
+ Hits         2352     2354       +2     
+ Misses        796      795       -1     
+ Partials      178      177       -1     

see 1 file with indirect coverage changes

☔ View full report in Codecov by Sentry.
📢 Do you have feedback about the report comment? Let us know in this issue.

benchmarks.py (resolved review thread)

@waynehamadi waynehamadi merged commit cfdb24e into Significant-Gravitas:master Jun 24, 2023
16 checks passed
patched_api_requestor: None,
monkeypatch: pytest.MonkeyPatch,
level_to_run: int,
challenge_name: str,
workspace: Workspace,
patched_make_workspace: pytest.fixture,
Member

This is not a valid type, and definitely not the return type of that fixture. It contains an empty yield statement, so the correct type here would be None.
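A minimal sketch of the point above, assuming a fixture shaped like the one in the snippet (the fixture name is taken from the code under review; the test body is hypothetical): a fixture whose body is a bare `yield` yields `None`, so the parameter consuming it should be annotated `None`, not `pytest.fixture`.

```python
import pytest


@pytest.fixture
def patched_make_workspace() -> None:
    # setup would go here
    yield  # a bare yield produces None, so the fixture's value is None
    # teardown would go here


def test_uses_fixture(patched_make_workspace: None) -> None:
    # pytest injects the fixture's yielded value, which is None here.
    assert patched_make_workspace is None
```

`pytest.fixture` is the decorator itself, not a type, so annotating a test parameter with it is invalid and a type checker would flag it.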

Member

Have you enabled type checking in your IDE? It might help prevent this kind of error.
