More async eval tests #266

Closed
wants to merge 1 commit into from

Conversation

Contributor

Anmol6 commented Dec 9, 2023

Summary by CodeRabbit

  • New Features

    • Introduced an asynchronous version of the AI query function for improved performance.
  • Tests

    • Added new async test cases for user detail extraction to ensure reliability and efficiency.

Contributor

coderabbitai bot commented Dec 9, 2023

Walkthrough

The recent updates introduce asynchronous capabilities to the testing suite. A new ask_ai_async function has been added for asynchronous AI querying, and an async test function test_extract_async has been implemented to handle asynchronous user detail extraction. These changes suggest a shift towards non-blocking operations, likely to improve test execution efficiency.
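The async query function described in the walkthrough might look roughly like the following minimal sketch. Since the PR diff itself is not shown on this page, the client object, model name, and `UserDetail` response type are hypothetical stand-ins, not the repository's actual code:

```python
import asyncio
from dataclasses import dataclass

# Hypothetical stand-ins for the PR's real client and response model.
@dataclass
class UserDetail:
    name: str
    age: int

class StubAsyncClient:
    """Mimics an async chat-completions client without any network calls."""
    async def create(self, *, model: str, messages: list) -> UserDetail:
        await asyncio.sleep(0)  # a real client would await the API call here
        return UserDetail(name="Jason", age=25)

aclient = StubAsyncClient()

async def ask_ai_async(content: str) -> UserDetail:
    """Async counterpart of a synchronous ask_ai helper: awaits the client call."""
    return await aclient.create(
        model="some-model",
        messages=[{"role": "user", "content": content}],
    )

user = asyncio.run(ask_ai_async("Jason is 25 years old"))
print(user.name, user.age)
```

The key shift is that the function is declared with `async def` and `await`s the client, so the event loop can interleave other work while the query is in flight.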

Changes

  • .../test_entities.py: added ask_ai_async function for asynchronous execution.
  • .../test_extract_users.py: added test_extract_async, an async test function with parameters for asynchronous user detail extraction.

Poem

🐇 "In the realm of code, where tests run deep,
Async patterns now take the leap.
Queries and tests, with non-blocking cheer,
A rabbit's work, now swift and clear!" 🚀


Tips

Chat with CodeRabbit Bot (@coderabbitai)

  • If you reply to a review comment from CodeRabbit, the bot will automatically respond.
  • To engage with CodeRabbit bot directly around the specific lines of code in the PR, mention @coderabbitai in your review comment
  • Note: Review comments are made on code diffs or files, not on the PR overview.
  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Note: For conversation with the bot, please use the review comments on code diffs or files.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

Contributor

coderabbitai bot left a comment


Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between 694414e and 876589e.
Files selected for processing (2)
  • tests/openai/evals/test_entities.py (2 hunks)
  • tests/openai/evals/test_extract_users.py (1 hunks)
Additional comments: 3
tests/openai/evals/test_entities.py (2)
  • 60-75: The addition of ask_ai_async aligns with the PR's objective to introduce asynchronous testing capabilities. The function is correctly defined with the async keyword and uses await for asynchronous API calls.

  • 120-132: The addition of test_extract_async aligns with the PR's objective to enhance the testing suite with asynchronous tests. The function is correctly decorated with pytest.mark.asyncio for async test cases and pytest.mark.parametrize for parameterized testing.

tests/openai/evals/test_extract_users.py (1)
  • 54-82: The addition of the test_extract_async function aligns with the PR's objective to enhance the testing suite with asynchronous capabilities. The use of pytest.mark.parametrize with product(models, test_data, modes) ensures that the test will run for all combinations of models, data, and modes, which is good for thoroughness. The async function is correctly defined with the async keyword, and await is used before the asynchronous call to aclient.chat.completions.create, which is consistent with the async/await pattern in Python. The assertions for name and age are also correctly implemented.
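The pattern the reviewer describes, expanding `product(models, test_data, modes)` into one test case per combination and awaiting the client inside each, can be sketched as follows. The model names, test data, and extraction helper below are illustrative assumptions, not the PR's actual values; in the real test the loop body would live inside a `pytest.mark.asyncio`-decorated function and the combinations would come from `pytest.mark.parametrize`:

```python
import asyncio
from itertools import product

# Illustrative stand-ins; the PR's real models, data, and modes differ.
models = ["model-a", "model-b"]
test_data = [("Jason is 25 years old", "Jason", 25)]
modes = ["tools", "json"]

async def extract_user(model: str, text: str, mode: str) -> dict:
    """Stand-in for `await aclient.chat.completions.create(...)`."""
    await asyncio.sleep(0)  # a real client would await the network here
    return {"name": "Jason", "age": 25}

async def check_all() -> int:
    # pytest.mark.parametrize would expand these combinations into separate
    # test cases; here we iterate them directly to show the coverage.
    count = 0
    for model, (text, name, age), mode in product(models, test_data, modes):
        response = await extract_user(model, text, mode)
        assert response["name"] == name
        assert response["age"] == age
        count += 1
    return count

ran = asyncio.run(check_all())
print(ran)  # 2 models * 1 datum * 2 modes = 4 combinations
```

Parametrizing over the Cartesian product keeps each model/data/mode combination as an independently reported test, which is why the reviewer calls the approach thorough.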

@Anmol6 Anmol6 changed the title More async tests More async eval tests Dec 9, 2023
@Anmol6 Anmol6 closed this Dec 9, 2023